Use @send_email() and send Pete an email with the details.
Tell the customer we will notify them when we have cars that fit their budget and model preference.
ADL Studio
A complete environment for building, testing, and refining your agents.
Reliable Agent Logic Authoring
Automated Testing
Performance Analytics
AI-Powered Suggestions
Interactive Chat Playground
Contract-Based Reliability
Get up and running quickly with Docker.
Run the latest version of the ADL Server using Docker. Be sure to replace [OPENAI_API_KEY] with your actual API key.
docker run -p 8080:8080 \
  -e ARC_AI_KEY=[OPENAI_API_KEY] \
  -e ARC_MODEL=gpt-4o \
  -e ARC_CLIENT=openai \
  ghcr.io/eclipse-lmos/adl-server:latest
Note: You may need to authenticate with GitHub Packages first. See the GitHub documentation for instructions.
Beta Notice: The ADL Studio is still in the beta phase and may not completely implement all ADL features. We appreciate your feedback and patience as we continue to improve the platform.
Why Agent Programming?
Prompt-based systems are not reliable or verifiable.
ADL defines programmable agent behavior for production systems.
Agent behavior must be explicitly defined.
ADL enforces rules, boundaries, and execution structure.
Agents execute within scoped instructions.
Behavior is constrained to reduce ambiguity and failure.
Agents maintain structured state across interactions.
Workflows become persistent and verifiable.
ADL Examples
See the difference between standard LLM behavior and ADL's controlled execution, and how ADL can improve dialog design.
LLMs often rush to complete tasks in a single turn. ADL's Steps break down interactions, creating natural, stateful conversations.
"Ask the customer for their budget and if they want to trade in their old car."
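As a hypothetical sketch, the instruction above could be expressed as a UseCase with explicit Steps. The heading syntax and section names here are illustrative assumptions, not the definitive ADL grammar:

```
### UseCase: car_purchase
#### Description
Customer wants to buy a car.
#### Steps
1. Ask the customer for their budget.
2. Ask the customer if they want to trade in their old car.
```

Each step is handled in its own turn, so the agent gathers information gradually instead of cramming every question into a single reply.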
LLMs are "lazy" and often skip invisible backend tasks if they feel the user doesn't need to know. ADL ensures tools are called every time.
"When a customer is interested, tell them to contact sales. Use inform_interest to signal our department."
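A sketch of how this instruction might look as a UseCase (the heading names and layout are assumptions; the @inform_interest tool comes from the instruction above):

```
### UseCase: customer_interest
#### Description
The customer is interested in one of our cars.
#### Solution
Use @inform_interest() to signal our sales department.
Tell the customer to contact sales.
```

Because the tool call is part of the defined Solution, the backend step is executed on every matching conversation, not just when the model "feels" it is needed.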
Instead of writing prompts and hoping the agent behaves correctly, you describe what you want in clear, structured UseCases.
ADL turns those UseCases into reliable behavior.
The agent follows your rules, keeps context, and works the way you expect.
UseCase: refund_customer
UseCase: upgrade_customer
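For illustration, a refund UseCase might be sketched like this (the section headings and the @issue_refund tool are hypothetical):

```
### UseCase: refund_customer
#### Description
The customer requests a refund for a recent purchase.
#### Solution
Ask the customer for their order number.
Use @issue_refund() to start the refund.
Confirm to the customer that the refund is on its way.
```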
ADL UseCase Format & Capabilities
ADL separates agent behavior definition from LLM prompting, providing a structured format backed by rules and conventions.
Each use case defines how the agent responds to a specific scenario.
Customer needs to reset their password.
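Sketched as a UseCase (the headings and the @send_reset_link tool are illustrative assumptions):

```
### UseCase: password_reset
#### Description
Customer needs to reset their password.
#### Solution
Ask the customer for the email address linked to their account.
Use @send_reset_link() to send them a reset link.
Tell the customer to check their inbox.
```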
Build adaptive agents that change their behavior based on context. Conditionals act like "if statements" for your prompts, letting you include or exclude instructions based on user attributes, dates, or conversation state.
ADL defines several built-in conditionals, and you can inject custom logic at runtime to handle complex business rules.
<c1, c2>
Multiple conditions (AND).
<c1 or c2>
Multiple conditions (OR).
<!condition>
Negation (e.g. <!is_weekend>).
<else>
Fallback branch. True if no other Conditional applies.
<is_weekend>
True if the current date is a weekend.
<date>
True if the current date matches the given date, e.g. <10.02.2006>.
<step_n>
True on the corresponding conversation turn (e.g. <step_1> for the first turn, <step_2> for the second).
Multi-line conditionals are also supported:
Customer asks for the current news.
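A hypothetical sketch of conditionals inside a Solution (the multi-line closing-tag syntax and headings are assumptions; only the built-in condition names come from the reference above):

```
### UseCase: current_news
#### Description
Customer asks for the current news.
#### Solution
<is_weekend>
Tell the customer that the newsroom publishes a reduced weekend edition.
Use @get_news() to fetch the weekend headlines.
</is_weekend>
<else>
Use @get_news() to fetch today's top stories and summarize them.
</else>
```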
Use markdown code blocks to insert code directly into your UseCase.
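For example, a UseCase might embed a snippet that the agent should return verbatim (a sketch; the heading names are assumptions):

````
### UseCase: list_files
#### Description
Customer asks how to list all files in a directory.
#### Solution
Reply with the following command:
```shell
ls -la
```
````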
Predefined Functions
Empower your agents to take action. Define tools inline, and the ADL Engine will handle the orchestration: calling the function securely and feeding the result back to the agent.
By explicitly declaring the tools that are required, the ADL Engine can:
Sometimes you need absolute compliance. Static responses bypass the LLM entirely for specific turns, ensuring that legal disclaimers, greetings, or fallback messages are delivered exactly as written.
In this example, the Agent will always return the text within the brackets.
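A sketch of such a static response, assuming square brackets mark the verbatim text (the headings are illustrative; <step_1> is the built-in conditional from the reference above):

```
### UseCase: greeting
#### Description
The conversation starts.
#### Solution
<step_1> [Hello! I am a virtual assistant. How can I help you today?]
```

On the first turn, the bracketed text is delivered exactly as written; the LLM never gets a chance to rephrase it.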
Conversation flows enable the author to convey decision trees in their use cases.
The UseCase above describes a simple conversation flow with multiple branches. The ADL Engine parses the UseCase and ensures that the Agent follows the defined flow, while still allowing the user to jump to other UseCases if needed.
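A hypothetical sketch of a branching flow (the headings, branch phrasing, and the @estimate_value tool are assumptions for illustration):

```
### UseCase: trade_in
#### Description
Customer wants to trade in their old car.
#### Solution
Ask the customer if their car is less than 10 years old.
If yes: ask for the mileage and use @estimate_value() to quote a price.
If no: tell the customer that we only accept cars under 10 years old.
```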
UseCase presented as a decision tree:
HTML templates can be defined in the UseCase, allowing you to create rich, styled responses that go beyond plain text. By using placeholders for dynamic content, you can ensure that your agents deliver visually appealing and contextually relevant information.
In the example above, the agent will return a styled HTML snippet with the title of the top news article. The placeholder <news.title> will be replaced with the actual title retrieved by the @get_news() tool.
Clients displaying this content should support Tailwind CSS to render the styles correctly.
The template language Mustache is used for placeholders, allowing for simple variable interpolation.
HTML comments can be used to inform the system how to extract the variables from the generated output.
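A sketch of what such a template UseCase could look like (the headings and exact markup are illustrative assumptions; @get_news() and the <news.title> placeholder follow the description above):

```
### UseCase: show_news
#### Description
Customer asks for the current news.
#### Solution
Use @get_news() to fetch the top article, then answer with:
<div class="rounded-lg border p-4 shadow">
  <h2 class="text-lg font-bold"><news.title></h2>
</div>
<!-- news.title: the title of the top news article -->
```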
Precision in language is crucial for defining agent behavior. While "should" or "can" imply optionality, "MUST" is a definitive directive. In ADL, "MUST" is a reserved keyword that signifies a mandatory requirement.
The ADL Engine extracts these "MUST" instructions to:
Connect Anywhere
The ADL Engine accepts ADL files and exposes them via standard protocols, integrating seamlessly into your existing ecosystem.
Structured agent definitions
Core execution runtime
Integration is effortless. ADL Server exposes a standard /v1/chat/completions endpoint.
This allows you to swap out your existing OpenAI API calls for ADL Server calls, instantly upgrading your application with ADL's structured capabilities without rewriting your client code.
curl https://adl-server/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KEY" \
  -d '{
    "model": "finance-agent",
    "messages": [
      { "role": "user", "content": "Analyze the Q3 report attached." }
    ]
  }'
Comprehensive API for managing, compiling, testing, and evaluating your ADLs.
Return the supported ADL version.
List ADLs that semantically match a searchTerm.
Retrieve a single ADL by ID.
Find ADL UseCases matching a conversation context.
Save ADL definitions to storage.
Auto-generate test cases.
Fetch test cases.
Run evaluation logic against inputs.
Compile ADL to an agent-ready format.
Check syntax and tool references.
Generate the full system prompt.
Get AI-suggested improvements.