# agent-runtime

A Rust framework for building AI agent workflows with tools, streaming LLM responses, event tracking, and intelligent tool-loop prevention.

## Features
- Agents backed by pluggable LLM providers (OpenAI, llama.cpp / LM Studio)
- Tools — native Rust functions or external MCP servers
- Workflows — sequential, conditional, transform, and nested sub-workflow steps
- Streaming — token-by-token LLM output via channels
- Events — unified scope × type × status event stream for full observability
- Context management — pluggable history pruning (token budget, sliding window, summarization)
- Tool loop prevention — detects and short-circuits repeat tool calls
- Config — load runtime config from YAML or TOML
## Installation

```toml
[dependencies]
agent-runtime = "0.4"
tokio = { version = "1", features = ["full"] }
```

## Quick start

```rust
use agent_runtime::llm::LlamaClient;
use agent_runtime::types::AgentInput;
use agent_runtime::{Agent, AgentConfig};
use std::sync::Arc;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Arc::new(LlamaClient::new("http://localhost:8080", "llama"));

    let agent = Agent::new(
        AgentConfig::builder("assistant")
            .system_prompt("You are a helpful assistant.")
            .build(),
    )
    .with_client(client);

    let output = agent
        .execute(&AgentInput::from_text("What is 42 * 137?"))
        .await?;

    println!("{}", output.data);
    Ok(())
}
```
## Tools

```rust
use agent_runtime::tools::{CalculatorTool, ToolRegistry};
use agent_runtime::{Agent, AgentConfig};
use std::sync::Arc;

let mut registry = ToolRegistry::new();
registry.register(CalculatorTool);

let agent = Agent::new(
    AgentConfig::builder("math-bot")
        .system_prompt("Use tools to compute answers.")
        .tools(Arc::new(registry))
        .build(),
)
.with_client(client);
```
## Workflows

```rust
use agent_runtime::workflow::steps::{AgentStep, TransformStep};
use agent_runtime::{Runtime, Workflow};

let workflow = Workflow::builder()
    .add_step(Box::new(AgentStep::new(researcher_config)))
    .add_step(Box::new(TransformStep::new(
        "summarize-prompt".into(),
        |data| serde_json::json!({ "text": format!("Summarize: {}", data) }),
    )))
    .add_step(Box::new(AgentStep::new(summarizer_config)))
    .build();

let runtime = Runtime::new();
let run = runtime.execute(workflow).await;
```
## Events

```rust
use agent_runtime::{EventScope, EventType, Runtime};

let runtime = Runtime::new();
let mut rx = runtime.event_stream().subscribe();

tokio::spawn(async move {
    while let Ok(event) = rx.recv().await {
        match (event.scope, event.event_type) {
            (EventScope::LlmRequest, EventType::Progress) => {
                if let Some(chunk) = event.data.get("chunk").and_then(|c| c.as_str()) {
                    print!("{}", chunk);
                }
            }
            (EventScope::Tool, EventType::Completed) => {
                println!("✓ {}", event.component_id);
            }
            _ => {}
        }
    }
});

runtime.execute(workflow).await;
```
## MCP tools

```rust
use agent_runtime::tools::McpClient;

let mcp = McpClient::new_stdio(
    "npx",
    vec!["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)
.await?;
let tools = mcp.list_tools().await?;
```
## Configuration

```yaml
# agent-runtime.yaml
llm:
  base_url: "http://localhost:8080"
  model: "llama"

agents:
  - name: researcher
    system_prompt: "You are a research assistant."
    max_tool_iterations: 10
```

```rust
use agent_runtime::RuntimeConfig;

let config = RuntimeConfig::from_file("agent-runtime.yaml")?;
```
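Since the crate also accepts TOML, the same runtime config could be expressed as follows. This is a sketch assuming the key names map one-to-one between formats; the exact schema is whatever `RuntimeConfig` defines:

```toml
# agent-runtime.toml — assumed TOML equivalent of the YAML config
[llm]
base_url = "http://localhost:8080"
model = "llama"

[[agents]]
name = "researcher"
system_prompt = "You are a research assistant."
max_tool_iterations = 10
```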
## Project structure

```text
src/
├── agent/      Agent + AgentConfig + execution loop
├── config.rs   YAML/TOML configuration
├── context/    WorkflowContext + pruning strategies/
├── error.rs    Error types
├── event/      Event, EventStream, EventScope/Type/Status
├── llm/        LlmClient trait + provider/{llama, openai}
├── runtime/    Runtime + retry + timeout
├── tools/      Tool trait, registry, native, mcp, loop_detection, builtin
├── types.rs    AgentInput/Output, ToolResult, shared types
└── workflow/   Workflow + step + steps/{agent, transform, conditional, subworkflow}
```
## Event model

Every event has a scope (`Workflow`, `WorkflowStep`, `Agent`, `LlmRequest`, `Tool`, `System`), a type (`Started`, `Progress`, `Completed`, `Failed`, `Canceled`), and a status. Component IDs follow predictable formats: `workflow_name`, `workflow:step:N`, `agent_name`, `agent:llm:N`, `tool_name:N`, `system:subsystem`.
## Documentation

- `docs/` — full guides for events, tools, workflows, MCP, and configuration
- `crates/agent-discourse/` — multi-agent demo
## Development

```sh
cargo test
cargo clippy --workspace --all-targets -- -D warnings
```

## License

Dual-licensed under MIT or Apache-2.0 at your option.