A unified agent orchestration hub that lets you configure and manage heterogeneous AI agents via YAML and expose them through standardized protocols.
You want to use multiple AI agents together - Claude Code for refactoring, a custom analysis agent, maybe Goose for specific tasks. But each has different APIs, protocols, and integration patterns. Coordinating them means writing glue code for each combination.
AgentPool acts as a protocol bridge. Define all your agents in one YAML file - whether they're native (PydanticAI-based), external ACP agents (Claude Code, Codex, Goose), or AG-UI agents. Then expose them all through ACP or AG-UI protocols, letting them cooperate, delegate, and communicate through a unified interface.
```mermaid
flowchart TB
    subgraph AgentPool
        subgraph config[YAML Configuration]
            native[Native Agents<br/>PydanticAI]
            acp_agents[ACP Agents<br/>Claude Code, Goose, Codex]
            agui_agents[AG-UI Agents]
            workflows[Teams & Workflows]
        end
        subgraph interface[Unified Agent Interface]
            delegation[Inter-agent delegation]
            routing[Message routing]
            context[Shared context]
        end
        config --> interface
    end
    interface --> acp_server[ACP Server]
    interface --> agui_server[AG-UI Server]
    acp_server --> clients1[Zed, Toad, ACP Clients]
    agui_server --> clients2[AG-UI Clients]
```
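The shape of this architecture - one agent interface shared by several protocol frontends - can be sketched in a few lines. This is a toy model; the class names are illustrative, not AgentPool's actual API:

```python
from dataclasses import dataclass


@dataclass
class Agent:
    """Minimal stand-in for a configured agent."""
    name: str

    def run(self, prompt: str) -> str:
        return f"{self.name}: {prompt}"


class ACPFrontend:
    """Wraps an agent in an ACP-style envelope (illustrative only)."""

    def __init__(self, agent: Agent) -> None:
        self.agent = agent

    def handle(self, prompt: str) -> dict:
        return {"protocol": "acp", "output": self.agent.run(prompt)}


class AGUIFrontend:
    """Wraps the same agent in an AG-UI-style envelope (illustrative only)."""

    def __init__(self, agent: Agent) -> None:
        self.agent = agent

    def handle(self, prompt: str) -> dict:
        return {"protocol": "ag-ui", "output": self.agent.run(prompt)}


# Both frontends serve the same underlying agent.
agent = Agent("assistant")
acp_reply = ACPFrontend(agent).handle("hello")
agui_reply = AGUIFrontend(agent).handle("hello")
```

The point of the sketch: the agent is defined once, and each protocol server is a thin adapter over the same interface.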
```bash
uv tool install "agentpool[default]"
```

```yaml
# agents.yml
agents:
  assistant:
    type: native
    model: openai:gpt-4o
    system_prompt: "You are a helpful assistant."
```

```bash
# Run via CLI
agentpool run assistant "Hello!"

# Or start as ACP server (for Zed, Toad, etc.)
agentpool serve-acp agents.yml
```

The real power comes from mixing agent types:
```yaml
agents:
  # Native PydanticAI-based agent
  coordinator:
    type: native
    model: openai:gpt-4o
    toolsets:
      - type: subagent  # Can delegate to all other agents
    system_prompt: "Coordinate tasks between available agents."

  # Claude Code agent (direct integration)
  claude:
    type: claude
    description: "Claude Code for complex refactoring"

  # ACP protocol agents
  goose:
    type: acp
    provider: goose
    description: "Goose for file operations"
  codex:
    type: acp
    provider: codex
    description: "OpenAI Codex agent"

  # AG-UI protocol agent
  agui_agent:
    type: agui
    url: "http://localhost:8000"
    description: "Custom AG-UI agent"
```

Now `coordinator` can delegate work to any of these agents, and all of them are accessible through the same interface.
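What delegation through a `subagent` toolset amounts to can be sketched as a toy: the coordinator picks a registered agent by its description and forwards the task. This is illustrative only - in AgentPool the routing decision is made by the coordinator's model, not by keyword matching:

```python
class SubAgent:
    """Stand-in for a registered agent with a human-readable description."""

    def __init__(self, name: str, description: str) -> None:
        self.name = name
        self.description = description

    def run(self, task: str) -> str:
        return f"[{self.name}] done: {task}"


class Coordinator:
    """Routes a task to the first sub-agent whose description matches a keyword."""

    def __init__(self, agents: list[SubAgent]) -> None:
        self.agents = agents

    def delegate(self, task: str, keyword: str) -> str:
        for agent in self.agents:
            if keyword.lower() in agent.description.lower():
                return agent.run(task)
        raise LookupError(f"no agent matches {keyword!r}")


coordinator = Coordinator([
    SubAgent("claude", "Claude Code for complex refactoring"),
    SubAgent("goose", "Goose for file operations"),
])
result = coordinator.delegate("rename module", keyword="refactoring")
# → "[claude] done: rename module"
```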
Agents can form teams (parallel) or chains (sequential):
```yaml
teams:
  review_pipeline:
    mode: sequential
    members: [analyzer, reviewer, formatter]
  parallel_coders:
    mode: parallel
    members: [claude, goose]
```

```python
async with AgentPool("agents.yml") as pool:
    # Parallel execution
    team = pool.get_agent("analyzer") & pool.get_agent("reviewer")
    results = await team.run("Review this code")

    # Sequential pipeline
    chain = pool.get_agent("analyzer") | pool.get_agent("reviewer") | pool.get_agent("formatter")
    result = await chain.run("Process this")
```

Everything is configurable - models, tools, connections, triggers, storage:
```yaml
agents:
  analyzer:
    type: native
    model:
      type: fallback
      models: [openai:gpt-4o, anthropic:claude-sonnet-4-0]
    toolsets:
      - type: subagent
      - type: resource_access
    mcp_servers:
      - "uvx mcp-server-filesystem"
    knowledge:
      paths: ["docs/**/*.md"]
    connections:
      - type: node
        name: reporter
        filter_condition:
          type: word_match
          words: [error, warning]
```

- MCP: Full support including elicitation, sampling, and progress reporting
- ACP: Serve agents to Zed, Toad, and other ACP clients
- AG-UI: Expose agents through AG-UI protocol
- Structured Output: Define response schemas inline or import Python types
- Storage & Analytics: Track all interactions with configurable providers
- File Abstraction: UPath-backed operations work on local and remote sources
- Triggers: React to file changes, webhooks, or custom events
- Streaming TTS: Voice output support for all agents
```bash
agentpool run agent_name "prompt"          # Single run
agentpool serve-acp config.yml             # Start ACP server
agentpool watch --config agents.yml        # React to triggers
agentpool history stats --group-by model   # View analytics
```

```python
from pathlib import Path

from agentpool import AgentPool

async with AgentPool("agents.yml") as pool:
    agent = pool.get_agent("assistant")

    # Simple run
    result = await agent.run("Hello")

    # Streaming
    async for event in agent.run_stream("Tell me a story"):
        print(event)

    # Multi-modal
    result = await agent.run("Describe this", Path("image.jpg"))
```

For complete documentation, including advanced configuration, connection patterns, and the API reference, visit phil65.github.io/agentpool.