Runnable examples for ROMA — the Recursive Open Meta-Agent framework by Sentient AGI.
ROMA decomposes any goal into a DAG of subtasks and solves them through a five-stage pipeline: Atomizer → Planner → Executor → Aggregator → Verifier. This repo shows you exactly how to use it, from a single function call to a fully wired manual pipeline.
| Demo | Description |
|---|---|
| `one_shot` | `solve()` — one call, full pipeline, done |
| `pipeline` | Step through each module manually (inspect every stage) |
| `async_demo` | `async_event_solve()` — parallel subtask execution |
| `custom` | Pass any task from the CLI, get a result + DAG trace |
Prerequisites: Python 3.12+ and at least one LLM API key.

```bash
git clone https://github.com/Julian-dev28/roma-examples.git
cd roma-examples
pip install roma-dspy python-dotenv
cp .env.example .env
# open .env and add your key — XAI_API_KEY, OPENROUTER_API_KEY, OPENAI_API_KEY, etc.
python app.py
```

That opens an interactive menu. To run a specific demo directly:

```bash
python app.py --demo one_shot
python app.py --demo pipeline
python app.py --demo async_demo
python app.py --demo custom --task "Draft a go-to-market strategy for a developer tools startup"
```

ROMA uses DSPy + LiteLLM under the hood, so any LiteLLM-compatible model string works. The app auto-detects which key you have and selects models accordingly.
| Provider | Key | Example models |
|---|---|---|
| OpenRouter | `OPENROUTER_API_KEY` | `openrouter/google/gemini-2.5-flash` |
| xAI | `XAI_API_KEY` | `xai/grok-3-latest`, `xai/grok-3-mini-latest` |
| OpenAI | `OPENAI_API_KEY` | `openai/gpt-4o-mini` |
| Anthropic | `ANTHROPIC_API_KEY` | `anthropic/claude-sonnet-4-5` |
| Google | `GOOGLE_API_KEY` | `google/gemini-2.0-flash` |
| Fireworks | `FIREWORKS_API_KEY` | `fireworks_ai/.../kimi-k2` |
Only one key is required. OpenRouter is the easiest option, since a single OpenRouter key gives you access to models from all of the providers above.
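For illustration, here is a minimal sketch of the kind of auto-detection the demo app performs. `pick_model`, the mapping, and the chosen default models are all hypothetical, not the app's actual code:

```python
import os

# Hypothetical mapping from provider API-key variables to default LiteLLM
# model strings; the choices mirror the table above and are illustrative.
PROVIDER_MODELS = {
    "OPENROUTER_API_KEY": "openrouter/google/gemini-2.5-flash",
    "XAI_API_KEY": "xai/grok-3-latest",
    "OPENAI_API_KEY": "openai/gpt-4o-mini",
    "ANTHROPIC_API_KEY": "anthropic/claude-sonnet-4-5",
    "GOOGLE_API_KEY": "google/gemini-2.0-flash",
}

def pick_model(env=None):
    """Return a model string for the first provider whose key is set."""
    env = os.environ if env is None else env
    for key, model in PROVIDER_MODELS.items():
        if env.get(key):
            return model
    raise RuntimeError("No LLM API key found; add one to .env")
```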
```
┌────────────────────────────────────────────────────────┐
│                    RecursiveSolver                     │
│                                                        │
│  goal ──► Atomizer ──┬─ is_atomic = True ──► Executor  │
│         (plan/exec?) │                          │      │
│                      │ is_atomic = False        ▼      │
│                      ▼                       result    │
│                   Planner                              │
│             [sub₁, sub₂, sub₃]                         │
│                      │                                 │
│                      ▼  (parallel when no dependencies)│
│             Executor × N ──► subtask results           │
│                      │                                 │
│                      ▼                                 │
│          Aggregator ──► synthesized_result             │
│                      │                                 │
│                      ▼                                 │
│          Verifier ──► verdict (bool) + feedback        │
└────────────────────────────────────────────────────────┘
```
Each stage is independently configurable — swap models, attach tools, or skip stages entirely depending on your use case.
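The recursive control flow in the diagram can be sketched in plain Python. The stage functions below are toy stubs standing in for the real modules (splitting on `" and "` replaces LLM-driven decomposition); only the shape of the loop is meant to match ROMA, not the behavior:

```python
# Toy mock of the Atomizer → Planner → Executor → Aggregator → Verifier
# loop from the diagram above. Every function here is an illustrative stub.

def atomize(goal, depth, max_depth):
    # "Atomic" once we hit max depth or the goal has no conjunction to split.
    return depth >= max_depth or " and " not in goal

def plan(goal):
    return [part.strip() for part in goal.split(" and ")]

def execute(goal):
    return f"done: {goal}"

def aggregate(results):
    return "; ".join(results)

def verify(goal, result):
    # Every planned subgoal should be reflected in the synthesized result.
    return all(sub in result for sub in plan(goal))

def recursive_solve(goal, depth=0, max_depth=2):
    if atomize(goal, depth, max_depth):
        return execute(goal)                  # atomic: execute directly
    subgoals = plan(goal)                     # non-atomic: decompose
    results = [recursive_solve(g, depth + 1, max_depth) for g in subgoals]
    result = aggregate(results)               # synthesize subtask results
    assert verify(goal, result), "verification failed"
    return result
```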
```python
from roma_dspy import solve

result = solve("Explain the CAP theorem in plain English", max_depth=2)
print(result.result)
```

```python
from roma_dspy import Atomizer, Planner, Executor, Aggregator, Verifier

atomizer = Atomizer(model="xai/grok-3-mini-latest")
planner = Planner(model="xai/grok-3-latest")
executor = Executor(model="xai/grok-3-latest")
aggregator = Aggregator(model="xai/grok-3-latest")
verifier = Verifier(model="xai/grok-3-mini-latest")

goal = "Compare REST and GraphQL for a mobile API"

atomized = atomizer.forward(goal)
if not atomized.is_atomic:
    plan = planner.forward(goal)
    results = [executor.forward(s.goal) for s in plan.subtasks]
    filled = [s.model_copy(update={"result": r.output})
              for s, r in zip(plan.subtasks, results)]
    agg = aggregator.forward(goal, filled)
    verdict = verifier.forward(goal, agg.synthesized_result)
    print(agg.synthesized_result)
    print("Passes verification:", verdict.verdict)
```

```python
import asyncio

from roma_dspy import async_event_solve

async def main():
    node = await async_event_solve(
        "Build a week-long learning plan for Kubernetes",
        max_depth=2,
        concurrency=4,
    )
    print(node.result)

asyncio.run(main())
```

```
roma-examples/
├── app.py           # interactive demo app (4 demos, CLI flags)
├── .env.example     # copy → .env, add your key
├── requirements.txt
└── README.md
```
- ROMA repo — framework source, benchmarks, Docker setup
- roma-dspy on PyPI
- DSPy docs — the underlying LLM programming framework
- LiteLLM docs — provider/model reference
