- Parent README: open-source/README.md
- Child README: open-source/a0/src/README.md
- Remote: github.com/forkjoin-ai/a0
- Package name: `@a0n/a0`
A0 is a lean Nx-compatible monorepo task runner. It keeps project.json and nx.json as the workspace contract, but replaces Nx's daemon/plugin-heavy execution path with a small runtime built around:
- An atmospheric-pressure-cannon execution model: minimal pressure, immediate release, no daemon ballast
- A meta-laminar pipeline of laminar pipelines: each ready frontier becomes one GG batch, then folds into the next frontier
- Speculative frontier planning so the next GG batch is compiled while the current frontier is still running
- Wallington rotation for low-overhead lane assignment and permit reuse
- Worthington-style collapse at each ready-frontier join boundary
- Gnosis-style `fork/fold` semantics at the task-runtime boundary, with SCC collapse so cyclic workspace subgraphs do not deadlock
- Aeon-logic structural and fork/race/fold verification on every emitted GG batch before execution
- Corridor caching keyed by project fingerprints and dependency folds
- Eager warm mode that watches workspace edits, folds change bursts, and replays selected graphs to keep hot caches resident
- Per-item sticky-pass caching for parseable ESLint targets, plus preserved sticky isolated-test execution for parseable `aeon-test` targets
- A local-first control plane that emits OTEL-shaped task spans, mirrors them into a JSONL audit trail, and keeps a canonical QDoc workspace mesh for runs, tasks, deployments, certifications, formal receipts, logs, metrics, VFS snapshots, and leases
- Contract-aware Cloudflare Worker command execution that can materialize a temporary Wrangler config from admitted Aeon/Forge runtime env so project-graph deploy targets keep their `dependsOn` semantics without app-local wrappers, including direct `wrangler` targets and `bun run`/`pnpm run`/`npm run` deploy-script indirection
- Fail-closed formal verification for Lean and TLA, with lightweight Gnosis-style checks for regular/pre-commit use and authoritative `lake build` verification on strict/deploy paths
- Managed Lean cache stewardship for `.lake`, including dependency-build pruning, Mathlib cache hydration, and exact-match formal receipts under `.a0/cache/formal/receipts.json`
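The SCC collapse mentioned above is the classic trick for scheduling graphs that may contain cycles: condense each strongly connected component into a single schedulable unit first, and the condensed graph is guaranteed acyclic. A minimal sketch using Tarjan's algorithm (names and types here are illustrative, not A0's internals):

```typescript
// Sketch: condense strongly connected components so cyclic workspace
// subgraphs become single schedulable units. Illustrative, not A0's API.
type Graph = Map<string, string[]>;

function tarjanScc(graph: Graph): string[][] {
  const index = new Map<string, number>();
  const lowlink = new Map<string, number>();
  const onStack = new Set<string>();
  const stack: string[] = [];
  const sccs: string[][] = [];
  let counter = 0;

  function strongConnect(v: string): void {
    index.set(v, counter);
    lowlink.set(v, counter);
    counter++;
    stack.push(v);
    onStack.add(v);
    for (const w of graph.get(v) ?? []) {
      if (!index.has(w)) {
        strongConnect(w);
        lowlink.set(v, Math.min(lowlink.get(v)!, lowlink.get(w)!));
      } else if (onStack.has(w)) {
        // Back edge into the current stack: part of the same component.
        lowlink.set(v, Math.min(lowlink.get(v)!, index.get(w)!));
      }
    }
    if (lowlink.get(v) === index.get(v)) {
      // v is the root of a component: pop everything down to it.
      const scc: string[] = [];
      let w: string;
      do {
        w = stack.pop()!;
        onStack.delete(w);
        scc.push(w);
      } while (w !== v);
      sccs.push(scc);
    }
  }

  for (const v of graph.keys()) if (!index.has(v)) strongConnect(v);
  return sccs;
}

// a <-> b form a cycle; c depends on the collapsed pair.
const components = tarjanScc(
  new Map([
    ["a", ["b"]],
    ["b", ["a"]],
    ["c", ["a"]],
  ]),
);
```

Once the `a`/`b` cycle is collapsed into one unit, `c` can be scheduled after it without any risk of deadlock.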
The public `a0`/`nx` launcher now enters through gnode instead of Bun: a small wrapper hands CLI argv to a strict orchestration-shaped entrypoint, which then hands off to the full `cli.ts` implementation.
The compatibility target is the subset this workspace actually uses: `run`, `run-many`, `affected`, `warm`, `quality`, `show`, `top`, `logs`, `control`, `reset`, `clean`, `formal`, `cache`, `forge`, `install`, `docker`, plus shorthand target commands such as `nx build <project>` and `nx dev <project>`.
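One way to picture the shorthand handling: argv whose first token is not a known command is treated as a target and rewritten into `run <project>:<target>` form. This is a guess at the shape, not the real dispatcher in cli.ts:

```typescript
// Sketch of shorthand dispatch: `nx build api` -> `run api:build`.
// Command list and rewrite rule are illustrative, not A0's actual parser.
const KNOWN = new Set([
  "run", "run-many", "affected", "warm", "quality", "show", "top", "logs",
  "control", "reset", "clean", "formal", "cache", "forge", "install", "docker",
]);

function normalizeArgv(argv: string[]): string[] {
  const [head, ...rest] = argv;
  if (head === undefined || KNOWN.has(head)) return argv; // already canonical
  const [project, ...flags] = rest;
  if (project === undefined) return argv; // nothing to rewrite against
  // Shorthand: first token is the target, second the project.
  return ["run", `${project}:${head}`, ...flags];
}
```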
`a0 quality` is the first-class local/CI quality gate. It renders one combined lint/test/typecheck graph, compiles the ready frontiers through Betty, and executes them through Gnosis without leaving the A0 scheduler path.
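Frontier-by-frontier execution amounts to Kahn-style layering: every task with zero unmet dependencies forms one batch, and completing that batch unlocks the next. A minimal sketch under that assumption (illustrative, not A0's scheduler):

```typescript
// Sketch: split a dependency graph into successive ready frontiers.
// `deps` maps task -> tasks it depends on. Illustrative, not A0's scheduler.
function readyFrontiers(deps: Map<string, string[]>): string[][] {
  const remaining = new Map<string, Set<string>>();
  for (const [task, ds] of deps) remaining.set(task, new Set(ds));
  const frontiers: string[][] = [];
  while (remaining.size > 0) {
    // Everything with no unmet dependencies is ready right now.
    const frontier = [...remaining.keys()].filter(
      (t) => remaining.get(t)!.size === 0,
    );
    if (frontier.length === 0) throw new Error("cycle: no ready tasks");
    frontiers.push(frontier.sort());
    // Completing the batch unlocks its dependents.
    for (const t of frontier) remaining.delete(t);
    for (const ds of remaining.values()) for (const t of frontier) ds.delete(t);
  }
  return frontiers;
}

const frontiers = readyFrontiers(
  new Map([
    ["build", []],
    ["typecheck", []],
    ["lint", ["build"]],
    ["test", ["build"]],
  ]),
);
```

Each frontier here would become one GG batch; speculative planning would begin compiling frontier *n+1* while frontier *n* is still running.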
`a0 install` replaces pnpm/npm/yarn/bun for package installation by compiling the workspace dependency graph into a GG topology and verifying it through Betty's 13-phase compiler and aeon-logic's model checker before executing. If the topology fails verification, installation is refused, strictly and without compromise: no `--force`, no `--legacy-peer-deps`, no `--shamefully-hoist`. `package.json` is the lingua franca; there is no new lockfile format. The resolver uses Node 22's built-in `fetch()` against the npm registry, caches metadata in `.a0/cache/registry/`, detects circular dependencies via Tarjan-style DFS, and enforces buleyean positivity (every package must have at least one resolvable version). Workspace packages are symlinked directly via `workspace:*` protocol resolution. Registry packages are resolved, integrity-verified, and installed through the verified topology, with parallelism controlled by a WallingtonPermitPool.
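Bounding install parallelism with a permit pool can be sketched as a plain async semaphore: acquire a permit, run the job, hand the permit to the next waiter. The name WallingtonPermitPool is A0's; this implementation is an assumption about the shape, not the real one:

```typescript
// Sketch of a permit pool bounding parallel work. Illustrative shape,
// not A0's actual WallingtonPermitPool.
class PermitPool {
  private waiters: Array<() => void> = [];
  private free: number;

  constructor(permits: number) {
    this.free = permits;
  }

  async withPermit<T>(job: () => Promise<T>): Promise<T> {
    if (this.free === 0) {
      // No permit free: queue until a finishing job hands one over.
      await new Promise<void>((resolve) => this.waiters.push(resolve));
    } else {
      this.free--;
    }
    try {
      return await job();
    } finally {
      const next = this.waiters.shift();
      if (next) next(); // pass the permit straight to the next waiter
      else this.free++;
    }
  }
}
```

Handing the permit directly to the next waiter (rather than incrementing and re-checking) keeps the pool fair and avoids thundering-herd wakeups.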
`a0 docker` manages containers through topology-verified Dockerfile compilation. It auto-detects Dockerfiles across all workspace projects, parses each into a GG topology via `@a0n/gnosis/dockerfile-topology` (`FROM` maps to fork nodes, `RUN` to PROCESS edges, `COPY --from` to RACE edges, `EXPOSE` to VENT nodes), compiles through Betty, and refuses to build if any diagnostic error fires. The status surface shows per-project Dockerfile verification state, multi-stage info, exposed ports, and aeon-forge deployability (the presence of `aeon.toml` or `wrangler.toml`).
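The instruction-to-topology mapping above can be sketched as a single pass over Dockerfile lines. The node/edge types below are illustrative stand-ins, not the actual `@a0n/gnosis/dockerfile-topology` output:

```typescript
// Sketch of the Dockerfile -> GG topology mapping: FROM -> fork node,
// RUN -> PROCESS edge, COPY --from -> RACE edge, EXPOSE -> VENT node.
// Types and parsing are illustrative, not @a0n/gnosis/dockerfile-topology.
type GgNode = { kind: "fork" | "vent"; label: string };
type GgEdge = { kind: "PROCESS" | "RACE"; label: string };
type GgTopology = { nodes: GgNode[]; edges: GgEdge[] };

function dockerfileToTopology(dockerfile: string): GgTopology {
  const nodes: GgNode[] = [];
  const edges: GgEdge[] = [];
  for (const raw of dockerfile.split("\n")) {
    const line = raw.trim();
    if (line.startsWith("FROM ")) {
      nodes.push({ kind: "fork", label: line.slice(5) }); // new build stage
    } else if (line.startsWith("COPY --from=")) {
      edges.push({ kind: "RACE", label: line.slice(5) }); // cross-stage copy
    } else if (line.startsWith("RUN ")) {
      edges.push({ kind: "PROCESS", label: line.slice(4) });
    } else if (line.startsWith("EXPOSE ")) {
      nodes.push({ kind: "vent", label: line.slice(7) }); // published port
    }
  }
  return { nodes, edges };
}

const topo = dockerfileToTopology(
  [
    "FROM node:22 AS build",
    "RUN bun install",
    "FROM node:22-slim",
    "COPY --from=build /app /app",
    "EXPOSE 8080",
  ].join("\n"),
);
```

A multi-stage build thus naturally yields two fork nodes joined by a RACE edge, which is exactly the shape the verifier can check for diagnostics before any image is built.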
Install and Docker entrypoints:
- `a0 install [--frozen] [--dry-run] [--verbose] [--parallel=8] [--json]`
- `a0 docker status [--json]` auto-discovers and verifies all Dockerfiles in the workspace
- `a0 docker build [project] [--tag=<tag>] [--target=<stage>] [--push] [--no-cache] [--dry-run]`
- `a0 docker run --tag=<tag> [--port=<host:container>] [--detach] [--rm]`
- `a0 docker push --tag=<tag>`
- `a0 docker topology [project] [--json]` shows the GG topology for each Dockerfile
- `a0 clean artifacts [--scope=root|workspace] [--dry-run] [--json]` prunes only known generated artifact directories and refuses to touch any candidate that still contains tracked files
Formal and deploy-facing entrypoints now live directly on a0:
- `a0 formal verify [project|path] [--all] [--strict] [--json]`
- `a0 cache lean status [project|path] [--all] [--json]`
- `a0 cache lean prune [project|path] [--all] [--dry-run] [--json]`
- `a0 cache lean hydrate [project|path] [--all] [--json]`
- `a0 forge check [project] [--deep] [--json]`
- `a0 forge deploy [project] [--remote] [--env=production] [--json]`
- `a0 control serve [--interval-ms=2000] [--once] [--json] [--relay-url=<url>] [--relay-room-name=<room>]` keeps the workspace control QDoc hot locally, syncs it over DashRelay when available, continuously ingests Forge router log, aggregate metric, and control-event streams into the same mesh, records real relay transport lifecycle into `workspace.relay`, exports deduped `a0.sync.workspace_snapshot` summaries into the shared Forge event bus when composed with one, serializes bridge ticks so they do not overlap, and reclaims expired worker leases back into runnable mesh tasks while surfacing relay stability, reconnect/drop/error counters, and flapping alerts
- `a0 control worker [--parallel=2] [--accept=remote,auto] [--actual-placement=remote] [--once] [--json] [--relay-url=<url>] [--relay-room-name=<room>]` claims runnable tasks from the workspace mesh, heartbeats leases, executes them, writes task/log state back into the same control doc, and emits worker-session sync telemetry with per-worker relay status, reconnect/drop/error counts, interruption/recovery state, and stale-worker detection once the heartbeat goes quiet
- `a0 logs <project|workspace> [--lines=100] [--level=ERROR] [--grep=text] [--watch] [--interval-ms=1000] [--json]` reads normalized process logs from the control doc, the local Forge router, or the remote Forge log adapter; `workspace` exposes the system bucket with worker lifecycle, relay lifecycle, router-event, and mesh-sync events, and `--watch` tails the canonical control doc for live operator use
- `a0 show control [project] [--json]` now includes repo-wide project status, local verification/build state, deploy/certification/formal summaries, Lean cache metadata, `workspace.mesh` pressure metrics, `workspace.sync` worker-state metrics, `workspace.relay` transport metrics, `workspace.router` live-router metrics, active worker heartbeat summaries, `relay-live`, `relay-peers`, `router-live`, and alert lines, including relay stability plus reconnect/drop/error counters, `mesh-sync` recovered/interrupted/stale worker counts, and the canonical control-doc snapshot path
- `a0 top [project] [--watch] [--interval-ms=2000] [--json]` renders the same control snapshot either once for machines or as a watchable operator view for humans, including live mesh task/run/lease pressure, worker session state, relay transport health, live router health, and hot-fault summaries, with watch mode driven by the same live router log/metric/event bridge as `control serve`, and the roster showing each worker's current relay status, reconnect/drop/error counts, and a stale-worker marker when the heartbeat has gone quiet
The mesh-oriented surfaces also accept `--control-doc-path=<path>` plus relay flags (`--relay-url`, `--relay-room-name`, `--relay-api-key`, `--relay-ucan`, `--relay-client-id`, `--relay-transport`), so a runner, bridge, log reader, and worker can all be pointed at the same canonical room/doc explicitly instead of relying only on ambient environment.
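The OTEL-shaped task spans and the JSONL audit trail mentioned earlier pair naturally: each finished span is serialized as one JSON object per line, so the trail can be tailed, grepped, and replayed. A sketch of that shape, where the field names are assumptions rather than A0's actual schema:

```typescript
// Sketch: serialize an OTEL-shaped task span as one JSONL audit line.
// Field names are illustrative assumptions, not A0's actual schema.
interface TaskSpan {
  traceId: string;
  spanId: string;
  name: string; // e.g. "api:build"
  startUnixNano: number;
  endUnixNano: number;
  attributes: Record<string, string>;
}

function toJsonlLine(span: TaskSpan): string {
  // One compact JSON object per line; the newline terminates the record,
  // so the payload itself must never contain a raw newline.
  return JSON.stringify(span) + "\n";
}

const line = toJsonlLine({
  traceId: "t1",
  spanId: "s1",
  name: "api:build",
  startUnixNano: 1_000,
  endUnixNano: 2_000,
  attributes: { "a0.project": "api" },
});
```

Because every record is self-delimiting, a reader like `a0 logs --watch` only ever needs to split on newlines and parse each line independently.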
Generators, project scaffolding, and plugin-specific authoring surfaces are intentionally out of scope.