Five-agent MUD world model: MH, PH, WM, DH, VH. WM (Mistral-7B) is local and trainable; MH/PH/DH/VH use GPT-5-mini (API). See docs/ARCHITECTURE.md for the full spec.
```sh
python -m venv .venv
source .venv/bin/activate  # or .venv\Scripts\activate on Windows
pip install -r requirements.txt
cp .env.example .env
# Edit .env: set OPENAI_API_KEY, MUD_HOST, MUD_PORT.
```

Configuration lives in two places: `.env` (`OPENAI_API_KEY`, `MUD_HOST`, `MUD_PORT`) and `config.yaml` (timeouts, paths, model names, training options).
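For orientation only, a `config.yaml` of roughly this shape would match the options listed above. Every key name and value here is an illustrative assumption, not the project's actual schema — consult the shipped `config.yaml`:

```yaml
# Hypothetical example -- key names are illustrative assumptions.
timeouts:
  mud_silence_seconds: 2.0          # how long the MUD must be quiet before a step ends
paths:
  traces: data/logs/traces.jsonl
  checkpoints: data/checkpoints/wm
models:
  wm: mistralai/Mistral-7B-v0.1     # local, trainable world model
  api: gpt-5-mini                   # MH/PH/DH/VH via API
training:
  mode: outcome_summary             # or next_line
```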
- Orchestrator (main loop): `python main.py [max_steps]` — defaults to 10 rounds then exits; pass `0` for unlimited. Logs to `data/logs/orchestrator.log` and stderr (MH/PH/WM/DH/VH per step).
- Interrupt: press Ctrl+C to stop gracefully; the client disconnects and the final step count is logged (no traceback).
- Manual override: while the orchestrator is running in a TTY, type a command and press Enter to send it as the next action instead of DH's choice. Type `pause` to stop the loop and send raw commands directly to the MUD; type `resume` to refresh state (kickoff) and continue the automated loop.
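The loop behavior described above (default 10 rounds, `0` for unlimited, graceful Ctrl+C, manual overrides taking priority over DH) can be sketched as follows. The helper names are hypothetical — the real `main.py` wires in the five agents and the telnet client:

```python
# Minimal sketch of the orchestrator loop; choose_action stands in for DH,
# send for the MUD client write. Names are illustrative assumptions.
import itertools

def run_loop(choose_action, send, max_steps=10, overrides=None):
    """Run up to max_steps rounds (0 = unlimited). Queued manual overrides
    are sent before asking the decision agent for an action."""
    overrides = overrides or []
    steps = 0
    rounds = itertools.count() if max_steps == 0 else range(max_steps)
    try:
        for _ in rounds:
            action = overrides.pop(0) if overrides else choose_action()
            send(action)
            steps += 1
    except KeyboardInterrupt:
        pass  # graceful stop: caller disconnects the client and logs the count
    return steps
```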
- Train WM on logs: `python train.py [--trace-glob ...] [--mode outcome_summary|next_line] [--output-dir ...]`
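The two `--mode` options imply two ways of turning a trace record into a training pair. A sketch of that conversion follows; the record field names (`state`, `action`, `outcome`, `next_line`) are assumptions about the `traces.jsonl` schema, not the project's documented format:

```python
# Hypothetical conversion of one traces.jsonl record into a prompt/target
# pair for WM fine-tuning. Field names are illustrative assumptions.
def make_example(record, mode="outcome_summary"):
    prompt = f"State: {record['state']}\nAction: {record['action']}\n"
    if mode == "outcome_summary":
        target = record["outcome"]      # WM learns to summarize what happened
    elif mode == "next_line":
        target = record["next_line"]    # WM learns to predict the next raw MUD line
    else:
        raise ValueError(f"unknown mode: {mode}")
    return {"prompt": prompt, "completion": target}
```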
- `python scripts/test_memory.py` — memory read/write (no deps)
- `python scripts/test_mud_client.py` — telnet connect (needs MUD_HOST)
- `python scripts/test_agents_api.py` — MH, PH, DH, VH (needs OPENAI_API_KEY)
- `python scripts/test_wm.py` — WM inference (needs GPU/model)
- `python scripts/test_train.py` — training pipeline (needs GPU + transformers/datasets/peft; otherwise skips)
- `src/mud/` — telnet client, buffer, silence detection
- `src/agents/` — MH, PH, WM, DH, VH
- `src/memory/` — commands.md, current_location.md, mobs.md
- `src/orchestrator.py` — main loop
- `prompts/` — prompt templates
- `data/` — memory files, `data/logs/traces.jsonl`, `data/checkpoints/wm`
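"Silence detection" in `src/mud/` refers to deciding when the MUD has finished responding: since telnet output has no end-of-message marker, a step's output is treated as complete once no new bytes arrive for a quiet interval. A sketch, assuming a hypothetical non-blocking `read_chunk()` rather than the project's actual client API:

```python
# Illustrative silence detection: accumulate output until the stream has
# been quiet for `quiet` seconds (or max_wait elapses). read_chunk() is a
# hypothetical non-blocking reader returning b"" when no data is pending.
import time

def read_until_silent(read_chunk, quiet=0.5, max_wait=10.0):
    buf = bytearray()
    deadline = time.monotonic() + max_wait
    last_data = time.monotonic()
    while time.monotonic() < deadline:
        chunk = read_chunk()
        if chunk:
            buf += chunk
            last_data = time.monotonic()
        elif time.monotonic() - last_data >= quiet:
            break  # stream has been quiet long enough: response is complete
        else:
            time.sleep(0.05)
    return bytes(buf)
```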
This project was developed and tested against tbaMUD, a MUD codebase descended from CircleMUD and DikuMUD. Compatibility with other MUD servers is not guaranteed.
This project has been vibecoded. No guarantees, warranties, or fitness for any particular purpose are provided. Use at your own risk.