One Rust binary. Any model. From solo dev to agent fleet.
$ curl -sSf https://dxos.dev/install | sh
macOS and Linux · x86_64 and ARM
Not a wrapper. An engine.
Single Rust binary. No Node.js, no Python, no Electron. Download and run.
Ollama, Claude, GPT, Gemini, OpenRouter. Use what you have. Switch anytime.
Pull a local model with Ollama and code anywhere. No internet required.
Read, Write, Glob, Grep, Bash, Git, Think, Done. ~400 tokens of tool-description overhead vs 10,000+.
Spawn agent fleets for parallel tasks. Built-in orchestration and coordination.
SQLite-backed brain. Your agent remembers context across sessions.
Beautiful terminal UI built with Ratatui. Panels, streaming, syntax highlighting.
Apache-2.0. Read every line. Change anything. Self-host everything.
The numbers that matter
| | Claude Code | Cursor | OpenCode | DXOS |
|---|---|---|---|---|
| Open source | No | No | Yes | Yes |
| Providers | Anthropic only | OpenAI default | Any | Any |
| Works offline | No | No | Yes | Yes |
| Multi-agent | No | No | No | Fleet mode |
| Persistent memory | Plugin | No | No | Built-in |
| Dependencies | Node.js | Electron | Go binary | Zero |
| Tool token overhead | ~10,000 | Unknown | ~1,200 | ~400 |
| Cost | $200/mo | $20/mo | Free | Free |
Every AI coding tool spends prompt tokens just describing its tools to the model, on every single request. More tools and longer descriptions mean a bigger fixed cost per call.
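The per-request cost intuition can be made concrete with a toy estimate. The ~4-characters-per-token heuristic and the sample descriptions below are illustrative assumptions, not real tool schemas or DXOS internals:

```python
def estimate_tokens(text: str) -> int:
    # Crude rule of thumb: ~4 characters per token (real tokenizers vary).
    return max(1, len(text) // 4)

def prompt_overhead(tools: dict[str, str]) -> int:
    # Every request pays for every tool name + description in the prompt.
    return sum(estimate_tokens(name + ": " + desc) for name, desc in tools.items())

# Hypothetical terse vs verbose descriptions for the same two tools.
TERSE = {
    "Read": "Read a file.",
    "Bash": "Run a shell command.",
}
VERBOSE = {
    "Read": "Reads a file from disk given an absolute path, returning its "
            "contents with line numbers, truncating very long lines and "
            "paginating output beyond two thousand lines of text.",
    "Bash": "Executes an arbitrary shell command in a persistent session, "
            "capturing stdout and stderr, with a configurable timeout and "
            "working directory, and returns the combined output to the model.",
}
```

With this heuristic, `prompt_overhead(TERSE)` is a small fraction of `prompt_overhead(VERBOSE)` — and that gap is paid again on every request, which is the whole argument for a small, terse tool set.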