The problem with AI coding assistants
Every time you start a new session, you re-explain your codebase. Every time you switch between Claude Code and Codex, you lose context. Every time an API spec changes, nobody catches it until production breaks.
Delimit fixes all three.
One workspace across every model
Your tasks, memory, and governance policies persist across Claude Code, Codex, Cursor, and Gemini CLI. Start a task in Claude, continue it in Codex, finish it in Gemini. The ledger tracks everything.
27-type breaking change detection
Not AI inference — deterministic rules. Endpoint removed, type changed, required field added, security scheme dropped. Same input always produces the same result. Catches what code review misses.
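The idea of deterministic rules can be sketched as a pure diff over two spec snapshots. The rule names and spec shape below are illustrative only, not Delimit's actual engine or its 27 rule types:

```python
def detect_breaking_changes(old_spec, new_spec):
    """Deterministic diff: the same pair of specs always yields the same findings."""
    findings = []
    # Rule: endpoint removed from the new spec
    for path in old_spec["paths"]:
        if path not in new_spec["paths"]:
            findings.append(("endpoint-removed", path))
    # Rule: required field added to an existing endpoint's schema
    for path, schema in new_spec["paths"].items():
        if path in old_spec["paths"]:
            old_req = set(old_spec["paths"][path].get("required", []))
            new_req = set(schema.get("required", []))
            for field in sorted(new_req - old_req):
                findings.append(("required-field-added", f"{path}.{field}"))
    return findings

old = {"paths": {"/users": {"required": ["id"]}, "/orders": {"required": []}}}
new = {"paths": {"/users": {"required": ["id", "email"]}}}
print(detect_breaking_changes(old, new))
# → [('endpoint-removed', '/orders'), ('required-field-added', '/users.email')]
```

Because the checks are plain set and dictionary operations with no model in the loop, re-running them on the same inputs can never produce a different verdict.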
Governance that ships with you
Policy presets (strict/default/relaxed) or custom YAML rules. Pre-commit hooks block breaking changes locally. GitHub Action blocks them on PRs. The same engine runs everywhere.
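A custom rule file might look like the following. The keys and rule IDs are hypothetical, shown only to illustrate the preset-plus-overrides shape:

```yaml
# Hypothetical policy file; Delimit's actual schema may differ.
preset: default          # start from a built-in preset
rules:
  endpoint-removed: error        # block locally and on PRs
  required-field-added: error
  type-changed: warn             # report, but do not block
```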
180+ MCP tools across 5 domains
- Govern — lint, diff, policy, semver, security audit
- Context — memory, ledger, sessions, cross-model handoffs
- Ship — deploy, publish, rollback, changelog, evidence
- Observe — metrics, logs, alerts, drift detection
- Orchestrate — multi-model deliberation, agent dispatch, swarm triggers
Multi-model deliberation
Ask a hard question, get consensus from multiple AI models debating each other. Free tier uses Gemini Flash + GPT-4o-mini. BYOK for any model.
Zero-config start
```shell
npx delimit-cli setup
```
Detects your framework, finds your specs, installs governance, configures your AI assistants. One command.
Server Config
```json
{
  "mcpServers": {
    "delimit": {
      "type": "stdio",
      "command": "python3",
      "args": [
        "~/.delimit/server/ai/server.py"
      ],
      "cwd": "~/.delimit/server",
      "env": {
        "PYTHONPATH": "~/.delimit/server"
      }
    }
  }
}
```