Query multiple LLMs in parallel from your AI coding tools. Ask a question to OpenAI, Gemini, Groq, any OpenAI-compatible API, or CLI agents such as Claude Code, Codex, and Gemini CLI, then compare the answers, run consensus votes, structured debates, and judge evaluations. Rubber duck debugging, but the ducks talk back.
## Overview
An MCP (Model Context Protocol) server that bridges your editor to multiple LLM providers, so you can get different perspectives on a problem without leaving your workflow.
## Features
- Multi-provider querying — Ask OpenAI, Gemini, Groq, Together AI, Ollama, or any OpenAI-compatible API in parallel
- CLI agent integration — Spawn Claude Code, Codex, Gemini CLI, Grok, and Aider as "ducks" with full codebase access
- Duck Council — Query all providers simultaneously and compare responses side by side
- Consensus voting — Have ducks vote on the best approach, with confidence scores
- Structured debates — Ducks argue for and against, then a judge picks the winner
- Conversations — Maintain chat context across multiple messages
- MCP Bridge — Ducks can use external MCP tools (web search, file access, etc.)
- Health monitoring — Automatic failover when a provider goes down
- Security controls — Rate limiting, token constraints, pattern blocking, PII redaction
- Usage analytics — Track requests, tokens, and estimated costs per provider
- Interactive UIs — Rich HTML panels for comparison, voting, and debate visualization
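To make the consensus-voting feature concrete, the flow can be sketched as a confidence-weighted tally: each duck returns a choice plus a confidence score, and the choice with the highest combined confidence wins. This is an illustrative sketch only, not the server's actual algorithm; the `Vote` shape and `consensus` helper are hypothetical.

```typescript
// Hypothetical shape of one duck's vote: which option it picked, and how sure it is.
type Vote = { duck: string; choice: string; confidence: number };

// Confidence-weighted majority: sum each choice's confidence, return the top choice.
function consensus(votes: Vote[]): { winner: string; score: number } {
  const totals = new Map<string, number>();
  for (const v of votes) {
    totals.set(v.choice, (totals.get(v.choice) ?? 0) + v.confidence);
  }
  let winner = "";
  let score = -Infinity;
  for (const [choice, total] of totals) {
    if (total > score) {
      winner = choice;
      score = total;
    }
  }
  return { winner, score };
}

const result = consensus([
  { duck: "openai", choice: "refactor", confidence: 0.9 },
  { duck: "gemini", choice: "rewrite", confidence: 0.6 },
  { duck: "groq", choice: "refactor", confidence: 0.7 },
]);
// "refactor" wins with 0.9 + 0.7 combined confidence, outscoring "rewrite" at 0.6
```

Weighting by confidence rather than counting raw votes means a single very confident duck can outrank two hesitant ones, which is usually the behavior you want when models self-report uncertainty.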
## Installation
```bash
npx mcp-rubber-duck
```
Also available via Docker and the official MCP Registry.
## Server Config
```json
{
  "mcpServers": {
    "rubber-duck": {
      "command": "mcp-rubber-duck",
      "env": {
        "MCP_SERVER": "true",
        "OPENAI_API_KEY": "<YOUR_OPENAI_KEY>",
        "GEMINI_API_KEY": "<YOUR_GEMINI_KEY>",
        "DEFAULT_PROVIDER": "openai"
      }
    }
  }
}
```
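With Claude Code, an equivalent registration can be done from the command line instead of editing JSON by hand. This is a sketch assuming the `claude mcp add` subcommand of the Claude Code CLI, where `-e` sets environment variables for the spawned server; the exact flag syntax may differ by CLI version.

```shell
# Register the server with Claude Code; everything after -- is the launch command.
claude mcp add rubber-duck \
  -e MCP_SERVER=true \
  -e OPENAI_API_KEY=<YOUR_OPENAI_KEY> \
  -e DEFAULT_PROVIDER=openai \
  -- npx mcp-rubber-duck
```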