Tentra gives your AI coding agent memory. The code-graph indexer walks your repository locally with Tree-sitter, extracts symbols and call edges, and stores them in a persistent, queryable graph. On the tentra monorepo we measured a 99.4% token reduction (a 156.8× ratio) across 8 "where is X implemented?" queries: 114,644 tokens via file re-reads vs. 731 tokens via graph queries. Tentra uses the "agent-as-LLM" pattern: the Claude/Cursor/Codex agent you already have performs the semantic extraction, so there is zero LLM cost on Tentra's side and no API-key setup for end users. It also includes an architecture workspace with 9 MCP tools for diagrams and code export to 14 frameworks.
Server Config
{
  "mcpServers": {
    "tentra": {
      "type": "sse",
      "url": "https://trytentra.com/api/mcp?key=YOUR_API_KEY"
    }
  }
}