#runtime
15 results found
Lemonade
Local LLM Server with NPU Acceleration
Podman MCP Server
Model Context Protocol (MCP) server for container runtimes (Podman and Docker)
Welcome to your Lovable project
Build MCP-based AI agents
node-runtime-mcp
An MCP server that runs code in a Node.js container and returns the result
Golf
Production-Ready MCP Server Framework • Build, deploy & scale secure AI agent infrastructure • Includes Auth, Observability, Debugger, Telemetry & Runtime • Run real-world MCPs powering AI Agents
Lightrun MCP Server
Lightrun MCP connects AI coding assistants to live runtime context from production and staging applications for safe code-level debugging without redeploying. It lets MCP-compatible clients discover runtime sources, inspect live expression values, capture call stacks, measure execution duration, count executions, and collect numeric runtime metrics directly from the AI workflow.
Cycles MCP Server
AI agents call LLMs, invoke tools, and hit APIs — but have no built-in way to cap spend. A single agent loop can burn hundreds of dollars before anyone notices. Cycles MCP Server gives any MCP-compatible agent a runtime budget authority: tools to check, reserve, spend, and release budget before and after every costly operation. Works with Claude Desktop, Claude Code, Cursor, Windsurf, and any MCP host. Supports per-tenant budgets, soft-landing caps, and automatic heartbeat for long-running operations.
Novyx Mcp
Persistent memory, knowledge graph, governed actions, and runtime orchestration for AI agents. 107 tools. Works locally with zero-config SQLite (no API key needed) or connects to Novyx Cloud for the full surface. Install: uvx novyx-mcp