
Project Introduction

@xiaoxiaoningdesui

A lightweight input-output tracker for large language models: launch with a single command, integrate effortlessly.
Overview

What is LLM-Tracker?

LLM-Tracker is a lightweight input-output tracker for large language models, designed to record interactions between applications and models during development.

How to use LLM-Tracker?

To use LLM-Tracker, compile the project with the Go toolchain, then launch the resulting binary with ./llm-tracker --configFile=config.toml (llm-tracker.exe on Windows). Make sure the referenced configuration file exists before starting.
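The build-and-launch steps above can be sketched as the following commands (assuming a working Go toolchain and a config.toml in the repository root; the --configFile flag is taken from the project's own usage):

```shell
# Build the tracker binary from the repository root
go build -o llm-tracker .

# Launch it, pointing at the TOML configuration file
./llm-tracker --configFile=config.toml
```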

Key features of LLM-Tracker?

  • Hides the tools list in input to reduce clutter during interactions.
  • Supports both streaming and non-streaming responses.
  • Compatible with various modes including mcp/function_call/chat.
  • Escapes special characters in input and output for better readability.

Use cases of LLM-Tracker?

  1. Tracking input-output interactions for large language models in real time.
  2. Integrating with applications that use models served through ollama or deepseek.
  3. Debugging and optimizing model interactions by analyzing recorded data.

FAQ from LLM-Tracker?

  • What models does LLM-Tracker support?

LLM-Tracker currently supports models served via ollama, deepseek, and, in principle, any model compatible with the OpenAI SDK.

  • How do I integrate LLM-Tracker with my application?

Point your workflow or client code at 127.0.0.1:1234 instead of the model's own endpoint, so that traffic is routed through LLM-Tracker and recorded.
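As a minimal sketch of this integration in Go, the snippet below builds an OpenAI-style chat completion request aimed at the local tracker address. The /v1/chat/completions path and the deepseek-chat model name are assumptions for illustration (they are standard for OpenAI-compatible servers, but the source does not confirm the exact route LLM-Tracker exposes); only the 127.0.0.1:1234 address comes from the document:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// buildChatRequest constructs an OpenAI-compatible chat completion
// request addressed to the local LLM-Tracker proxy rather than the
// upstream model endpoint. The tracker records the exchange and
// forwards it onward.
func buildChatRequest(prompt string) (*http.Request, error) {
	body, err := json.Marshal(map[string]any{
		"model": "deepseek-chat", // hypothetical model name
		"messages": []map[string]string{
			{"role": "user", "content": prompt},
		},
	})
	if err != nil {
		return nil, err
	}
	// 127.0.0.1:1234 is the tracker address given in the document;
	// the /v1/chat/completions path is an assumed OpenAI-style route.
	req, err := http.NewRequest(http.MethodPost,
		"http://127.0.0.1:1234/v1/chat/completions",
		bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, err := buildChatRequest("hello")
	if err != nil {
		panic(err)
	}
	// Print the target URL to confirm traffic goes through the tracker.
	fmt.Println(req.URL.String())
}
```

In an existing client, the equivalent change is usually a one-line edit: set the SDK's base URL to http://127.0.0.1:1234 and leave the rest of the code untouched.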

  • Is LLM-Tracker free to use?

Yes! LLM-Tracker is open-source and available under the MIT license.

© 2025 MCP.so. All rights reserved.
