# MCP Chat
MCP Chat is a command-line interface application for chatting with LLMs. It supports document retrieval, command-based prompts, and extensible tool integrations via the MCP (Model Context Protocol) architecture.
## Prerequisites
- Python 3.9+
- An API key and provider for any Chat Completions-compatible LLM (e.g., Gemini)
## Setup
### Step 1: Configure the environment variables
Create or edit the `.env` file in the project root and verify that the following variables are set correctly:
LLM_API_KEY="" # Enter your GEMINI API secret key
LLM_CHAT_COMPLETION_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
LLM_MODEL="gemini-2.0-flash"
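
Since `LLM_CHAT_COMPLETION_URL` points at Gemini's OpenAI-compatible endpoint, any OpenAI-style client can consume these variables. A minimal sketch, assuming the `python-dotenv` and `openai` packages (neither is confirmed by this README):

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # read the LLM_* variables from .env

# The OpenAI SDK works against any Chat Completions-compatible endpoint.
client = OpenAI(
    api_key=os.environ["LLM_API_KEY"],
    base_url=os.environ["LLM_CHAT_COMPLETION_URL"],
)

response = client.chat.completions.create(
    model=os.environ["LLM_MODEL"],
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```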
### Step 2: Install dependencies
`uv` is a fast Python package installer and resolver.
- Install `uv`, if not already installed:

  ```bash
  pip install uv
  ```

- Create and activate a virtual environment:

  ```bash
  uv venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  uv sync
  ```

- Start the MCP server (a sketch of what `mcp_server:mcp_app` might refer to follows this list):

  ```bash
  uv run uvicorn mcp_server:mcp_app --reload
  ```

- Run the CLI chat client (ChatAgent):

  ```bash
  uv run main.py
  ```

- Optionally, start the MCP Inspector:

  ```bash
  npx @modelcontextprotocol/inspector
  ```
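
The `mcp_server:mcp_app` target above implies that `mcp_server.py` exposes an ASGI application named `mcp_app`. A minimal sketch of what that might look like, assuming the official `mcp` Python SDK's `FastMCP` helper and its SSE transport (the actual server in this repo may differ):

```python
# mcp_server.py -- hypothetical minimal version, not the repo's actual code
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-chat")

# In-memory document store; see "Adding New Documents" below.
docs = {"deposition.md": "Testimony of Angela Smith, P.E."}

# ASGI app for uvicorn; sse_app() is one of FastMCP's transport options.
mcp_app = mcp.sse_app()
```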
## Usage
### Basic Interaction
Simply type your message and press Enter to chat with the model.
### Document Retrieval
Use the `@` symbol followed by a document ID to include document content in your query:
```
> Tell me about @deposition.md
```
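
Under the hood, an `@` mention typically resolves to an MCP resource read. A sketch of the client side, assuming the official `mcp` SDK's SSE client, the default uvicorn port, and a `docs://` URI scheme (all illustrative, not confirmed by this repo):

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def fetch_doc() -> None:
    # Connect to the MCP server started in the Setup section.
    async with sse_client("http://127.0.0.1:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Hypothetical URI scheme; the real one is defined in mcp_server.py.
            result = await session.read_resource("docs://deposition.md")
            print(result.contents[0].text)  # assumes a text resource

asyncio.run(fetch_doc())
```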
### Commands
Use the `/` prefix to execute commands defined in the MCP server:
```
> /summarize deposition.md
```
Commands will auto-complete when you press Tab.
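
Slash commands typically correspond to MCP prompts registered on the server, and the prompt list is what drives tab completion. A sketch of how `/summarize` might resolve, assuming a prompt named `summarize` with a `doc_id` argument (names are illustrative):

```python
import asyncio

from mcp import ClientSession
from mcp.client.sse import sse_client

async def run_command() -> None:
    async with sse_client("http://127.0.0.1:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Listing prompts is how a client can offer tab completion.
            prompts = await session.list_prompts()
            print([p.name for p in prompts.prompts])
            # Hypothetical prompt name/argument; the real ones live in mcp_server.py.
            result = await session.get_prompt("summarize", {"doc_id": "deposition.md"})
            for message in result.messages:
                print(message.role, message.content)

asyncio.run(run_command())
```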
## Development
### Adding New Documents
Edit the `mcp_server.py` file to add new documents to the `docs` dictionary.
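
For example, a new entry is just another key/value pair (the keys and contents below are illustrative, not copied from the repo):

```python
# mcp_server.py -- illustrative entries only
docs = {
    "deposition.md": "Testimony of Angela Smith, P.E.",
    # Add new documents as "id" -> "content" pairs:
    "contract.md": "Master services agreement between the parties.",
}
```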
### Implementing MCP Features
To fully implement the MCP features:

- Complete the TODOs in `mcp_server.py`
- Implement the missing functionality in `mcp_client.py`
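
What those TODOs require is specific to this repo, but server-side MCP features generally follow `FastMCP`'s decorator pattern. A hedged sketch of the kind of resource and prompt the client features above would need (URI scheme, names, and docstrings are assumptions):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("mcp-chat")
docs = {"deposition.md": "Testimony of Angela Smith, P.E."}

@mcp.resource("docs://{doc_id}")
def read_doc(doc_id: str) -> str:
    """Return a document's raw content, backing @-mentions."""
    return docs[doc_id]

@mcp.prompt()
def summarize(doc_id: str) -> str:
    """Prompt template behind the /summarize command."""
    return f"Summarize the following document:\n\n{docs[doc_id]}"
```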
### Linting and Type Checking
No lint or type checks are currently implemented.