# 🤖 AI Customer Support Bot - MCP Server

> A modern, extensible MCP server framework for building AI-powered customer support systems

Features • Quick Start • API Reference • Architecture • Contributing

## 🌟 Overview

A Model Context Protocol (MCP) compliant server framework built with modern Python. Designed for developers who want to create intelligent customer support systems without vendor lock-in. Clean architecture, battle-tested patterns, and ready for any AI provider.
```mermaid
graph TB
    Client[HTTP Client] --> API[API Server]
    API --> MW[Middleware Layer]
    MW --> SVC[Service Layer]
    SVC --> CTX[Context Manager]
    SVC --> AI[AI Integration]
    SVC --> DAL[Data Access Layer]
    DAL --> DB[(PostgreSQL)]
```
## ✨ Features

| | |
|---|---|
| 🏗️ Clean Architecture | 📡 MCP Compliant |
| 🔒 Production Ready | 🚀 High Performance |
| 🔌 AI Agnostic | 📊 Health Monitoring |
| 🛡️ Secure by Default | 📦 Batch Processing |
## 🚀 Quick Start

### Prerequisites

- Python 3.8+
- PostgreSQL
- Your favorite AI service (OpenAI, Anthropic, etc.)
### Installation

```bash
# Clone and setup
git clone https://github.com/ChiragPatankar/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Setup environment
cp .env.example .env
# Edit .env with your configuration
```
### Configuration

```env
# .env file
DATABASE_URL=postgresql://user:password@localhost/customer_support_bot
SECRET_KEY=your-super-secret-key
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_PERIOD=60
```
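Inside the application these variables are typically read from the process environment. A minimal sketch of what that loading could look like (the actual `app.py` may read them differently; the fallback defaults here are assumptions):

```python
import os

# Illustrative settings loader; values fall back to defaults when unset
DATABASE_URL = os.getenv("DATABASE_URL", "postgresql://localhost/customer_support_bot")
SECRET_KEY = os.getenv("SECRET_KEY", "")
RATE_LIMIT_REQUESTS = int(os.getenv("RATE_LIMIT_REQUESTS", "100"))
RATE_LIMIT_PERIOD = int(os.getenv("RATE_LIMIT_PERIOD", "60"))  # seconds
```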
### Run

```bash
# Setup database
createdb customer_support_bot

# Start server
python app.py
# 🚀 Server running at http://localhost:8000
```
## 📡 API Reference

### Core Endpoints

#### Health Check

```http
GET /mcp/health
```

#### Process Single Query

```http
POST /mcp/process
Content-Type: application/json
X-MCP-Auth: your-token
X-MCP-Version: 1.0

{
  "query": "How do I reset my password?",
  "priority": "high"
}
```

#### Batch Processing

```http
POST /mcp/batch
Content-Type: application/json
X-MCP-Auth: your-token

{
  "queries": [
    "How do I reset my password?",
    "What are your business hours?"
  ]
}
```
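When submitting many questions, a client can split them into batch-sized payloads. A small helper sketch (the per-request batch limit of 10 is an assumption, not a documented server limit):

```python
def chunk_queries(queries, size=10):
    """Split a long list of questions into /mcp/batch-sized payloads.

    The default chunk size is illustrative; check the server's
    configuration for any real batch limit.
    """
    return [queries[i:i + size] for i in range(0, len(queries), size)]
```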
### Response Format

#### Success Response

```json
{
  "status": "success",
  "data": {
    "response": "Generated AI response",
    "confidence": 0.95,
    "processing_time": "120ms"
  },
  "meta": {
    "request_id": "req_123456",
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
#### Error Response

```json
{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Rate limit exceeded",
  "details": {
    "retry_after": 60,
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
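A client can branch on these two payload shapes. A minimal sketch, using only the field names shown in the examples above:

```python
import json

def handle_response(raw: str) -> str:
    """Return the AI answer from a success payload, raise on an error payload."""
    payload = json.loads(raw)
    if payload.get("status") == "success":
        return payload["data"]["response"]
    # Error payloads carry a machine-readable code plus an optional retry hint
    retry = payload.get("details", {}).get("retry_after")
    hint = f" (retry after {retry}s)" if retry else ""
    raise RuntimeError(f"{payload['code']}: {payload['message']}{hint}")
```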
## 🏗️ Architecture

### Project Structure

```
📦 AI-Customer-Support-Bot--MCP-Server
├── 🚀 app.py              # FastAPI application
├── 🗄️ database.py         # Database configuration
├── 🛡️ middleware.py       # Auth & rate limiting
├── 📋 models.py           # ORM models
├── ⚙️ mcp_config.py       # MCP protocol config
├── 📄 requirements.txt    # Dependencies
└── 📝 .env.example        # Environment template
```
### Layer Responsibilities

| Layer | Purpose | Components |
|---|---|---|
| API | HTTP endpoints, validation | FastAPI routes, Pydantic models |
| Middleware | Auth, rate limiting, logging | Token validation, request throttling |
| Service | Business logic, AI integration | Context management, AI orchestration |
| Data | Persistence, models | PostgreSQL, SQLAlchemy ORM |
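The middleware layer's request throttling can be sketched as a fixed-window counter driven by the `RATE_LIMIT_*` settings. This is an illustrative algorithm, not necessarily the one `middleware.py` uses:

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window rate limiter keyed by auth token (illustrative sketch)."""

    def __init__(self, max_requests: int = 100, period: float = 60.0):
        self.max_requests = max_requests
        self.period = period
        self._windows = defaultdict(lambda: [0.0, 0])  # token -> [window_start, count]

    def allow(self, token: str) -> bool:
        now = time.monotonic()
        window = self._windows[token]
        if now - window[0] >= self.period:
            window[0], window[1] = now, 0  # start a fresh window
        if window[1] >= self.max_requests:
            return False  # caller should answer 429 with a retry_after hint
        window[1] += 1
        return True
```

A production multi-instance deployment would keep these counters in Redis instead of process memory (see Scaling Considerations).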
## 🔌 Extending with AI Services

### Add Your AI Provider

1. Install your AI SDK:

   ```bash
   pip install openai  # or anthropic, cohere, etc.
   ```

2. Configure environment:

   ```env
   # Add to .env
   AI_SERVICE_API_KEY=sk-your-api-key
   AI_SERVICE_MODEL=gpt-4
   ```

3. Implement service integration:

   ```python
   # In service layer
   class AIService:
       async def generate_response(self, query: str, context: dict) -> str:
           ai_response = ...  # call your provider's SDK here
           return ai_response
   ```
## 🔧 Development

### Running Tests

```bash
pytest tests/
```

### Code Quality

```bash
# Format code
black .

# Lint
flake8

# Type checking
mypy .
```

### Docker Support

Coming soon: Docker containerization (see the Roadmap below).
## 📊 Monitoring & Observability

### Health Metrics

- ✅ Service uptime
- 🔗 Database connectivity
- 📈 Request rates
- ⏱️ Response times
- 💾 Memory usage
### Logging

Structured JSON logging is included:

```json
{
  "timestamp": "2024-02-14T12:00:00Z",
  "level": "INFO",
  "message": "Query processed",
  "request_id": "req_123456",
  "processing_time": 120
}
```
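Log lines of this shape can be produced with a custom `logging.Formatter`. A sketch of one possible implementation (the server's actual formatter may differ in field names):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON line."""

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Extra fields such as request_id arrive via logging's `extra=` kwarg
        for key in ("request_id", "processing_time"):
            if hasattr(record, key):
                entry[key] = getattr(record, key)
        return json.dumps(entry)
```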
## 🔒 Security

### Built-in Security Features

- 🔐 **Token Authentication** - Secure API access
- 🛡️ **Rate Limiting** - DoS protection
- ✅ **Input Validation** - SQL injection prevention
- 📝 **Audit Logging** - Request tracking
- 🔒 **Environment Secrets** - Secure config management
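For token authentication, comparing the `X-MCP-Auth` value with a timing-safe check avoids leaking the secret through response timing. A minimal sketch of such a check (the actual `middleware.py` logic may differ):

```python
import hmac

def verify_token(provided: str, expected: str) -> bool:
    # hmac.compare_digest runs in constant time, so an attacker cannot
    # recover the token byte-by-byte by measuring response latency
    return hmac.compare_digest(provided.encode(), expected.encode())
```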
## 🚀 Deployment

### Environment Setup

```env
# Production environment variables
DATABASE_URL=postgresql://prod-user:password@prod-host/db
RATE_LIMIT_REQUESTS=1000
LOG_LEVEL=WARNING
```
### Scaling Considerations

- Use connection pooling for the database
- Implement Redis for rate limiting in multi-instance setups
- Add a load balancer for high availability
- Monitor with Prometheus/Grafana
## 🤝 Contributing

We love contributions! Here's how to get started:

### Development Setup

```bash
# Fork the repo, then:
git clone https://github.com/your-username/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create feature branch
git checkout -b feature/amazing-feature

# Make your changes
# ...

# Test your changes
pytest

# Submit PR
```

### Contribution Guidelines

- 📝 Write tests for new features
- 📚 Update documentation
- 🎨 Follow existing code style
- ✅ Ensure CI passes
## 📈 Roadmap

- 🐳 Docker containerization
- 🔄 Redis caching layer
- 📊 Prometheus metrics
- 🔍 OpenTelemetry tracing
- 🌐 GraphQL API
- 📱 WebSocket support
## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

---

Built with ❤️ by Chirag Patankar

⭐ Star this repo if you find it helpful! ⭐