# AI Customer Support Bot - MCP Server

A modern, extensible MCP server framework for building AI-powered customer support systems.

Features • Quick Start • API Reference • Architecture • Contributing
## Overview
A Model Context Protocol (MCP) compliant server framework built with modern Python. Designed for developers who want to create intelligent customer support systems without vendor lock-in. Clean architecture, battle-tested patterns, and ready for any AI provider.
```mermaid
graph TB
    Client[HTTP Client] --> API[API Server]
    API --> MW[Middleware Layer]
    MW --> SVC[Service Layer]
    SVC --> CTX[Context Manager]
    SVC --> AI[AI Integration]
    SVC --> DAL[Data Access Layer]
    DAL --> DB[(PostgreSQL)]
```
## Features

| | |
|---|---|
| **Clean Architecture** | **MCP Compliant** |
| **Production Ready** | **High Performance** |
| **AI Agnostic** | **Health Monitoring** |
| **Secure by Default** | **Batch Processing** |
## Quick Start

### Prerequisites
- Python 3.8+
- PostgreSQL
- Your favorite AI service (OpenAI, Anthropic, etc.)
### Installation

```bash
# Clone and set up
git clone https://github.com/ChiragPatankar/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Set up the environment
cp .env.example .env
# Edit .env with your configuration
```
### Configuration

```bash
# .env file
DATABASE_URL=postgresql://user:password@localhost/customer_support_bot
SECRET_KEY=your-super-secret-key
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_PERIOD=60
```
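As a hedged illustration of how these settings might be consumed, here is a minimal sketch that parses the same variables from a plain dict. The `load_settings` helper is hypothetical, not part of the project; the real app may read `os.environ` directly or use python-dotenv.

```python
# Hypothetical settings loader; field names mirror the .env example above.
def load_settings(env: dict) -> dict:
    return {
        "database_url": env["DATABASE_URL"],
        "secret_key": env["SECRET_KEY"],
        # Rate-limit values arrive as strings and must be cast to int.
        "rate_limit_requests": int(env.get("RATE_LIMIT_REQUESTS", "100")),
        "rate_limit_period": int(env.get("RATE_LIMIT_PERIOD", "60")),
    }

settings = load_settings({
    "DATABASE_URL": "postgresql://user:password@localhost/customer_support_bot",
    "SECRET_KEY": "your-super-secret-key",
    "RATE_LIMIT_REQUESTS": "100",
    "RATE_LIMIT_PERIOD": "60",
})
```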
### Run

```bash
# Create the database
createdb customer_support_bot

# Start the server
python app.py
# Server running at http://localhost:8000
```
## API Reference

### Core Endpoints

#### Health Check

```http
GET /mcp/health
```
#### Process Single Query

```http
POST /mcp/process
Content-Type: application/json
X-MCP-Auth: your-token
X-MCP-Version: 1.0

{
  "query": "How do I reset my password?",
  "priority": "high"
}
```
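The same request can be assembled from Python. This is a client-side sketch only: the base URL and token are placeholders, and the commented-out `requests.post` line assumes the server above is running.

```python
import json

# Placeholder values, not part of the project.
BASE_URL = "http://localhost:8000"
headers = {
    "Content-Type": "application/json",
    "X-MCP-Auth": "your-token",
    "X-MCP-Version": "1.0",
}
payload = {"query": "How do I reset my password?", "priority": "high"}
body = json.dumps(payload)

# With the server running, an HTTP client such as `requests` could send:
# requests.post(f"{BASE_URL}/mcp/process", headers=headers, data=body)
```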
#### Batch Processing

```http
POST /mcp/batch
Content-Type: application/json
X-MCP-Auth: your-token

{
  "queries": [
    "How do I reset my password?",
    "What are your business hours?"
  ]
}
```
### Response Format

#### Success Response

```json
{
  "status": "success",
  "data": {
    "response": "Generated AI response",
    "confidence": 0.95,
    "processing_time": "120ms"
  },
  "meta": {
    "request_id": "req_123456",
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
#### Error Response

```json
{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Rate limit exceeded",
  "details": {
    "retry_after": 60,
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
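A client can use the `retry_after` hint to back off. The sketch below parses the documented error shape; `backoff_seconds` is a hypothetical helper, not an API the server provides.

```python
import json

# The error body exactly as documented above.
error = json.loads("""{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Rate limit exceeded",
  "details": {"retry_after": 60, "timestamp": "2024-02-14T12:00:00Z"}
}""")

def backoff_seconds(err: dict, default: int = 30) -> int:
    # Prefer the server-provided hint; fall back to a fixed delay.
    if err.get("code") == "RATE_LIMIT_EXCEEDED":
        return int(err.get("details", {}).get("retry_after", default))
    return default

wait = backoff_seconds(error)  # 60, per the example above
```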
## Architecture

### Project Structure

```
AI-Customer-Support-Bot--MCP-Server
├── app.py              # FastAPI application
├── database.py         # Database configuration
├── middleware.py       # Auth & rate limiting
├── models.py           # ORM models
├── mcp_config.py       # MCP protocol config
├── requirements.txt    # Dependencies
└── .env.example        # Environment template
```
### Layer Responsibilities
| Layer | Purpose | Components |
|---|---|---|
| API | HTTP endpoints, validation | FastAPI routes, Pydantic models |
| Middleware | Auth, rate limiting, logging | Token validation, request throttling |
| Service | Business logic, AI integration | Context management, AI orchestration |
| Data | Persistence, models | PostgreSQL, SQLAlchemy ORM |
## Extending with AI Services

### Add Your AI Provider

1. Install your AI SDK:

   ```bash
   pip install openai  # or anthropic, cohere, etc.
   ```

2. Configure the environment:

   ```bash
   # Add to .env
   AI_SERVICE_API_KEY=sk-your-api-key
   AI_SERVICE_MODEL=gpt-4
   ```

3. Implement the service integration:

   ```python
   # In the service layer
   class AIService:
       async def generate_response(self, query: str, context: dict) -> str:
           # Your AI integration here
           ...
   ```
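One provider-agnostic way to fill in that stub is to inject the vendor client behind a small interface. This is a sketch under assumptions: `Provider` and `EchoProvider` are illustrative names, and a real integration would call the vendor SDK inside `complete`.

```python
import asyncio
from typing import Protocol

class Provider(Protocol):
    async def complete(self, prompt: str) -> str: ...

class EchoProvider:
    # Stand-in for a real SDK client (OpenAI, Anthropic, ...).
    async def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

class AIService:
    def __init__(self, provider: Provider) -> None:
        self.provider = provider

    async def generate_response(self, query: str, context: dict) -> str:
        # Fold conversation context into the prompt before calling the provider.
        prompt = f"{context.get('history', '')}\nUser: {query}".strip()
        return await self.provider.complete(prompt)

service = AIService(EchoProvider())
reply = asyncio.run(service.generate_response("How do I reset my password?", {}))
```

Swapping providers then only means passing a different `Provider` implementation; the service layer stays unchanged.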
## Development

### Running Tests

```bash
pytest tests/
```

### Code Quality

```bash
# Format code
black .

# Lint
flake8

# Type checking
mypy .
```

### Docker Support

Docker containerization is coming soon.
## Monitoring & Observability

### Health Metrics

- Service uptime
- Database connectivity
- Request rates
- Response times
- Memory usage

### Logging

Structured logging is included. Example log entry:

```json
{
  "timestamp": "2024-02-14T12:00:00Z",
  "level": "INFO",
  "message": "Query processed",
  "request_id": "req_123456",
  "processing_time": 120
}
```
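Log lines in that shape can be produced with only the standard library. This is a sketch, not the project's actual logger: the `JsonFormatter` class is hypothetical, and the timestamp formatting is illustrative.

```python
import io
import json
import logging

class JsonFormatter(logging.Formatter):
    # Emit one JSON object per record, mirroring the documented fields.
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "timestamp": self.formatTime(record, "%Y-%m-%dT%H:%M:%SZ"),
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
            "processing_time": getattr(record, "processing_time", None),
        })

stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("mcp-example")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Extra fields ride along on the LogRecord via `extra`.
logger.info("Query processed", extra={"request_id": "req_123456", "processing_time": 120})
entry = json.loads(stream.getvalue())
```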
## Security

### Built-in Security Features

- **Token Authentication** - secure API access
- **Rate Limiting** - DoS protection
- **Input Validation** - SQL injection prevention
- **Audit Logging** - request tracking
- **Environment Secrets** - secure config management
## Deployment

### Environment Setup

```bash
# Production environment variables
DATABASE_URL=postgresql://prod-user:password@prod-host/db
RATE_LIMIT_REQUESTS=1000
LOG_LEVEL=WARNING
```
### Scaling Considerations

- Use connection pooling for the database
- Use Redis-backed rate limiting in multi-instance deployments
- Add a load balancer for high availability
- Monitor with Prometheus/Grafana
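To make the rate-limiting point concrete, here is a minimal single-process sliding-window limiter enforcing RATE_LIMIT_REQUESTS per RATE_LIMIT_PERIOD. It is a sketch, not the project's middleware; as the scaling notes say, a multi-instance deployment would keep this state in Redis instead.

```python
import time
from collections import deque
from typing import Optional

class SlidingWindowLimiter:
    def __init__(self, limit: int, period: float) -> None:
        self.limit = limit
        self.period = period
        self.hits = deque()  # timestamps of accepted requests

    def allow(self, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that fell out of the window.
        while self.hits and now - self.hits[0] >= self.period:
            self.hits.popleft()
        if len(self.hits) < self.limit:
            self.hits.append(now)
            return True
        return False

# 3 requests per 60 seconds; the 4th is rejected, and capacity
# returns once old timestamps age out of the window.
limiter = SlidingWindowLimiter(limit=3, period=60.0)
results = [limiter.allow(now=t) for t in (0.0, 1.0, 2.0, 3.0, 61.0)]
```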
## Contributing

We love contributions! Here's how to get started:

### Development Setup

```bash
# Fork the repo, then:
git clone https://github.com/your-username/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create a feature branch
git checkout -b feature/amazing-feature

# Make your changes
# ...

# Test your changes
pytest

# Submit a PR
```
### Contribution Guidelines

- Write tests for new features
- Update documentation
- Follow the existing code style
- Ensure CI passes
## License

This project is licensed under the MIT License - see the LICENSE file for details.

Built with ❤️ by Chirag Patankar

⭐ Star this repo if you find it helpful! ⭐

Report Bug • Request Feature • Documentation
