🤖 AI Customer Support Bot - MCP Server

A modern, extensible MCP server framework for building AI-powered customer support systems

Features · Quick Start · API Reference · Architecture · Contributing


🌟 Overview

A Model Context Protocol (MCP) compliant server framework built with modern Python. Designed for developers who want to create intelligent customer support systems without vendor lock-in. Clean architecture, battle-tested patterns, and ready for any AI provider.

```mermaid
graph TB
    Client[HTTP Client] --> API[API Server]
    API --> MW[Middleware Layer]
    MW --> SVC[Service Layer]
    SVC --> CTX[Context Manager]
    SVC --> AI[AI Integration]
    SVC --> DAL[Data Access Layer]
    DAL --> DB[(PostgreSQL)]
```

✨ Features

🏗️ Clean Architecture
Layered design with clear separation of concerns

📡 MCP Compliant
Full Model Context Protocol implementation

🔒 Production Ready
Auth, rate limiting, monitoring included

🚀 High Performance
Built on FastAPI with async support

🔌 AI Agnostic
Integrate any AI provider easily

📊 Health Monitoring
Comprehensive metrics and diagnostics

🛡️ Secure by Default
Token auth and input validation

📦 Batch Processing
Handle multiple queries efficiently

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • PostgreSQL
  • Your favorite AI service (OpenAI, Anthropic, etc.)

Installation

```bash
# Clone and setup
git clone https://github.com/ChiragPatankar/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Setup environment
cp .env.example .env
# Edit .env with your configuration
```

Configuration

```env
# .env file
DATABASE_URL=postgresql://user:password@localhost/customer_support_bot
SECRET_KEY=your-super-secret-key
RATE_LIMIT_REQUESTS=100
RATE_LIMIT_PERIOD=60
```
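
The server presumably reads these variables at startup; a minimal loader sketch (variable names taken from the `.env` example above, the fallback values are assumptions):

```python
import os

def load_config(env=None):
    """Read server settings from environment variables, with safe fallbacks."""
    env = os.environ if env is None else env
    return {
        "database_url": env.get("DATABASE_URL", "postgresql://localhost/customer_support_bot"),
        "secret_key": env.get("SECRET_KEY", ""),
        "rate_limit_requests": int(env.get("RATE_LIMIT_REQUESTS", "100")),
        "rate_limit_period": int(env.get("RATE_LIMIT_PERIOD", "60")),
    }
```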

Run

```bash
# Setup database
createdb customer_support_bot

# Start server
python app.py
# 🚀 Server running at http://localhost:8000
```

📡 API Reference

Core Endpoints

Health Check

```http
GET /mcp/health
```

Process Single Query

```http
POST /mcp/process
Content-Type: application/json
X-MCP-Auth: your-token
X-MCP-Version: 1.0

{
  "query": "How do I reset my password?",
  "priority": "high"
}
```
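
From Python, this endpoint can be called with only the standard library; a sketch (the base URL and token are placeholders, the header names follow the request above):

```python
import json
import urllib.request

def build_process_request(query, priority="high", token="your-token",
                          base_url="http://localhost:8000"):
    """Assemble the POST /mcp/process request with the required MCP headers."""
    return urllib.request.Request(
        f"{base_url}/mcp/process",
        data=json.dumps({"query": query, "priority": priority}).encode(),
        headers={
            "Content-Type": "application/json",
            "X-MCP-Auth": token,
            "X-MCP-Version": "1.0",
        },
        method="POST",
    )

def process_query(query, **kwargs):
    """Send the query and parse the JSON response (requires a running server)."""
    with urllib.request.urlopen(build_process_request(query, **kwargs)) as resp:
        return json.loads(resp.read())
```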

Batch Processing

```http
POST /mcp/batch
Content-Type: application/json
X-MCP-Auth: your-token

{
  "queries": [
    "How do I reset my password?",
    "What are your business hours?"
  ]
}
```
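
The batch endpoint can be exercised the same way; a stdlib-only sketch (base URL and token are placeholders):

```python
import json
import urllib.request

def build_batch_request(queries, token="your-token",
                        base_url="http://localhost:8000"):
    """Assemble the POST /mcp/batch request, one entry per customer query."""
    return urllib.request.Request(
        f"{base_url}/mcp/batch",
        data=json.dumps({"queries": list(queries)}).encode(),
        headers={"Content-Type": "application/json", "X-MCP-Auth": token},
        method="POST",
    )
```
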
Response Format

Success Response

```json
{
  "status": "success",
  "data": {
    "response": "Generated AI response",
    "confidence": 0.95,
    "processing_time": "120ms"
  },
  "meta": {
    "request_id": "req_123456",
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```

Error Response

```json
{
  "code": "RATE_LIMIT_EXCEEDED",
  "message": "Rate limit exceeded",
  "details": {
    "retry_after": 60,
    "timestamp": "2024-02-14T12:00:00Z"
  }
}
```
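
Clients can use the `retry_after` field to back off; a small helper sketch (the error shape follows the example above):

```python
def retry_after_seconds(error):
    """Return seconds to wait before retrying, or None if the error is not retryable."""
    if error.get("code") == "RATE_LIMIT_EXCEEDED":
        return float(error.get("details", {}).get("retry_after", 60))
    return None
```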

🏗️ Architecture

Project Structure

```text
📦 AI-Customer-Support-Bot--MCP-Server
├── 🚀 app.py             # FastAPI application
├── 🗄️ database.py        # Database configuration
├── 🛡️ middleware.py      # Auth & rate limiting
├── 📋 models.py          # ORM models
├── ⚙️ mcp_config.py      # MCP protocol config
├── 📄 requirements.txt   # Dependencies
└── 📝 .env.example       # Environment template
```

Layer Responsibilities

| Layer | Purpose | Components |
|-------|---------|------------|
| API | HTTP endpoints, validation | FastAPI routes, Pydantic models |
| Middleware | Auth, rate limiting, logging | Token validation, request throttling |
| Service | Business logic, AI integration | Context management, AI orchestration |
| Data | Persistence, models | PostgreSQL, SQLAlchemy ORM |
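
The middleware layer can be pictured as handlers wrapping the service call. A deliberately simplified, framework-free sketch (the real server uses FastAPI middleware; the header name follows the API reference, and the token value is a placeholder):

```python
def auth_middleware(handler, expected_token):
    """Reject requests whose X-MCP-Auth header does not match the expected token."""
    def wrapped(request):
        if request.get("headers", {}).get("X-MCP-Auth") != expected_token:
            return {"code": "UNAUTHORIZED", "message": "Invalid or missing token"}
        return handler(request)
    return wrapped

def service_handler(request):
    """Stand-in for the service layer: echo the query back."""
    return {"status": "success", "data": {"response": f"echo: {request['body']['query']}"}}

app = auth_middleware(service_handler, expected_token="secret")
```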

🔌 Extending with AI Services

Add Your AI Provider

1. Install your AI SDK:

   ```bash
   pip install openai  # or anthropic, cohere, etc.
   ```

2. Configure the environment:

   ```env
   # Add to .env
   AI_SERVICE_API_KEY=sk-your-api-key
   AI_SERVICE_MODEL=gpt-4
   ```

3. Implement the service integration (a sketch assuming an OpenAI-style async chat client; adapt the call to your provider's SDK):

   ```python
   # In the service layer
   class AIService:
       def __init__(self, client, model):
           self.client = client  # e.g. openai.AsyncOpenAI()
           self.model = model    # e.g. "gpt-4"

       async def generate_response(self, query: str, context: dict) -> str:
           completion = await self.client.chat.completions.create(
               model=self.model,
               messages=[{"role": "user", "content": query}],
           )
           return completion.choices[0].message.content
   ```

🔧 Development

Running Tests

```bash
pytest tests/
```

Code Quality

```bash
# Format code
black .

# Lint
flake8

# Type checking
mypy .
```

Docker Support

Docker containerization is coming soon (see the Roadmap).

📊 Monitoring & Observability

Health Metrics

  • ✅ Service uptime
  • 🔗 Database connectivity
  • 📈 Request rates
  • ⏱️ Response times
  • 💾 Memory usage

Logging

Structured logging is included; each entry is a single JSON object:

```json
{
  "timestamp": "2024-02-14T12:00:00Z",
  "level": "INFO",
  "message": "Query processed",
  "request_id": "req_123456",
  "processing_time": 120
}
```
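
A formatter in that shape can be built with only the standard library; a sketch (field names mirror the sample entry above; `request_id` would be attached via the `extra` argument of a logging call):

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    """Render each log record as a single-line JSON object."""
    def format(self, record):
        return json.dumps({
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(record.created)),
            "level": record.levelname,
            "message": record.getMessage(),
            "request_id": getattr(record, "request_id", None),
        })
```

Attach it with `handler.setFormatter(JsonFormatter())` on whatever handler the app configures.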

🔒 Security

Built-in Security Features

  • 🔐 Token Authentication - Secure API access
  • 🛡️ Rate Limiting - DoS protection
  • ✅ Input Validation - SQL injection prevention
  • 📝 Audit Logging - Request tracking
  • 🔒 Environment Secrets - Secure config management
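
Single-process rate limiting can be as simple as a fixed-window counter; a sketch (the limits would come from `RATE_LIMIT_REQUESTS`/`RATE_LIMIT_PERIOD`; multi-instance deployments need a shared store such as Redis):

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window limiter: allow `limit` requests per `period` seconds per client."""
    def __init__(self, limit=100, period=60):
        self.limit = limit
        self.period = period
        self.windows = defaultdict(lambda: (0.0, 0))  # client -> (window_start, count)

    def allow(self, client, now=None):
        now = time.time() if now is None else now
        start, count = self.windows[client]
        if now - start >= self.period:   # window expired: start a fresh one
            start, count = now, 0
        if count >= self.limit:          # over budget in this window
            return False
        self.windows[client] = (start, count + 1)
        return True
```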

🚀 Deployment

Environment Setup

```env
# Production environment variables
DATABASE_URL=postgresql://prod-user:password@prod-host/db
RATE_LIMIT_REQUESTS=1000
LOG_LEVEL=WARNING
```

Scaling Considerations

  • Use connection pooling for database
  • Implement Redis for rate limiting in multi-instance setups
  • Add load balancer for high availability
  • Monitor with Prometheus/Grafana

🤝 Contributing

We love contributions! Here's how to get started:

Development Setup

```bash
# Fork the repo, then:
git clone https://github.com/your-username/AI-Customer-Support-Bot--MCP-Server.git
cd AI-Customer-Support-Bot--MCP-Server

# Create feature branch
git checkout -b feature/amazing-feature

# Make your changes
# ...

# Test your changes
pytest

# Submit PR
```

Contribution Guidelines

  • 📝 Write tests for new features
  • 📚 Update documentation
  • 🎨 Follow existing code style
  • ✅ Ensure CI passes

📈 Roadmap

  • 🐳 Docker containerization
  • 🔄 Redis caching layer
  • 📊 Prometheus metrics
  • 🔍 OpenTelemetry tracing
  • 🌐 GraphQL API
  • 📱 WebSocket support

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


Built with ❤️ by Chirag Patankar

Star this repo if you find it helpful!

Report Bug · Request Feature · Documentation
