# Orchestrator MCP

An intelligent MCP (Model Context Protocol) server that orchestrates multiple MCP servers and provides AI-enhanced workflow automation with production-ready context engine capabilities for codebase analysis.
## 🌟 Features
### Core Orchestration
- Multi-Server Orchestration: Connect to multiple MCP servers simultaneously (see the connection sketch after this list)
- Universal Compatibility: Works with npm, uvx, Python, and other MCP server types
- Server Management: Dynamic server discovery and health monitoring
- Scalable Architecture: Easy to add new servers and capabilities
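
To make the multi-server idea concrete, here is a minimal sketch of opening a single downstream connection with the official `@modelcontextprotocol/sdk`; the orchestrator maintains several such clients side by side. The filesystem server command is just an example, not the orchestrator's actual wiring:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn one downstream MCP server over stdio (an npm-style server, via npx).
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/workspace"],
});

const client = new Client(
  { name: "orchestrator-mcp", version: "1.0.0" },
  { capabilities: {} },
);

await client.connect(transport);

// Discover what the downstream server offers; the orchestrator
// aggregates these tool lists across all connected servers.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```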
### 🧠 AI Enhancement Layer
- Intelligent Tool Routing: AI analyzes requests and selects optimal tools
- Workflow Automation: Multi-step processes orchestrated automatically
- Intent Understanding: Natural language request analysis and planning
- Context Synthesis: Combines results from multiple tools into coherent responses
- Result Enhancement: AI improves and formats outputs for better user experience
### 🎯 Context Engine (PRODUCTION READY!)
- Large Context Analysis: Process 50K+ characters using Gemini's 1M+ token context
- Intelligent Code Understanding: AI-powered codebase analysis with 95% confidence
- Real-time File Discovery: Dynamic file loading and relationship mapping
- Quality Assessment: Identify placeholder vs real implementations
- Performance Optimized: 30s execution time for complex analysis
### Built-in Capabilities
- Web Search: DuckDuckGo search for current information
- Fallback Mode: Graceful degradation when AI is not available
## Current Status
**🎉 PRODUCTION READY - Context Engine Complete!**

- ✅ Context Engine: 85.7% quality score, 95% analysis confidence
- ✅ AI Enhancement Layer: complete, with intelligent routing and workflow automation
- ✅ Multi-Server Orchestration: 6/6 MCP servers connected and functional
## 🚀 Quick Start
1. Install dependencies:

   ```bash
   npm install
   ```

2. Build the project:

   ```bash
   npm run build
   ```

3. Configure the server in your MCP client (e.g., Claude Desktop, VS Code). See the example configuration files in the `examples/` directory:

   - `examples/claude-desktop-config.json` - for Claude Desktop
   - `examples/vscode-mcp.json` - for VS Code

4. Start using the orchestrator through your MCP client!
## MCP Integration
**For a stdio MCP server:**

- Name: `Orchestrator MCP`
- Command: `node`
- Arguments: `/path/to/orchestrator-mcp/dist/index.js`

**For development:**

- Command: `npx`
- Arguments: `orchestrator-mcp` (after publishing to npm)
## 🛠️ Available Tools
### Core AI Enhancement Tools
The orchestrator exposes a minimal set of tools focused on unique capabilities that enhance AI assistants:
- `ai_process` - **Primary interface** - process requests using AI orchestration with intelligent tool selection
- `get_info` - **System introspection** - get information about connected servers and available capabilities
- `ai_status` - **Health monitoring** - get the status of AI orchestration capabilities
### Connected Server Tools
All tools from connected MCP servers are automatically available through AI orchestration:
- Filesystem operations (read, write, search files)
- Git operations (repository management, status, history)
- Memory system (knowledge graph storage)
- Web search (DuckDuckGo search for current information)
- Browser automation (Puppeteer for web scraping and automation)
- Sequential thinking (dynamic problem-solving through thought sequences)
- And more...
## 🔗 Connected Servers
Currently enabled servers:
- `filesystem` (npm) - file operations with secure access controls
- `sequential-thinking` (npm) - dynamic problem-solving through thought sequences
- `git` (uvx) - Git repository tools and operations
- `memory` (npm) - knowledge graph-based persistent memory
- `puppeteer` (npm) - browser automation and web scraping
- `duckduckgo-search` (npm) - privacy-focused web search
## 🤖 AI Configuration
To enable AI features, you need an OpenRouter API key. Additional API keys can be configured for enhanced integrations:
1. Required for AI features: get an API key from [OpenRouter](https://openrouter.ai/).

2. Optional integrations: none currently required - all enabled servers work without additional API keys.

3. Configure the API keys in your MCP client settings:
For Claude Desktop (`~/.claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "Orchestrator MCP": {
      "command": "node",
      "args": ["/path/to/project/dist/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "your_api_key_here",
        "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet",
        "OPENROUTER_MAX_TOKENS": "2000",
        "OPENROUTER_TEMPERATURE": "0.7"
      }
    }
  }
}
```
For VS Code (`.vscode/mcp.json`):

```json
{
  "inputs": [
    {
      "type": "promptString",
      "id": "openrouter-key",
      "description": "OpenRouter API Key",
      "password": true
    }
  ],
  "servers": {
    "Orchestrator MCP": {
      "type": "stdio",
      "command": "node",
      "args": ["/path/to/project/dist/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "${input:openrouter-key}",
        "OPENROUTER_DEFAULT_MODEL": "anthropic/claude-3.5-sonnet",
        "OPENROUTER_MAX_TOKENS": "2000",
        "OPENROUTER_TEMPERATURE": "0.7"
      }
    }
  }
}
```
### AI Models Supported
The orchestrator works with any model available on OpenRouter, including:
- Anthropic Claude (recommended)
- OpenAI GPT models
- Meta Llama models
- Google Gemini models
- And many more!
## 📖 Usage Examples
### 🎯 Context Engine (Production Ready!)
```
# Intelligent Codebase Analysis
{"tool": "ai_process", "arguments": {"request": "Analyze the current intelligence layer implementation. Show me what's actually implemented vs placeholder code"}}
{"tool": "ai_process", "arguments": {"request": "Find all quality assessment code and identify which parts are real vs mock implementations"}}
{"tool": "ai_process", "arguments": {"request": "Analyze the context management capabilities and identify gaps in the current implementation"}}

# Large Context Code Understanding
{"tool": "ai_process", "arguments": {"request": "Load the entire src/intelligence directory and provide a comprehensive analysis of the architecture"}}
{"tool": "ai_process", "arguments": {"request": "Analyze relationships between context engine, AI workflows, and orchestrator components"}}
{"tool": "ai_process", "arguments": {"request": "Identify all placeholder implementations across the codebase and prioritize which to implement first"}}
```
### Primary AI Interface
```
# Code Analysis & Development
{"tool": "ai_process", "arguments": {"request": "Find all TypeScript files with TODO comments and create a summary report"}}
{"tool": "ai_process", "arguments": {"request": "Analyze the codebase architecture and identify potential improvements"}}
{"tool": "ai_process", "arguments": {"request": "Check git status, review recent commits, and summarize changes since last week"}}

# Research & Information Gathering
{"tool": "ai_process", "arguments": {"request": "Search for Next.js 15 new features and create a comparison with version 14"}}
{"tool": "ai_process", "arguments": {"request": "Search for TypeScript 5.3 release notes and extract breaking changes"}}
{"tool": "ai_process", "arguments": {"request": "Research React Server Components best practices and save key insights to memory"}}

# Code Analysis & Quality
{"tool": "ai_process", "arguments": {"request": "Analyze code quality across the project and generate improvement recommendations"}}
{"tool": "ai_process", "arguments": {"request": "Review git history and identify patterns in recent changes"}}

# Complex Multi-Step Workflows
{"tool": "ai_process", "arguments": {"request": "Search for React testing best practices, analyze our current test files, and suggest specific improvements"}}
{"tool": "ai_process", "arguments": {"request": "Search for competitor documentation, compare with our API design, and identify feature gaps"}}
```
### System Introspection
```
# Get server information and capabilities
{"tool": "get_info", "arguments": {}}

# Check AI orchestration health
{"tool": "ai_status", "arguments": {}}
```
### AI-Enhanced Workflows

The `ai_process` tool can handle complex requests like:
- "Analyze my project structure and suggest improvements"
- "Find recent commits and create a summary"
- "Search for TODO comments and organize them by priority"
- "Take a screenshot of the homepage and analyze its performance"
## 🏗️ Architecture
### Multi-Runtime Support
The orchestrator uses a registry-based architecture supporting several runtimes (a sketch of how each maps to a launch command follows the list):
- npm servers: TypeScript/JavaScript servers via npx
- uvx servers: Python servers via uvx
- Built-in tools: Native orchestrator capabilities
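
As a rough illustration (the names below are hypothetical, not the project's actual registry code), each runtime can be reduced to a spawn command:

```typescript
// Hypothetical sketch: how a registry might map a server's runtime
// to the command used to launch it. Not the project's actual code.
type Runtime = "npm" | "uvx" | "builtin";

interface LaunchSpec {
  command: string;
  args: string[];
}

function launchSpecFor(runtime: Runtime, pkg: string, extraArgs: string[] = []): LaunchSpec {
  switch (runtime) {
    case "npm":
      // TypeScript/JavaScript servers are run through npx.
      return { command: "npx", args: ["-y", pkg, ...extraArgs] };
    case "uvx":
      // Python servers are run through uvx.
      return { command: "uvx", args: [pkg, ...extraArgs] };
    case "builtin":
      // Built-in tools run in-process; nothing to spawn.
      throw new Error("built-in tools are not spawned");
  }
}
```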
### AI Enhancement Layer
```text
User Request → Intent Analysis → Tool Selection → Workflow Planning → Execution → Result Synthesis
```
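
In code terms, the pipeline can be pictured as a chain of async stages. The stage names below mirror the diagram, but the functions themselves are hypothetical stubs, not the actual implementation:

```typescript
// Hypothetical sketch of the pipeline stages; stubs stand in for the
// real AI calls. Names mirror the diagram, not the actual source.
interface Intent { goal: string }
interface Step { tool: string; args: Record<string, unknown> }

const analyzeIntent = async (request: string): Promise<Intent> => ({ goal: request });
const selectTools = async (_intent: Intent): Promise<string[]> => ["search_files"];
const planWorkflow = async (intent: Intent, tools: string[]): Promise<Step[]> =>
  tools.map((tool) => ({ tool, args: { query: intent.goal } }));
const executePlan = async (steps: Step[]): Promise<string[]> =>
  steps.map((s) => `result of ${s.tool}`);
const synthesize = async (results: string[]): Promise<string> => results.join("\n");

async function aiProcess(request: string): Promise<string> {
  const intent = await analyzeIntent(request);      // Intent Analysis
  const tools = await selectTools(intent);          // Tool Selection
  const plan = await planWorkflow(intent, tools);   // Workflow Planning
  const results = await executePlan(plan);          // Execution
  return synthesize(results);                       // Result Synthesis
}
```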
## ⚙️ Configuration
### Server Configuration
Server configurations are managed in `src/orchestrator/server-configs.ts`. Each server entry includes the following (an illustrative sketch follows the list):
- Runtime environment (npm, uvx, python, etc.)
- Command and arguments
- Environment requirements
- Enable/disable status
- Development phase assignment
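
Purely as illustration (the field names below are assumptions, and may differ from the actual interface in `src/orchestrator/server-configs.ts`), an entry might look like:

```typescript
// Hypothetical shape of a server config entry; field names are
// illustrative and may differ from src/orchestrator/server-configs.ts.
interface ServerConfig {
  runtime: "npm" | "uvx" | "python"; // runtime environment
  command: string;                   // executable to launch
  args: string[];                    // command arguments
  env?: string[];                    // required environment variables
  enabled: boolean;                  // enable/disable status
  phase: number;                     // development phase assignment
}

const gitServer: ServerConfig = {
  runtime: "uvx",
  command: "uvx",
  args: ["mcp-server-git"],
  enabled: true,
  phase: 1,
};
```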
### Environment Variables
All environment variables are configured through your MCP client settings. The following variables are supported:
**AI Configuration (OpenRouter)** (see the sketch after this list for how defaults might be applied):

- `OPENROUTER_API_KEY` (required for AI features) - your OpenRouter API key
- `OPENROUTER_DEFAULT_MODEL` (optional) - default model to use (default: `anthropic/claude-3.5-sonnet`)
- `OPENROUTER_MAX_TOKENS` (optional) - maximum tokens per request (default: `2000`)
- `OPENROUTER_TEMPERATURE` (optional) - temperature for AI responses (default: `0.7`)
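
The following is a minimal sketch (not the project's actual code) of how these variables could be read with the documented defaults, falling back to non-AI mode when the key is absent:

```typescript
// Hypothetical sketch of reading the documented variables with their
// defaults; not the orchestrator's actual configuration code.
const aiConfig = {
  apiKey: process.env.OPENROUTER_API_KEY,            // required for AI features
  model: process.env.OPENROUTER_DEFAULT_MODEL ?? "anthropic/claude-3.5-sonnet",
  maxTokens: Number(process.env.OPENROUTER_MAX_TOKENS ?? "2000"),
  temperature: Number(process.env.OPENROUTER_TEMPERATURE ?? "0.7"),
};

if (!aiConfig.apiKey) {
  // Without a key the orchestrator degrades gracefully to fallback mode.
  console.warn("OPENROUTER_API_KEY not set; AI features disabled");
}
```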
MCP Server Integrations: All currently enabled MCP servers work without additional API keys or configuration.
## 🔧 Development
### Scripts
- `npm run build` - build the project
- `npm run dev` - watch mode for development (TypeScript compilation)
- `npm run start` - start the server (for MCP client use)
- `npm run start:dev` - start with `.env` file support (for local development/testing)
- `npm test` - run tests (when available)
### Local Development
For local development and testing, you can use the development script, which loads environment variables from a `.env` file:

1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` with your actual API keys.

3. Run the development server:

   ```bash
   npm run start:dev
   ```
**Note:** The regular `npm start` command is intended for MCP client use and expects environment variables to be provided by the MCP client configuration.
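
For reference, a local `.env` might contain the variables documented above (values are placeholders, not real keys):

```bash
# .env - local development only; values are placeholders
OPENROUTER_API_KEY=your_api_key_here
OPENROUTER_DEFAULT_MODEL=anthropic/claude-3.5-sonnet
OPENROUTER_MAX_TOKENS=2000
OPENROUTER_TEMPERATURE=0.7
```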
## 📝 License
MIT