# mcp-server-ollama-deep-researcher
This is a Model Context Protocol (MCP) server adaptation of LangChain's Ollama Deep Researcher. It exposes the deep research capabilities as MCP tools, allowing AI assistants in the MCP ecosystem to perform in-depth research on topics locally via Ollama.
## Ollama Deep Researcher DXT Extension
### Overview
Ollama Deep Researcher is a Desktop Extension (DXT) that enables advanced topic research using web search and LLM synthesis, powered by a local MCP server. It supports configurable research parameters, status tracking, and resource access, and is designed for seamless integration with the DXT ecosystem.
- Research any topic using web search APIs and LLMs (Ollama, DeepSeek, etc.)
- Configure max research loops, LLM model, and search API
- Track status of ongoing research
- Access research results as resources via MCP protocol
### Features
- Implements the MCP protocol over stdio for local, secure operation
- Defensive programming: error handling, timeouts, and validation
- Logging and debugging via stderr
- Compatible with DXT host environments
### Directory Structure
```
.
├── manifest.json            # DXT manifest (see MANIFEST.md for spec)
├── src/
│   ├── index.ts             # MCP server entrypoint (Node.js, stdio transport)
│   └── assistant/           # Python research logic
│       └── run_research.py
├── README.md                # This documentation
└── ...
```
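For orientation, the server in `src/index.ts` follows the usual MCP stdio pattern: advertise the tools, handle `tools/call`, and return results as structured content. The sketch below is illustrative only; it assumes the official `@modelcontextprotocol/sdk` package and a hypothetical `runResearch` helper that shells out to `src/assistant/run_research.py`, so the real entrypoint may differ in detail.

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

// Hypothetical helper that spawns src/assistant/run_research.py and returns its output.
import { runResearch } from "./research.js";

const server = new Server(
  { name: "ollama-deep-researcher", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise the research tool (get_status and configure would be listed the same way).
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "research",
      description: "Research a topic using web search and LLM synthesis",
      inputSchema: {
        type: "object",
        properties: { topic: { type: "string" } },
        required: ["topic"],
      },
    },
  ],
}));

// Dispatch tool calls and return structured text content.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "research") {
    const topic = String(request.params.arguments?.topic ?? "");
    const summary = await runResearch(topic);
    return { content: [{ type: "text", text: summary }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// All MCP traffic goes over stdio; logs must go to stderr only.
await server.connect(new StdioServerTransport());
```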
### Installation & Setup
1. Clone the repository and install dependencies:

   ```bash
   git clone <your-repo-url>
   cd mcp-server-ollama-deep-researcher
   npm install
   ```

2. Install Python dependencies for the assistant:

   ```bash
   cd src/assistant
   pip install -r requirements.txt  # or use pyproject.toml/uv if preferred
   ```

3. Set the required environment variables for web search APIs:

   - For Tavily: `TAVILY_API_KEY`
   - For Perplexity: `PERPLEXITY_API_KEY`

   Example:

   ```bash
   export TAVILY_API_KEY=your_tavily_key
   export PERPLEXITY_API_KEY=your_perplexity_key
   ```
4. Build the TypeScript server (if needed):

   ```bash
   npm run build
   ```

5. Run the extension locally for testing (a sample host configuration is shown below):

   ```bash
   node dist/index.js
   # Or use the DXT host to load the extension per DXT documentation
   ```
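If you prefer to run it as a plain MCP server rather than through a DXT host, most MCP clients accept a command-based server entry. The snippet below is a sketch for Claude Desktop's `claude_desktop_config.json`; the server name, path, and keys are placeholders to adapt to your setup.

```json
{
  "mcpServers": {
    "ollama-deep-researcher": {
      "command": "node",
      "args": ["/absolute/path/to/mcp-server-ollama-deep-researcher/dist/index.js"],
      "env": {
        "TAVILY_API_KEY": "your_tavily_key",
        "PERPLEXITY_API_KEY": "your_perplexity_key"
      }
    }
  }
}
```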
### Usage

- Research a topic:
  - Use the `research` tool with `{ "topic": "Your subject" }`
- Get research status:
  - Use the `get_status` tool
- Configure research parameters:
  - Use the `configure` tool with any of `maxLoops`, `llmModel`, `searchApi` (see the example calls below)
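Your MCP host normally constructs these requests for you, but for reference, the raw `tools/call` messages look roughly like the following. The parameter values are arbitrary examples, not project defaults.

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "configure",
    "arguments": { "maxLoops": 3, "llmModel": "llama3.1", "searchApi": "tavily" }
  }
}
```

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "research",
    "arguments": { "topic": "Quantum error correction" }
  }
}
```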
### Manifest

See `manifest.json` for the full DXT manifest, including tool schemas and resource templates. It follows the DXT MANIFEST.md specification.
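For orientation only, a DXT manifest for this kind of server typically looks something like the sketch below. The field names and values here are assumptions based on the general DXT manifest format, not a copy of this repo's file; treat `manifest.json` and MANIFEST.md as authoritative.

```json
{
  "dxt_version": "0.1",
  "name": "ollama-deep-researcher",
  "version": "0.1.0",
  "description": "Research topics with web search and local LLM synthesis via Ollama",
  "server": {
    "type": "node",
    "entry_point": "dist/index.js",
    "mcp_config": {
      "command": "node",
      "args": ["${__dirname}/dist/index.js"]
    }
  },
  "tools": [
    { "name": "research", "description": "Research a topic" },
    { "name": "get_status", "description": "Get the status of ongoing research" },
    { "name": "configure", "description": "Set maxLoops, llmModel, and searchApi" }
  ]
}
```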
### Logging & Debugging

- All server logs and errors are output to `stderr` for debugging.
- Research subprocesses are killed after 5 minutes to prevent hangs (see the sketch below).
- Invalid requests and configuration errors return clear, structured error messages.
### Security & Best Practices
- All tool schemas are validated before execution.
- API keys are required for web search APIs and are never logged.
- MCP protocol is used over stdio for local, secure communication.
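For the schema-validation point above, a minimal guard in front of the `configure` tool could look like the following. This is a hand-rolled check for illustration only; the actual server may validate differently, for example via a schema library.

```typescript
// Illustrative argument guard for the configure tool.
interface ConfigureArgs {
  maxLoops?: number;
  llmModel?: string;
  searchApi?: "tavily" | "perplexity";
}

function parseConfigureArgs(raw: unknown): ConfigureArgs {
  if (typeof raw !== "object" || raw === null) {
    throw new Error("configure: arguments must be an object");
  }
  const args = raw as Record<string, unknown>;
  const out: ConfigureArgs = {};

  if (args.maxLoops !== undefined) {
    if (typeof args.maxLoops !== "number" || args.maxLoops < 1) {
      throw new Error("configure: maxLoops must be a positive number");
    }
    out.maxLoops = args.maxLoops;
  }
  if (args.llmModel !== undefined) {
    if (typeof args.llmModel !== "string") {
      throw new Error("configure: llmModel must be a string");
    }
    out.llmModel = args.llmModel;
  }
  if (args.searchApi !== undefined) {
    if (args.searchApi === "tavily" || args.searchApi === "perplexity") {
      out.searchApi = args.searchApi;
    } else {
      throw new Error("configure: searchApi must be 'tavily' or 'perplexity'");
    }
  }
  return out;
}
```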
### Testing & Validation
- Validate the extension by loading it in a DXT-compatible host.
- Ensure all tool calls return valid, structured JSON responses.
- Check that the manifest loads and the extension registers as a DXT.
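For reference, a successful tool call should come back as an MCP result carrying a content array, along these lines (the text is a made-up placeholder):

```json
{
  "content": [
    {
      "type": "text",
      "text": "Summary of findings on the requested topic..."
    }
  ],
  "isError": false
}
```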
### Troubleshooting

- Missing API key: ensure `TAVILY_API_KEY` or `PERPLEXITY_API_KEY` is set in your environment.
- Python errors: check Python dependencies and the logs on `stderr`.
- Timeouts: research subprocesses are limited to 5 minutes.
### References
© 2025 Your Name or Organization. Licensed under MIT.