# hpc-mcp :zap::computer:
This project provides Model Context Protocol (MCP) tools for High Performance Computing (HPC), designed to integrate with Large Language Models (LLMs) for debugging and other HPC tasks. The initial focus is on LLMs called from IDEs such as Cursor and VSCode.
## Quick Start Guide :rocket:
This project uses `uv` for dependency management and installation. If you don't have `uv` installed, follow the installation instructions on its website.

Once `uv` is installed, you can install the dependencies and run the tests with:

```sh
uv run --dev pytest
```
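To check that the server starts outside of an IDE, you can also run it directly. This assumes the entry point is `src/debug.py`, the same script used in the IDE configurations below:

```sh
uv --directory <path/to>/hpc-mcp run src/debug.py
```

The server communicates over stdio, so it will simply wait for an MCP client to connect; stop it with Ctrl+C.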
## Adding the MCP Server
### Cursor
- Open Cursor and go to settings.
- Click `Tools & Integrations`.
- Click `Add Custom MCP`.

  > [!NOTE]
  > This will open your system-wide MCP settings (`$HOME/.cursor/mcp.json`). If you prefer to set this on a project-by-project basis, you can create a local configuration in `<path/to/project/root>/.cursor/mcp.json`.

- Add the following configuration:
```json
{
  "mcpServers": {
    "hpc-mcp": {
      "command": "uv",
      "args": [
        "--directory",
        "<path/to>/hpc-mcp",
        "run",
        "src/debug.py"
      ]
    }
  }
}
```
### VSCode
- Open the command palette (`Ctrl+Shift+P`) and select `MCP: Add Server...`.
- Choose the option `command (stdio)`, since the server will be run locally.
- Type the command to run the MCP server:
  `uv --directory <path/to>/hpc-mcp run src/debug.py`
- Select a reasonable name for the server, e.g. "HpcMcp" (camel case is a convention).
- Select whether to add the server locally or globally.
- You can tune the settings by opening `settings.json` (global settings) or `.vscode/settings.json` (workspace settings).
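For reference, the server entry VSCode generates should look roughly like the following. The exact file and field names depend on your VSCode version, so treat this as a sketch rather than something to copy verbatim:

```json
{
  "servers": {
    "HpcMcp": {
      "type": "stdio",
      "command": "uv",
      "args": [
        "--directory",
        "<path/to>/hpc-mcp",
        "run",
        "src/debug.py"
      ]
    }
  }
}
```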
### Zed
- Open Zed and go to settings.
- Open the general settings (`CTRL-ALT-C`).
- Under the section `Model Context Protocol (MCP) Servers`, click `Add Custom Server`.
- Add the following text (changing `<path/to>/hpc-mcp` to your actual path):
```json
{
  /// The name of your MCP server
  "hpc-mcp": {
    /// The command which runs the MCP server
    "command": "uv",
    /// The arguments to pass to the MCP server
    "args": [
      "--directory",
      "<path/to>/hpc-mcp",
      "run",
      "src/debug.py"
    ],
    /// The environment variables to set
    "env": {}
  }
}
```
## Test the MCP Server
Test the MCP server using our simple example:

- Open a terminal and `cd example/simple`.
- Build the example using `make`; this should generate `segfault.exe`.
- Type the following prompt into your IDE's LLM agent:
  "debug a crash in the program examples/simple/segfault.exe"
- The agent should ask your permission to run the `debug_crash` MCP tool. Accept, and you should get a response describing the crash.
## Running local LLMs with Ollama
To run the `hpc-mcp` MCP tool with a local Ollama model, use the Zed text editor. Zed should automatically detect locally running Ollama models and make them available. As long as you have installed the `hpc-mcp` MCP server in Zed (see the instructions above), it should be available to your models. For more information on Ollama integration with Zed, see Zed's documentation.
> [!NOTE]
> Not all models support calling MCP tools. I had success with `qwen3:latest`.
## Core Dependencies

- python
- uv
- fastmcp
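Because the server is built on `fastmcp`, a minimal tool server follows the usual FastMCP pattern. The sketch below is purely illustrative: it assumes a tool named `debug_crash` like the one referenced above, but it is not the actual implementation in `src/debug.py`:

```python
# Illustrative sketch only -- not the real src/debug.py.
# Shows the typical FastMCP pattern for exposing an MCP tool over stdio.
from fastmcp import FastMCP

mcp = FastMCP("hpc-mcp")


@mcp.tool()
def debug_crash(executable: str) -> str:
    """Hypothetical stub: analyse a crashing executable and return a report."""
    return f"Debug report for {executable} would be produced here."


if __name__ == "__main__":
    # stdio is the default transport, matching the IDE configurations above.
    mcp.run()
```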