Prediction markets are information markets. The edge belongs to whoever processes more signals, faster, with better synthesis — not whoever clicks the interface first.
Most prediction market participants do their research manually: reading news, checking market prices, estimating probabilities, and deciding position sizes one trade at a time. The process is slow, inconsistent, and impossible to scale across dozens of active markets simultaneously. By the time a manual researcher has synthesized three news sources into a probability estimate for one market, an automated agent has done the same for twenty.
This guide covers the OpenClaw Finance & Trading use case category, with a complete configuration walkthrough for the Polymarket Autopilot — an autonomous agent that monitors prediction markets, researches underlying events, generates probability estimates grounded in current information, and tracks open positions against evolving market prices.
The configuration applies to any prediction market platform with a public API. Polymarket is the reference implementation throughout this guide because it has documented API endpoints and active liquidity — but the architecture pattern transfers directly to Manifold Markets, Kalshi, and similar platforms.
Everything in this guide is for informational and educational purposes only. This content does not constitute financial or investment advice. Prediction market trading carries financial risk. Always conduct your own research and consult a qualified financial professional before making trading decisions.
Before connecting the agent to live markets: Open AI Studio, paste your research agent system prompt, and test it against a real resolved market — one where the outcome is already known. Verify that the probability estimate the agent generates is reasonable relative to what actually happened. This validation step costs nothing and takes under 15 minutes. Get your API key at wisgate.ai/hall/tokens, trial credits included.
What the Polymarket Autopilot Agent Does
Before the architecture and configuration, a concrete picture of what this agent actually produces.
The Polymarket Autopilot is a multi-stage research and monitoring agent. Given a list of active prediction markets, it runs the following pipeline on a configured schedule:
- Market ingestion — fetches active market questions, current YES/NO prices, volume, and closing dates from the Polymarket API
- Event research — for each market, searches for current news, official data sources, and expert commentary relevant to the underlying question
- Probability estimation — synthesizes the research into a calibrated probability estimate with confidence level and key uncertainty factors
- Edge calculation — compares the agent's probability estimate against the current market price to identify potential edges (cases where the agent's estimate differs meaningfully from the market)
- Position tracking — if you hold positions in any monitored markets, tracks their current mark-to-market value and flags approaching closing dates
- Daily briefing — consolidates all of the above into a structured daily report delivered to your chosen output channel
The agent does not place trades. It produces research and decision support — the trading action is always human-initiated. This design is intentional: automated trade execution on prediction markets requires careful legal review that varies by jurisdiction, and the research-only scope keeps the agent architecture significantly simpler.
OpenClaw Configuration
Step 1 — Locate and Open the Configuration File
OpenClaw stores its configuration in a JSON file in your home directory. Open your terminal and edit the file at:
Using nano:
nano ~/.clawdbot/clawdbot.json
Step 2 — Add the WisGate Provider to Your Models Section
Copy and paste the following configuration into the models section of your clawdbot.json. This defines WisGate as a custom provider and registers Claude Opus with your preferred model settings.
"models": {
"mode": "merge",
"providers": {
"moonshot": {
"baseUrl": "https://api.wisgate.ai/v1",
"apiKey": "YOUR-WISGATE-API-KEY",
"api": "openai-completions",
"models": [
{
"id": "claude-opus-4-6",
"name": "Claude Opus 4.6",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 256000,
"maxTokens": 8192
}
]
}
}
}
Note: Replace `YOUR-WISGATE-API-KEY` with your key from wisgate.ai/hall/tokens. The `"mode": "merge"` setting adds WisGate's models alongside your existing providers without replacing them. To add additional models, duplicate the model entry block and update the `"id"` and `"name"` fields with the correct model IDs from wisgate.ai/models.
Step 3 — Save, Exit, and Restart OpenClaw
If using nano:
- Press Ctrl + O to write the file, then press Enter to confirm
- Press Ctrl + X to exit the editor
Restart OpenClaw:
- Press Ctrl + C to stop the current session
- Relaunch with:
openclaw tui
Once restarted, the WisGate provider and your configured Claude models will appear in the model selector.
LLM Financial Analysis Agent: Why This Category Uses Opus Exclusively
The Finance & Trading category is the one where model tier selection has the clearest justification. Research output here is acted upon — directly or indirectly — in decisions that have financial consequences.
The error cost analysis:
In a productivity automation, a bad output means a poorly written email or a missed task. In a financial research agent, a bad output means a misjudged probability estimate that informs a position in a real market. The asymmetry in error cost justifies a single model selection rule for this entire category: claude-opus-4-6 for all reasoning, synthesis, and probability estimation steps.
There is one exception. The market ingestion step — fetching API data and reformatting it for the next stage — is a mechanical parsing task with no synthesis requirement. claude-haiku-4-5-20251001 is appropriate for this step, at a lower per-token cost. Confirm all pricing from https://wisgate.ai/models before calculating production costs.
Why prediction markets specifically:
Prediction markets are an interesting workload for LLM agents because they have a ground truth. Every market resolves to YES or NO. That means agent performance is measurable: over time, you can compare the agent's probability estimates against actual outcomes and calculate calibration error. No other financial research context offers this clean a feedback loop for agent evaluation.
The Four-Agent Pipeline Architecture
The Polymarket Autopilot runs as a four-agent pipeline. Each agent has a defined input, a defined output, and a specific model assignment.
| Agent | Model | Input | Output |
|---|---|---|---|
| Market Fetcher | claude-haiku-4-5-20251001 | Raw Polymarket API response | Structured market list: question, YES price, NO price, volume, closing date |
| Research Agent | claude-opus-4-6 | One market question + search results | Research brief: key facts, uncertainty factors, data quality notes |
| Probability Estimator | claude-opus-4-6 | Research brief + current market price | Probability estimate (%), confidence level, edge vs. market, key assumptions |
| Report Compiler | claude-opus-4-6 | All estimator outputs + position data | Daily briefing: market summaries, edge flags, position tracker, date alerts |
Each agent runs as a sequential OpenClaw conversation. The output of each stage is passed as user input to the next. The pipeline does not require a vector store or persistent memory — state is passed forward explicitly at each handoff.
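The explicit state handoff can be sketched in a few lines of Python. This is an illustrative skeleton, not OpenClaw's actual API: `call_model` stands in for whichever client function you use, and the placeholder prompt strings stand in for the full system prompts given later in this guide.

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    model: str
    system_prompt: str

# Placeholder prompts -- substitute the full system prompts from this guide.
PIPELINE = [
    Stage("market_fetcher", "claude-haiku-4-5-20251001", "You are a data parser..."),
    Stage("research_agent", "claude-opus-4-6", "You are a research analyst..."),
    Stage("probability_estimator", "claude-opus-4-6", "You are a probability estimator..."),
    Stage("report_compiler", "claude-opus-4-6", "You are a briefing compiler..."),
]

def run_pipeline(raw_api_response: str, call_model) -> str:
    # State is passed forward explicitly: each stage's output becomes the
    # next stage's user input -- no vector store, no persistent memory.
    state = raw_api_response
    for stage in PIPELINE:
        state = call_model(model=stage.model,
                           system=stage.system_prompt,
                           user=state)
    return state  # the final daily briefing text
```

In a real deployment, `call_model` would wrap your WisGate-backed OpenClaw conversation; the point of the sketch is that the only shared state is the text passed at each handoff.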
OpenClaw API Finance Automation: System Prompt Design for Each Agent
System prompt quality is the primary determinant of output quality for financial research agents. Each agent below has a complete prompt structure you can adapt directly.
Agent 1 — Market Fetcher
This agent handles a mechanical task: parse the Polymarket API response and return a clean, structured list. Haiku is the correct tier.
You are a data parser for prediction market data.
Input: raw JSON from the Polymarket API containing active market data.
For each market, extract and return:
- market_id
- question (full text)
- yes_price (current, 0.00–1.00)
- no_price (current, 0.00–1.00)
- volume_usd (total traded volume)
- closing_date (ISO 8601)
- category (if available)
Return as a JSON array. No analysis. No commentary. No preamble.
If a field is missing from the API response, return null for that field.
What this prompt enforces: strict structured output with no hallucinated fields, explicit null handling for missing data, and no synthesis that belongs in a later stage.
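If you post-process the fetcher's output in code, a small schema check catches drift early. A minimal sketch, assuming the field names from the prompt above; `validate_fetcher_output` is a hypothetical helper, not part of OpenClaw:

```python
import json

# Fields the Market Fetcher prompt requires on every record.
REQUIRED_FIELDS = ["market_id", "question", "yes_price", "no_price",
                   "volume_usd", "closing_date", "category"]

def validate_fetcher_output(raw: str) -> list[dict]:
    """Parse the fetcher's JSON array and enforce the schema.

    Every record must carry every field; null (None) is allowed,
    matching the prompt's missing-field rule.
    """
    markets = json.loads(raw)
    assert isinstance(markets, list), "fetcher must return a JSON array"
    for m in markets:
        missing = [f for f in REQUIRED_FIELDS if f not in m]
        assert not missing, f"market {m.get('market_id')} missing {missing}"
    return markets
```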
Agent 2 — Research Agent
The Research Agent receives one market question at a time and synthesizes available information into a structured research brief. This is the most input-sensitive agent in the pipeline — the quality and recency of the search results passed in directly determine output quality.
You are a prediction market research analyst.
Your output is informational only and does not constitute financial advice.
Input: a prediction market question and current search results related to the
underlying event.
For each market question, produce a research brief containing:
1. Event summary (2–3 sentences): what is the underlying event, when does it resolve,
and what is the current known state?
2. Key evidence FOR resolution as YES (up to 5 bullet points, with source noted)
3. Key evidence FOR resolution as NO (up to 5 bullet points, with source noted)
4. Uncertainty factors: what information is missing, contested, or likely to change
before the closing date?
5. Data quality note: rate the quality of available information as High / Medium / Low,
with one sentence of justification.
Citation rule: every factual claim must note its source. If a claim cannot be
sourced from the provided search results, do not include it.
Never fabricate sources.
What this prompt enforces: balanced evidence presentation, explicit uncertainty quantification, source citation requirements, and a hard rule against fabricated citations — the same rule used in the Research & Learning category, applied here to financial context.
Agent 3 — Probability Estimator
This is the most consequential agent in the pipeline. It synthesizes the research brief into a probability estimate and calculates the edge versus the current market price.
You are a probability estimation assistant for prediction markets.
Your outputs are for informational purposes only and do not constitute
financial or investment advice.
Input: a research brief from the Research Agent and the current YES price
for this market (expressed as a probability, 0.00–1.00).
Produce a probability estimate containing:
1. Estimated probability of YES resolution (%): your estimate based solely
on the research brief. Express as a percentage (e.g., 67%).
2. Confidence level: High / Medium / Low
- High: strong evidence, low uncertainty, clear resolution criteria
- Medium: mixed evidence or meaningful uncertainty factors
- Low: limited evidence, high uncertainty, or ambiguous resolution criteria
3. Key assumptions: the 2–3 most important assumptions your estimate depends on.
If any assumption is violated, state how the estimate would change.
4. Edge vs. market:
- Market implied probability: [current YES price × 100]%
- Your estimate: [your estimate]%
- Difference: [your estimate minus market] percentage points
   - Flag as EDGE if |difference| > 10 percentage points AND confidence is
     High or Medium
   - Flag as NO EDGE if |difference| ≤ 10 percentage points
   - Flag as LOW CONFIDENCE if confidence is Low, regardless of difference
5. Disclaimer: append "This estimate is for informational purposes only and
does not constitute financial or investment advice." to every output.
Do not recommend a trading action. Do not state whether to buy or sell.
The 10-percentage-point threshold: this is a configurable parameter, not a fixed rule. Adjust it based on your own assessment of meaningful edge in the markets you monitor. Tighter markets may warrant a higher threshold; more illiquid markets may warrant lower. The threshold is a configuration variable in the system prompt — not a financial recommendation.
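The flagging rule is mechanical enough to mirror in code, which is useful if you log estimates outside the agent or want to re-run historical data against a different threshold. A sketch with the threshold as a parameter (function and argument names are illustrative):

```python
def classify_edge(estimate_pct: float, market_yes_price: float,
                  confidence: str, threshold_pp: float = 10.0) -> str:
    """Mirror the Probability Estimator's flagging rule.

    estimate_pct: agent's YES estimate as a percentage (e.g. 67.0)
    market_yes_price: current YES price, 0.00-1.00
    threshold_pp: the configurable edge threshold, in percentage points
    """
    implied_pct = market_yes_price * 100      # market implied probability
    diff_pp = estimate_pct - implied_pct      # signed difference in pp
    if confidence == "Low":
        return "LOW CONFIDENCE"               # regardless of difference
    if abs(diff_pp) > threshold_pp:
        return "EDGE"
    return "NO EDGE"
```

Raising `threshold_pp` makes the flag rarer; lowering it makes it noisier — the same trade-off described above, just made explicit as a parameter.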
Agent 4 — Report Compiler
The Report Compiler receives all Probability Estimator outputs for the day's monitored markets plus any open position data, and generates the daily briefing.
You are a daily briefing compiler for a prediction market research agent.
All content is informational only and does not constitute financial advice.
Input:
- A list of probability estimates from the Probability Estimator (one per market)
- Open position data: market_id, direction (YES/NO), entry price, current price,
quantity
Produce a daily briefing in the following format:
## Prediction Market Daily Briefing — [DATE]
### Markets with Flagged Edge
[List only markets flagged as EDGE. For each: question, your estimate vs. market,
confidence level, key assumption]
### All Monitored Markets
[Full list. For each: question, your estimate, market price, edge flag,
confidence level, closing date]
### Open Positions
[For each position: question, direction, entry price, current price,
unrealized P&L in percentage points, days to closing]
### Closing Soon (Next 7 Days)
[Any market closing within 7 days — flag clearly regardless of edge status]
### Data Quality Flags
[Any market where the Research Agent rated data quality as Low]
---
*This briefing is generated by an automated research agent and is for
informational purposes only. It does not constitute financial or investment advice.
All probability estimates are model-generated and may be incorrect.*
OpenClaw Use Cases: Scheduling and Trigger Design
The Polymarket Autopilot runs on a schedule that balances information freshness against API call cost. Two scheduling patterns are appropriate for different use cases.
Pattern 1 — Daily Morning Briefing (Recommended Starting Point)
Run the full four-agent pipeline once per day, before your active trading session. This pattern minimizes API call volume and is the correct starting point for most developers configuring this agent for the first time.
Recommended trigger: once daily at 07:00 local time.
Pipeline sequence per daily run:
- Fetch current market data for your watchlist (1× Haiku call)
- Research each market question (N× Opus calls, where N = watchlist size)
- Estimate probability for each market (N× Opus calls)
- Compile daily report (1× Opus call)
Total calls per day: 1 Haiku + (2N + 1) Opus, where N is your watchlist size.
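The daily call arithmetic as a one-liner, if you want to script cost checks before changing your watchlist size:

```python
def daily_calls(watchlist_size: int) -> dict[str, int]:
    """Calls per daily run: 1 Haiku fetch, then N research + N estimate
    + 1 compile on Opus, where N is the watchlist size."""
    n = watchlist_size
    return {"haiku": 1, "opus": 2 * n + 1}
```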
Pattern 2 — Event-Triggered Update
For markets approaching their closing date or with significant volume spikes, run an additional single-market update outside the daily schedule. This requires monitoring market data for trigger conditions and is a more advanced configuration.
Trigger conditions:
- Market closing within 48 hours
- Volume spike > 3× 7-day average in a single hour
- New major news article detected for the underlying event
Cost per triggered update: 1 Opus (research) + 1 Opus (estimate) = 2 Opus calls per triggered market.
OpenClaw Use Cases: Model Routing and Cost Reference
| Agent | Model | Calls per Daily Run (10-market watchlist) | Rationale |
|---|---|---|---|
| Market Fetcher | claude-haiku-4-5-20251001 | 1 | Mechanical parsing — no synthesis required |
| Research Agent | claude-opus-4-6 | 10 (one per market) | Source synthesis and uncertainty assessment require highest reliability |
| Probability Estimator | claude-opus-4-6 | 10 (one per market) | Probability calibration; errors have financial consequence |
| Report Compiler | claude-opus-4-6 | 1 | Synthesis across all estimates; output is read and acted upon |
Annual cost projection (10-market watchlist, daily runs):
- Daily Opus calls: 21 (10 research + 10 estimate + 1 compile)
- Daily Haiku calls: 1
- Annual Opus calls: 21 × 365 = 7,665
- Annual Haiku calls: 365
Confirm per-token pricing for both claude-opus-4-6 and claude-haiku-4-5-20251001 from https://wisgate.ai/models. Multiply confirmed per-token rates by your average input and output token length per call to calculate annual cost. For a 10-market watchlist with average research documents of 2,000 tokens each, the Research Agent is the largest cost driver — calculate it first.
Watchlist scaling:
| Watchlist Size | Daily Opus Calls | Annual Opus Calls |
|---|---|---|
| 5 markets | 11 | 4,015 |
| 10 markets | 21 | 7,665 |
| 20 markets | 41 | 14,965 |
| 50 markets | 101 | 36,865 |
Confirm pricing from https://wisgate.ai/models and multiply by your watchlist size before deploying at scale.
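The scaling table reduces to one formula, and the cost projection is one multiplication on top of it. A sketch — every rate value here is a placeholder you must replace with confirmed per-token pricing from wisgate.ai/models:

```python
def annual_opus_calls(watchlist_size: int) -> int:
    # (N research + N estimate + 1 compile) Opus calls per day, 365 days.
    return (2 * watchlist_size + 1) * 365

def annual_opus_cost_usd(watchlist_size: int,
                         avg_input_tokens: int, avg_output_tokens: int,
                         rate_in_per_token: float,
                         rate_out_per_token: float) -> float:
    """Annual Opus cost. The rate arguments are PLACEHOLDERS -- confirm
    real per-token pricing from wisgate.ai/models before budgeting."""
    per_call = (avg_input_tokens * rate_in_per_token
                + avg_output_tokens * rate_out_per_token)
    return annual_opus_calls(watchlist_size) * per_call
```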
OpenClaw Use Cases: Validation Protocol Before Live Monitoring
Before connecting the agent to live Polymarket data and running it on a real watchlist, complete this validation sequence in AI Studio.
Validation Step 1 — Research Agent grounding test
Find a recently resolved Polymarket market (one where the outcome is known). Paste the market question and news articles from the week before resolution as inputs to the Research Agent system prompt. Verify that:
- The research brief is balanced — evidence for both YES and NO is represented
- All factual claims have source notes
- The uncertainty factors section identifies what was actually uncertain
Validation Step 2 — Probability Estimator calibration test
Pass the Research Agent output from Step 1 to the Probability Estimator, along with the market price from the day the research was gathered. Compare the agent's probability estimate against the actual resolution outcome. Repeat this test across 5 resolved markets. The agent's estimates do not need to be perfect — they need to be reasonable and directionally consistent with the research brief.
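A standard way to score Step 2 once you have a handful of resolved markets is the Brier score: the mean squared error between probability estimates and binary outcomes. A minimal sketch:

```python
def brier_score(records: list[tuple[float, int]]) -> float:
    """records: (estimate, outcome) pairs, with estimate in 0.0-1.0
    and outcome 1 for YES resolution, 0 for NO.

    Lower is better; always guessing 50% scores exactly 0.25, so a
    calibrated agent should comfortably beat that baseline."""
    return sum((p - y) ** 2 for p, y in records) / len(records)
```

Five markets is too few for a statistically strong conclusion — the point of this test is catching gross miscalibration, not fine-tuning.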
Validation Step 3 — Edge threshold sensitivity test
Adjust the edge threshold in the Probability Estimator system prompt (the 10-percentage-point default) and observe how many markets get flagged as EDGE across your historical test set. A threshold that flags every market or no markets needs adjustment.
Validation Step 4 — Report Compiler format check
Pass synthetic estimator outputs to the Report Compiler and verify:
- All required sections appear in the output
- The disclaimer appears in the report
- Open position tracking is populated correctly when position data is provided
- Markets flagged as "Closing Soon" are correctly identified
Only proceed to live market monitoring after all four validation steps pass.
OpenClaw Use Cases: Finance & Trading — Limitations and Design Boundaries
This section is not optional reading. Before deploying any financial research agent, understand what this pipeline does not do and cannot do reliably.
What the agent cannot do:
- Real-time monitoring: the pipeline runs on a schedule; it does not stream live market data. Prices between runs are not reflected in the agent's edge calculations.
- Structured data analysis: the Research Agent synthesizes text-based sources. It does not analyze time-series price data, calculate statistical correlations, or run quantitative models.
- Legal compliance review: prediction market regulations vary by jurisdiction. The agent has no mechanism to assess whether a given market or position is permissible in your location.
- Execution: the agent produces research output only. It does not place, modify, or cancel any market positions.
What the agent does well:
- Synthesizing text-based news and commentary into structured research briefs at a pace and scale that manual research cannot match
- Maintaining consistent output format across dozens of market questions per run
- Tracking position mark-to-market and closing dates without manual monitoring
- Flagging where its probability estimate differs materially from the market price — a signal for further human review, not an automated trading signal
The feedback loop that makes this agent improvable:
Because prediction markets resolve to binary outcomes, every estimate the agent produces has a ground truth. Build a simple log: for each market the agent covers, record the estimate at the time, the market price at the time, and the resolution outcome. Over 50–100 markets, this log tells you whether the agent is systematically overconfident or underconfident, and in which market categories. Use that data to refine the Probability Estimator system prompt. This is the capability that distinguishes a research agent from a static analysis template.
OpenClaw Use Cases: Finance & Trading — Where to Start
One complete prediction market research agent configuration for OpenClaw via WisGate. Four agents, one unified key, one base URL, two model tiers.
The minimum viable starting configuration:
Start with a watchlist of 5 markets. Run the daily briefing pattern for two weeks before expanding. Validate the Research Agent's output quality manually for the first 10 markets it covers. Only scale the watchlist after you have confirmed that the agent's research briefs are grounded, balanced, and useful for your decision-making process.
The single most important system prompt rule:
The Probability Estimator's disclaimer — "This estimate is for informational purposes only and does not constitute financial or investment advice" — must appear in every output. Do not remove it. Do not make it optional. It must be in the system prompt as a hard output requirement, not a suggestion.
For the complete 36-case OpenClaw library across all 6 categories, return to the [OpenClaw Use Cases pillar page →].
Start with five markets and one API key. Get your WisGate key at wisgate.ai/hall/tokens — trial credits included, no commitment before your first pipeline run. Open AI Studio, load the Research Agent system prompt, paste a real resolved market question with its pre-resolution news articles, and verify the research brief quality before connecting to live data. The four-agent pipeline is complete and documented above — the only remaining step is selecting your initial five-market watchlist and running the validation protocol. Configure the pipeline, run the validation, and deploy your first daily briefing before your next active trading session.
Confirm current per-token pricing for claude-opus-4-6 and claude-haiku-4-5-20251001 at wisgate.ai/models before relying on any of the cost calculations above. Nothing in this article constitutes financial or investment advice. Prediction market participation carries financial risk and may be subject to legal restrictions that vary by jurisdiction. Readers are responsible for understanding and complying with applicable laws in their location before participating in any prediction market.