Publishing technical articles with accompanying YouTube content is a four-day process when the stages run sequentially: research blocks writing, writing blocks thumbnail production, and the whole pipeline stalls if any step takes longer than expected. The bottleneck isn't any individual stage — it's the sequencing.
The fix is architectural. Research, writing scaffolding, and thumbnail concepting don't depend on each other in the first three hours. They can run in parallel. This tutorial — part of the OpenClaw Creative & Building use cases — configures three OpenClaw agents across three dedicated Discord channels, each calling a different model tier via WisGate, all under one API key.
By the end of this tutorial you'll have a three-agent content factory running on WisGate — research grounded in live web data, article drafts from a long-context writing agent, and thumbnail concepts at $0.058/image — all coordinated through Discord with a human-readable audit trail at every step. Test each agent's output individually at wisgate.ai/studio/image before connecting the parallel pipeline. Get your key at wisgate.ai/hall/tokens.
AI Content Factory Automation: Why Three Agents Beat One Sequential Pipeline
Three parallel agents, three Discord channels:
| Discord Channel | Agent | Model | Output |
|---|---|---|---|
| #research-queue | Research Agent | claude-sonnet-4-5 + grounding | Sourced research brief |
| #drafts | Writing Agent | claude-sonnet-4-5 | Full article draft in Markdown |
| #assets | Thumbnail Agent | gemini-3.1-flash-image-preview | 16:9 thumbnail concept at $0.058 |
Discord is the coordination layer by design — not by convenience. Each channel is a persistent, human-readable message log. The research agent posts its brief to #research-queue; the writing agent reads it and posts to #drafts; the thumbnail agent reads the approved title from #drafts and posts to #assets. A human can inspect, approve, or redirect at any stage in the same interface the agents use. No state file to maintain, no message queue infrastructure — the Discord message history is the pipeline state.
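Posting agent output into a channel can be done with a plain webhook. A minimal sketch, assuming you create one webhook per channel under Channel Settings → Integrations → Webhooks (the webhook URL below is a placeholder):

```shell
# Hypothetical webhook URL -- replace ID/TOKEN with your #research-queue webhook.
RESEARCH_WEBHOOK="https://discord.com/api/webhooks/ID/TOKEN"

# Demo brief; in the pipeline this file comes from the Research Agent call below.
printf 'Core argument: parallel beats sequential.\n' > /tmp/research_brief.md

# Build the payload with jq so quotes and newlines in the brief are escaped;
# Discord caps a single message at 2000 characters, so slice defensively.
payload=$(jq -n --rawfile brief /tmp/research_brief.md '{content: ($brief | .[0:2000])}')

# Uncomment to post the brief to #research-queue for real:
# curl -s -X POST "$RESEARCH_WEBHOOK" -H "Content-Type: application/json" -d "$payload"
echo "$payload" | jq -r '.content'
```

The same pattern covers #drafts and #assets; only the webhook URL changes.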
Related pipeline patterns: YouTube Content Pipeline tutorial and Goal-Driven Autonomous Tasks.
OpenClaw Multi-Agent Configuration: WisGate Setup
Step 1 — Open the configuration file
OpenClaw stores its configuration in a JSON file in your home directory. Open your terminal and edit:
nano ~/.openclaw/openclaw.json
Step 2 — Add the WisGate provider to your models section
Copy and paste the following into your models section:
"models": {
"mode": "merge",
"providers": {
"moonshot": {
"baseUrl": "https://api.wisgate.ai/v1",
"apiKey": "WISGATE-API-KEY",
"api": "openai-completions",
"models": [
{
"id": "claude-sonnet-4-5",
"name": "Claude Sonnet 4.5",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 256000,
"maxTokens": 8192
}
]
}
}
}
Replace WISGATE-API-KEY with your key from wisgate.ai/hall/tokens. Full model list and pricing: wisgate.ai/models.
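A stray comma or brace in the edited file will stop OpenClaw from loading the provider, so it's worth validating the JSON before restarting. A sketch using jq against a sample copy in /tmp (point CONFIG at ~/.openclaw/openclaw.json on your machine):

```shell
# jq exits non-zero on malformed JSON or a missing path, so this fails loudly.
CONFIG=/tmp/openclaw-sample.json
cat > "$CONFIG" <<'EOF'
{"models":{"mode":"merge","providers":{"moonshot":{"baseUrl":"https://api.wisgate.ai/v1","apiKey":"WISGATE-API-KEY","api":"openai-completions","models":[{"id":"claude-sonnet-4-5"}]}}}}
EOF

# Prints the model id on success; any typo in the models section errors instead.
jq -er '.models.providers.moonshot.models[0].id' "$CONFIG"
```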
Step 3 — Save and restart
Press Ctrl + O, then Enter, to save; Ctrl + X to exit; Ctrl + C to stop the current session, then run openclaw tui.
Create a separate OpenClaw conversation context for each agent and set the model selector to claude-sonnet-4-5 for both the Research and Writing agents. The Thumbnail Agent runs on the Gemini-native endpoint — programmatically or via AI Studio — not through OpenClaw's chat interface.
Note: OpenClaw was previously known as ClawdBot and MoltBot. These steps apply to all versions.
LLM Parallel Agent Pipeline: Research and Writing API Calls
Research Agent — grounding enabled (Gemini-native endpoint):
curl -s -X POST \
"https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent" \
-H "x-goog-api-key: $WISDOM_GATE_KEY" \
-H "Content-Type: application/json" \
-d '{
"contents": [{"parts": [{"text": "Research topic: [INSERT TOPIC]\n\nYou are a content research agent. Produce a sourced research brief: core argument, 4 supporting evidence points, competitor coverage gap, 3 audience questions this piece must answer, and 3 title options (question / how-to / contrarian). Ground all claims in current sources. Flag any unverified claims. Max 500 words. Clean Markdown."}]}],
"tools": [{"google_search": {}}],
"generationConfig": {"responseModalities": ["TEXT"]}
}' | jq -r '.candidates[0].content.parts[0].text' > research_brief.md
Grounding applies only to the Research Agent — it retrieves live web context before the brief is written. Writing and Thumbnail agents do not use grounding.
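A grounded response also carries source metadata alongside the text. A sketch of extracting source links from a saved response, using a synthetic payload here; the field names follow the Gemini groundingMetadata shape, so verify them against your actual response:

```shell
# Synthetic sample of a grounded response (real responses are much larger).
cat > /tmp/research_response.json <<'EOF'
{"candidates":[{"content":{"parts":[{"text":"brief..."}]},"groundingMetadata":{"groundingChunks":[{"web":{"uri":"https://example.com/a","title":"Source A"}},{"web":{"uri":"https://example.com/b","title":"Source B"}}]}}]}
EOF

# Emit a Markdown source list to append to the brief, so every claim
# posted to #research-queue stays auditable.
jq -r '.candidates[0].groundingMetadata.groundingChunks[]?.web | "- [\(.title)](\(.uri))"' \
  /tmp/research_response.json
```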
Writing Agent — reads from #research-queue, posts to #drafts (OpenAI-compatible endpoint):
curl -s -X POST \
"https://api.wisgate.ai/v1/chat/completions" \
-H "Authorization: Bearer $WISDOM_GATE_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-5",
"messages": [
{
"role": "system",
"content": "You are a technical content writer. Given a research brief, produce a full article draft: title, meta description (max 155 chars), opening hook (2–3 sentences, specific problem), 4–6 H2 sections with code examples where relevant, conclusion with one actionable takeaway. Developer-peer tone. No filler. 1500–2000 words. Clean Markdown."
},
{
"role": "user",
"content": "Write the article based on this research brief:\n\n[PASTE BRIEF FROM #research-queue]"
}
],
"max_tokens": 4096
}' | jq -r '.choices[0].message.content' > draft.md
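Rather than hand-pasting the brief into the request body, you can substitute it programmatically. A sketch using jq's --rawfile (jq 1.6+), which reads the file as a single string and escapes quotes and newlines, so a brief containing Markdown can't break the JSON:

```shell
# Demo brief; in the pipeline this is the file written by the Research Agent.
printf '## Core argument\nParallel beats sequential.\n' > /tmp/research_brief.md

payload=$(jq -n --rawfile brief /tmp/research_brief.md '{
  model: "claude-sonnet-4-5",
  messages: [
    {role: "system", content: "You are a technical content writer."},
    {role: "user",
     content: ("Write the article based on this research brief:\n\n" + $brief)}
  ],
  max_tokens: 4096
}')

# Then send it with the same curl as above:
#   curl -s -X POST https://api.wisgate.ai/v1/chat/completions \
#     -H "Authorization: Bearer $WISDOM_GATE_KEY" -H "Content-Type: application/json" \
#     -d "$payload" | jq -r '.choices[0].message.content' > draft.md
echo "$payload" | jq -r '.messages[1].content'
```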
Thumbnail Agent — reads approved title from #drafts, posts to #assets (Gemini-native endpoint):
curl -s -X POST \
"https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent" \
-H "x-goog-api-key: $WISDOM_GATE_KEY" \
-H "Content-Type: application/json" \
-d '{
"contents": [{"parts": [{"text": "YouTube thumbnail for a technical developer blog post. Title: [INSERT APPROVED TITLE]. Style: bold text overlay, dark tech aesthetic, high contrast. Aspect ratio 16:9. No stock photo faces."}]}],
"generationConfig": {
"responseModalities": ["TEXT", "IMAGE"],
"imageConfig": {"aspectRatio": "16:9", "imageSize": "2K"}
}
}' \
| jq -r '.candidates[0].content.parts[] | select(.inlineData) | .inlineData.data' \
| head -1 | base64 --decode > thumbnail_concept.png
Generation takes a consistent ~20 seconds across 0.5K–4K base64 outputs at $0.058/image, which is $0.010 below Google's official $0.068 rate. Confirm Nano Banana 2 pricing at https://wisgate.ai/models.
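A failed image call leaves an empty file or a base64-decoded JSON error body behind, so it's worth checking the PNG signature before posting to #assets. A sketch, demonstrated on a synthetic file (swap in thumbnail_concept.png in the real pipeline):

```shell
# PNG files start with the 8-byte signature 89 50 4E 47 0D 0A 1A 0A;
# checking the first 4 bytes is enough to catch error bodies.
is_png() { [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "89504e47" ]; }

# Synthetic PNG header written with octal escapes (portable printf).
printf '\211PNG\r\n\032\n' > /tmp/sample.png
is_png /tmp/sample.png && echo "valid PNG" || echo "not a PNG -- inspect the raw response"
```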
OpenClaw Use Cases: Cost Per Content Piece
| Agent | Model | Endpoint | Est. Input | Est. Output | Image cost |
|---|---|---|---|---|---|
| Research | Sonnet + grounding | Gemini-native | ~400 tokens | ~800 tokens | — |
| Writing | Sonnet | OpenAI-compatible | ~1,200 tokens | ~3,000 tokens | — |
| Thumbnail | Nano Banana 2 | Gemini-native | ~150 tokens | — | $0.058 |
Cost comparison — confirm all per-token pricing from https://wisgate.ai/models before calculating; insert confirmed figures:
| Strategy | Text model | Per-piece text cost | Thumbnail | Per-piece total |
|---|---|---|---|---|
| Three-model factory | Sonnet × 2 | Confirm + calculate | $0.058 | Calculate |
| Single-model Opus | Opus × 2 | Confirm + calculate | $0.058 | Calculate |
| Saving per piece | — | — | — | Calculate delta |
Research and writing on structured, well-defined tasks don't require Opus-level reasoning. The per-piece saving across 52 content pieces annually compounds into a budget-relevant figure — state the specific dollar amount once pricing is confirmed from wisgate.ai/models.
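Once rates are confirmed, the per-piece and annual figures fall out of simple arithmetic. A sketch of the calculation, where IN_RATE and OUT_RATE are placeholders, not real prices, and the token counts come from the estimates in the table above (400 + 1,200 input, 800 + 3,000 output):

```shell
# PLACEHOLDER rates in $/1M tokens -- replace with confirmed figures
# from wisgate.ai/models before quoting a number.
IN_RATE=3.00 OUT_RATE=15.00 IMAGE=0.058

PER_PIECE=$(awk -v ir="$IN_RATE" -v or="$OUT_RATE" -v img="$IMAGE" \
  'BEGIN { printf "%.4f", (1600*ir + 3800*or)/1e6 + img }')
ANNUAL=$(awk -v p="$PER_PIECE" 'BEGIN { printf "%.2f", p*52 }')
echo "per piece: \$$PER_PIECE   annual (52 pieces): \$$ANNUAL"
```

Re-run the same arithmetic with Opus rates for the two text agents to get the per-piece delta in the comparison table.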
OpenClaw Use Cases: Three Agents, One Key, Zero Sequential Blocking
All three agent configurations are documented above. All three API calls are runnable as-is. One WisGate key covers both the OpenAI-compatible endpoint for text agents and the Gemini-native endpoint for grounding and image generation.
Create the three Discord channels. Paste each system prompt into its OpenClaw conversation context. Validate each agent independently in AI Studio. Then trigger all three in parallel on the next content topic — research, writing scaffolding, and thumbnail concepting all start within the same minute.
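The parallel trigger itself is plain shell job control. A minimal sketch, with stubs standing in for the three curl calls above (each sleeps one second to simulate work):

```shell
# Stubs for the three agent calls; replace with the real curl pipelines.
research()  { sleep 1; echo "brief -> #research-queue"; }
scaffold()  { sleep 1; echo "outline -> #drafts"; }
thumbnail() { sleep 1; echo "concept -> #assets"; }

start=$SECONDS
research & scaffold & thumbnail &   # all three start within the same second
wait                                # blocks until every background job finishes
echo "elapsed: $((SECONDS - start))s (parallel ~1s; sequential would be ~3s)"
```

With real calls, redirect each stub's output to its own file so a human can review all three artifacts before the writing agent consumes the brief.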
The only remaining step is your API key. Get it at wisgate.ai/hall/tokens — trial credits included, one key for all three endpoints. Before triggering the parallel pipeline, validate each agent's output individually at wisgate.ai/studio/image. Start with the Research Agent: paste a topic, enable grounding, and confirm brief quality before connecting the full factory.