The Creative & Building category produces artifacts — code you can run, content you can ship, commits you can push. The output is a deliverable, not a digest.
That distinction changes the architecture. Social media automations run one agent against one input and return text. Creative pipelines run multiple agents against evolving context, produce intermediate artifacts, and persist state across multi-hour runs that outlast a single context window. Getting this right requires a different configuration approach — and a different model routing strategy.
OpenClaw (previously known as ClawdBot and MoltBot) enables this through multi-agent orchestration: one agent plans, others execute in sequence or parallel, and a state file written to disk persists work across context resets and overnight runs. The 4 cases this page covers:
- Goal-Driven Autonomous Tasks — brain-dump goals; agent plans, schedules, and builds overnight
- YouTube Content Pipeline — topic scouting, research compilation, and brief generation
- Multi-Agent Content Factory — research, writing, and thumbnail agents in Discord channels
- Autonomous Game Dev Pipeline — backlog to git commit with "Bugs First" policy enforcement
The WisGate advantage is concrete here: these pipelines route planning steps to Claude Opus, execution steps to Sonnet, mechanical output steps to Haiku, and image generation to Nano Banana 2 — all under one key, one base URL, one billing account. No multi-provider key management. No separate Anthropic and Google accounts.
Validate your first pipeline agent before wiring the rest. Open AI Studio and test your planner system prompt — paste your goal list or backlog and confirm the task decomposition output is structured and actionable before connecting executor agents. Get your API key at wisgate.ai/hall/tokens — trial credits included. A working planner agent is the foundation every case in this article builds on.
LLM Multi-Agent Pipeline: The Architecture Pattern Behind All 4 Creative Cases
Before the individual case configurations, establish the shared architecture. All 4 cases use the same three-layer pattern — the cases differ in the specific agents, not the underlying structure.
| Layer | Role | Model | Why This Tier |
|---|---|---|---|
| Planner Agent | Decomposes goal into task list; writes STATE file | claude-opus-4-5 | Wrong plans cascade into wrong artifacts at every downstream step — highest reasoning tier required |
| Executor Agent(s) | Carries out individual scoped tasks | claude-sonnet-4-5 | Well-defined, bounded tasks don't require Opus-level reasoning |
| Output Agent | Generates final artifact: code, content, image, commit message | Sonnet or Nano Banana 2 | Text output → Sonnet; image output → gemini-3.1-flash-image-preview |
State Management: The Non-Negotiable Requirement
Multi-hour pipelines cannot rely on a single context window. A pipeline with no state persistence fails at the first context reset and produces no resumable work. The standard pattern across all 4 cases:
A STATE.yaml or STATE.json file is written to disk after each completed task. It contains: pipeline ID, original goal or backlog, and a task array with fields for id, description, type, status, dependencies, and output file path. Each executor reads STATE at startup, identifies the next task with status: pending and all dependencies marked status: completed, executes, and writes status: completed plus the output file path before exiting. This loop continues until all tasks complete or one is flagged status: blocked.
Why this matters: A pipeline that doesn't write STATE after every task is not resumable. If the process is interrupted at 2 AM, a non-persistent pipeline discards all completed work. A STATE-persisting pipeline resumes from the last completed task.
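That loop can be sketched in a few lines. This is an illustrative Python sketch, not OpenClaw's internal implementation; `run_task` stands in for the actual agent call and is assumed to return an output file path on success or `None` when blocked:

```python
import json

def next_task(state):
    """Return the first pending task whose dependencies are all completed."""
    done = {t["id"] for t in state["tasks"] if t["status"] == "completed"}
    for task in state["tasks"]:
        if task["status"] == "pending" and set(task.get("dependencies", [])) <= done:
            return task
    return None

def run_pipeline(state_path, run_task):
    """One executor pass: read STATE, work through runnable tasks, persist after each."""
    with open(state_path) as f:
        state = json.load(f)
    while (task := next_task(state)) is not None:
        result = run_task(task)           # the agent call; output path, or None if blocked
        task["status"] = "completed" if result else "blocked"
        task["output_file"] = result
        with open(state_path, "w") as f:  # persist after EVERY task, not once at the end
            json.dump(state, f, indent=2)
        if result is None:
            break                         # a human inspects the blocked task later
    return state
```

Because STATE is rewritten after each task, killing the process at any point and re-running `run_pipeline` resumes from the last completed task.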
Why Multi-Model Routing Is Architecturally Correct — Not Just a Cost Decision
- Opus for every step: overpays for scoped execution tasks that Sonnet handles correctly; no output quality gain for bounded tasks
- Sonnet for planning: inferior task decomposition produces a plan where tasks are ambiguous, atomic boundaries are wrong, or dependencies are missing — every executor downstream inherits these errors
- Any text model for image steps: architecturally wrong; text models don't generate images; use Nano Banana 2 via the Gemini-native endpoint
WisGate's unified key covers both the OpenAI-compatible endpoint for Claude models and the Gemini-native endpoint for Nano Banana 2. Same key. Different endpoint URL per model type.
OpenClaw Configuration
Step 1 — Locate and Open the Configuration File
OpenClaw stores its configuration in a JSON file in your home directory. Open your terminal and edit the file at:
Using nano:
nano ~/.clawdbot/clawdbot.json
Step 2 — Add the WisGate Provider to Your Models Section
Copy and paste the following configuration into the models section of your clawdbot.json. This defines WisGate as a custom provider and registers Claude Opus with your preferred model settings.
"models": {
"mode": "merge",
"providers": {
"moonshot": {
"baseUrl": "https://api.wisgate.ai/v1",
"apiKey": "YOUR-WISGATE-API-KEY",
"api": "openai-completions",
"models": [
{
"id": "claude-opus-4-6",
"name": "Claude Opus 4.6",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 256000,
"maxTokens": 8192
}
]
}
}
}
Note: Replace `YOUR-WISGATE-API-KEY` with your key from wisgate.ai/hall/tokens. The `"mode": "merge"` setting adds WisGate's models alongside your existing providers without replacing them. To add additional models, duplicate the model entry block and update the `"id"` and `"name"` fields with the correct model IDs from wisgate.ai/models.
Step 3 — Save, Exit, and Restart OpenClaw
If using nano:
- Press `Ctrl + O` to write the file, then press `Enter` to confirm
- Press `Ctrl + X` to exit the editor

Restart OpenClaw:
- Press `Ctrl + C` to stop the current session
- Relaunch with: `openclaw tui`
Once restarted, the WisGate provider and your configured Claude models will appear in the model selector.
Image generation note: Nano Banana 2 (`gemini-3.1-flash-image-preview`) uses the Gemini-native endpoint (https://wisgate.ai/v1beta/models/) and must be called programmatically — OpenClaw's chat interface uses the OpenAI-compatible endpoint and cannot call image generation directly. Test image generation at wisgate.ai/studio/image before integrating it into a pipeline.
Multi-model cost structure per pipeline run (confirm all per-token and per-image pricing from https://wisgate.ai/models before publishing production cost estimates):
| Agent Role | Model | Per-Image Cost |
|---|---|---|
| Planner / Backlog Manager | Claude Opus | — |
| Executor / Creative output | Claude Sonnet | — |
| Mechanical output | Claude Haiku | — |
| Thumbnail / Visual assets | Nano Banana 2 | $0.058/image |
At $0.058/image via WisGate versus $0.068/image at the Google official rate, the delta is $0.010/image — $100/month at 10,000 images, $1,200/year. Relevant at content factory volume.
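The delta arithmetic, parameterized by volume (rates taken from the figures above):

```python
# Confirmed per-image rates from the paragraph above.
WISGATE_RATE = 0.058    # $/image via WisGate
OFFICIAL_RATE = 0.068   # $/image at the Google official rate

def monthly_saving(images_per_month):
    """Dollar saving per month from the $0.010/image delta."""
    return round((OFFICIAL_RATE - WISGATE_RATE) * images_per_month, 2)
```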
OpenClaw Use Cases — Case 1: Goal-Driven Autonomous Tasks
What it does: A developer inputs a list of high-level goals into OpenClaw. The planner agent decomposes each goal into a prioritized, atomic task schedule. Executor agents run overnight, building mini-apps, generating scripts, or producing structured documents for each task. The developer reviews completed artifacts in the morning — not mid-execution.
Why it matters: Every developer has a backlog of small tools and utilities they intend to build when time allows. That time rarely arrives. This pipeline converts that backlog into a nightly execution queue — goals in, artifacts out, no context switching during working hours.
Agent Configuration
| Agent | Model | System Prompt Focus |
|---|---|---|
| Planner | claude-opus-4-5 | Decompose each goal into atomic tasks; each completable in one agent call; mark overnight_safe: true/false; output as JSON task array |
| Executor | claude-sonnet-4-5 | Complete one assigned task; return complete artifact (code, document, or structured output); if blocked, return BLOCKED: [specific reason] |
STATE.yaml Structure
After the Planner runs, it writes a STATE.yaml to the working directory. The file contains: pipeline_id, goal, status (in_progress / complete / blocked), and a tasks array. Each task entry includes: id, description, type (code / document / research), overnight_safe, depends_on, status, and output_file.
Each executor reads this file at startup, selects the first task with status: pending and all depends_on entries at status: completed, executes, and updates status: completed with the output_file path before exiting. Never start a new executor without reading the current STATE first.
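An illustrative STATE.yaml matching that field list (the goal and tasks are made-up examples):

```yaml
pipeline_id: goals-2025-01-15
goal: "Build a CLI that converts CSV exports to Markdown tables"
status: in_progress
tasks:
  - id: t1
    description: "Parse CSV input and infer column types"
    type: code
    overnight_safe: true
    depends_on: []
    status: completed
    output_file: artifacts/t1_parser.py
  - id: t2
    description: "Render parsed rows as a Markdown table"
    type: code
    overnight_safe: true
    depends_on: [t1]
    status: pending
    output_file: null
```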
Planner System Prompt Pattern
You are an autonomous task planner for a software developer.
Input: a list of development goals.
For each goal, produce an ordered task list. Each task must:
- Be completable in a single agent call
- Specify: type, overnight_safe flag, estimated minutes, depends_on (task IDs or [])
- For code tasks: specify language, expected input/output, acceptance criteria
Return as JSON matching the STATE.yaml schema. No preamble.
How to Run
Set OpenClaw's model to claude-opus-4-5. Paste the Planner system prompt. Input your goal list. Review the task plan output — verify that each task is atomic and all dependencies are logical. Then switch to claude-sonnet-4-5 for executor calls: one OpenClaw conversation per task, passing the task description from STATE as user input.
Cost per pipeline run: 1× Opus (planning) + N× Sonnet (one per task). For a 5-task goal, confirm per-call pricing from https://wisgate.ai/models and calculate total. Compare against the developer time each task would otherwise require.
[Link to: Full Goal-Driven Autonomous Tasks configuration →]
AI Content Creation API — Case 2: YouTube Content Pipeline
What it does: Automates YouTube pre-production across three sequential steps: a Scout agent identifies trending topic candidates with differentiation angles, a Researcher agent builds a sourced brief for the top candidate, and a Brief Writer produces a production-ready content brief with title options, outline, and hook. A Status Tracker updates production stage after each step.
Why it matters: YouTube creators — particularly developer-focused channels — consistently bottleneck at research and ideation. Finding a topic that has search demand, isn't over-covered by larger channels, and can be approached from a unique technical angle typically takes 2–4 hours manually. This pipeline does it in minutes.
Agent Configuration
| Agent | Model | Role |
|---|---|---|
| Scout | claude-sonnet-4-5 | Find 8–10 trending topic candidates; evaluate search signal and competitor coverage gap; return ranked list with differentiation angles |
| Researcher | claude-sonnet-4-5 | Expand the top candidate into a sourced research brief: key technical claims, data points, competitor angle summary |
| Brief Writer | claude-sonnet-4-5 | Convert research brief into a content brief: 3 title options, full section outline, opening hook, CTA approach |
| Status Tracker | claude-haiku-4-5-20251001 | Update STATE.json production stage; return next action |
How to Run in OpenClaw
Run Scout, Researcher, and Brief Writer as sequential OpenClaw conversations with claude-sonnet-4-5, passing the output of each as the user input to the next. Switch to claude-haiku-4-5-20251001 for the Status Tracker — it's a fixed-schema update with no creative reasoning requirement.
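The sequential hand-off is a generic chain: each stage's output becomes the next stage's user input. An illustrative sketch, with the actual network call injected as `call_model` (assumed to POST the payload to WisGate's OpenAI-compatible chat completions endpoint and return the reply text; stage prompts abbreviated):

```python
STAGES = [
    ("scout",        "claude-sonnet-4-5",         "Find 8-10 trending topic candidates..."),
    ("researcher",   "claude-sonnet-4-5",         "Expand the top candidate into a sourced brief..."),
    ("brief_writer", "claude-sonnet-4-5",         "Convert the research brief into a content brief..."),
    ("tracker",      "claude-haiku-4-5-20251001", "Update STATE.json production stage..."),
]

def build_payload(model, system_prompt, user_input):
    """OpenAI-compatible chat completions body: system prompt + prior stage output."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_input},
        ],
    }

def run_chain(call_model, seed_input):
    """Run the stages in order; each stage's output feeds the next stage."""
    text = seed_input
    outputs = {}
    for name, model, prompt in STAGES:
        text = call_model(build_payload(model, prompt, text))
        outputs[name] = text
    return outputs
```

Injecting `call_model` keeps the chain logic testable without network access, and mirrors the manual workflow: three Sonnet conversations followed by one Haiku call.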
Thumbnail Generation Extension
Once a brief is approved, trigger a Nano Banana 2 image generation call programmatically, using the approved video title as the prompt input. At $0.058/image via WisGate, thumbnail concept drafts have a concrete cost that can be calculated and compared. Consistent ~20-second generation time across output sizes from 0.5K to 4K (returned as Base64) makes this usable as a scheduled pipeline step.
curl -s -X POST \
"https://wisgate.ai/v1beta/models/gemini-3.1-flash-image-preview:generateContent" \
-H "x-goog-api-key: $WISDOM_GATE_KEY" \
-H "Content-Type: application/json" \
-d '{
"contents": [{"parts": [{"text": "YouTube thumbnail concept: bold text, dark tech aesthetic, high contrast. Title: [YOUR APPROVED TITLE HERE]"}]}],
"generationConfig": {
"responseModalities": ["TEXT", "IMAGE"],
"imageConfig": {"aspectRatio": "16:9", "imageSize": "2K"}
}
}' \
| jq -r '.candidates[0].content.parts[] | select(.inlineData) | .inlineData.data' \
| head -1 | base64 --decode > thumbnail_concept.png
Test thumbnail generation manually at wisgate.ai/studio/image before integrating into the pipeline.
Cost per content piece: 3× Sonnet (Scout + Research + Brief) + 1× Haiku (Status Tracker) + 1× NB2 image at $0.058. Confirm Sonnet and Haiku pricing from https://wisgate.ai/models.
[Link to: Full YouTube Content Pipeline configuration →]
OpenClaw Use Cases — Case 3: Multi-Agent Content Factory
What it does: Three agents operate in parallel across dedicated Discord channels: a Research agent monitors #research-queue for topic assignments, a Writing agent reads approved briefs from #research-queue and posts drafts to #drafts, and a Thumbnail agent reads approved drafts from #drafts and posts visual concepts to #assets. A Reviewer agent handles lightweight routing decisions.
Why Discord as the coordination layer: Discord channels act as persistent, human-readable message queues. The agent produces output; the human inspects it in the same interface. Approval, revision requests, and escalation happen without modifying pipeline logic — the human just responds in the channel. This creates a production workflow where every agent action is auditable and every human decision is logged.
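The channel-as-queue pattern can be sketched independently of Discord itself. This illustrative Python stand-in models channels as ordered message lists; in production, `post` and the history reads would go through the Discord API:

```python
from collections import defaultdict

channels = defaultdict(list)  # channel name -> ordered message list (stand-in for Discord)

def post(channel, author, body):
    """Append a message; every agent action stays auditable in channel history."""
    channels[channel].append({"author": author, "body": body})

def latest_approved(channel):
    """Most recent agent message that a human has replied 'approve' to."""
    approved = None
    pending = None
    for msg in channels[channel]:
        if msg["author"] == "human" and msg["body"] == "approve":
            approved = pending
        elif msg["author"] != "human":
            pending = msg
    return approved
```

A downstream agent (e.g. the Writer) polls `latest_approved("#research-queue")`; if it returns a brief, the Writer runs and posts its draft to `#drafts`. Human approval is just another message in the channel, so no pipeline logic changes when a revision is requested instead.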
Agent Configuration
| Agent | Discord Channel | Model | Output |
|---|---|---|---|
| Research | #research-queue | claude-sonnet-4-5 | Sourced research brief with differentiation angle |
| Writer | #drafts | claude-sonnet-4-5 | Full article draft in Markdown, 1,500–2,000 words |
| Thumbnail | #assets | gemini-3.1-flash-image-preview | 16:9 image at $0.058 each |
| Reviewer | #approvals | claude-haiku-4-5-20251001 | Routing decision: approve / request revision / escalate |
How to Run in OpenClaw
Configure OpenClaw as the interface for the Research and Writer agents using claude-sonnet-4-5. Each agent run is one OpenClaw conversation with a fixed system prompt and the relevant Discord channel content pasted as user input. The Reviewer uses claude-haiku-4-5-20251001 — routing decisions are structured and unambiguous, well within Haiku's capability.
The Thumbnail agent calls the Gemini-native endpoint programmatically using the approved draft title. Same WisGate API key; different base URL.
Writer Agent System Prompt Pattern
You are a technical content writer for a developer blog.
Input: research brief from the Research agent.
Output: full article draft in clean Markdown.
Requirements:
- 1,500–2,000 words; developer-peer tone
- H1 title + H2 section structure
- Code blocks for all implementation examples
- Internal link placeholders as [LINK: topic]
- 3 title options at top; 150-char meta description
- Opening hook: 2 sentences establishing the specific problem
- No filler; every paragraph adds technical value
Cost per content piece: Research + Writing (2× Sonnet) + Routing (Haiku) + Thumbnail (NB2 at $0.058). Confirm Sonnet and Haiku pricing from https://wisgate.ai/models and calculate the total per piece at your target weekly volume.
[Link to: Full Multi-Agent Content Factory configuration →]
OpenClaw Use Cases — Case 4: Autonomous Game Dev Pipeline
What it does: Manages the full development lifecycle for an educational game project: a Backlog Manager selects the next task using the "Bugs First" policy, an Implementation agent generates complete working code for the selected task, a Documentation agent writes the CHANGELOG entry, and a Commit agent formats the git commit message. The developer reviews and applies — they don't write any of it.
Why it matters: Educational game projects have a high ratio of repetitive implementation tasks to novel architecture decisions. This pipeline automates the repetitive majority — feature additions, bug fixes, documentation updates — so the developer's time is spent on design and architecture decisions that actually require human judgment.
The "Bugs First" Policy — Implement as a Hard Rule
Before selecting any feature task, the Backlog Manager checks the backlog.json array for any entry with type: bug and status: open. If any exists, that becomes the next task — regardless of feature priority scores. No exceptions.
This is not a preference. It must be written into the Backlog Manager's system prompt as a hard constraint, not a guideline. Autonomous pipelines that ship new features onto broken foundations will consistently produce unusable software.
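As a sanity check, the policy reduces to a small selection function. This is an illustrative sketch of the rule the Backlog Manager's prompt must enforce (assuming a higher priority number means more urgent; flip the key if your scale is inverted):

```python
def select_next_task(backlog):
    """Bugs First: any open bug preempts every feature, regardless of priority."""
    open_tasks = [t for t in backlog if t["status"] == "open"]
    open_bugs = [t for t in open_tasks if t["type"] == "bug"]
    pool = open_bugs if open_bugs else open_tasks
    # Highest priority wins within the chosen pool; None if the backlog is clear.
    return max(pool, key=lambda t: t["priority"], default=None)
```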
Agent Configuration
| Agent | Model | Role |
|---|---|---|
| Backlog Manager | claude-opus-4-5 | Apply Bugs First policy; select next task; return as JSON with reasoning |
| Implementation | claude-sonnet-4-5 | Generate complete, production-ready code for the selected task |
| Documentation | claude-haiku-4-5-20251001 | Write CHANGELOG entry and inline comments for the implementation |
| Commit | claude-haiku-4-5-20251001 | Format git commit message from task description and implementation summary |
How to Run in OpenClaw
- Set model to `claude-opus-4-5`; paste the Backlog Manager system prompt; input `backlog.json` content
- Review the selected task — confirm it is the correct "Bugs First" selection
- Switch model to `claude-sonnet-4-5`; pass the selected task to the Implementation agent
- Switch to `claude-haiku-4-5-20251001`; pass the implementation output to the Documentation agent
- Use the Commit agent (Haiku) to format the final git commit message from task description + docs summary
- Apply the implementation and commit, or pipe OpenClaw's output to an automation script
Backlog Manager System Prompt Pattern
You are a backlog manager for a software development pipeline.
Input: backlog.json — array of tasks with fields: id, type (bug/feature),
priority (1–5), status (open/in_progress/complete), description.
Policy (enforce strictly, no exceptions):
1. If any task has type=bug AND status=open → select the highest-priority bug
2. If no open bugs → select the highest-priority feature with status=open
Output: single selected task as JSON. Include: id, type, priority, description.
No explanation. No preamble. No alternatives.
Cost per Pipeline Run
1× Opus (backlog selection) + 1× Sonnet (implementation) + 2× Haiku (documentation + commit message). Confirm pricing for all three tiers from https://wisgate.ai/models. Compare against the equivalent cost of routing all steps through Opus — the saving per run is the arithmetic case for the three-tier routing approach.
[Link to: Full Autonomous Game Dev Pipeline configuration →]
OpenClaw Use Cases: Creative Category — Model Routing Reference
Consolidated routing table for all 4 cases. Use this when adapting configurations to your own project type, content format, or pipeline depth.
| Case | Planner | Executor | Output Agent | Image |
|---|---|---|---|---|
| Goal-Driven Autonomous Tasks | claude-opus-4-5 | claude-sonnet-4-5 | claude-sonnet-4-5 | Optional NB2 |
| YouTube Content Pipeline | — | claude-sonnet-4-5 ×3 | claude-haiku-4-5-20251001 (tracker) | NB2 (thumbnail) |
| Multi-Agent Content Factory | — | claude-sonnet-4-5 ×2 | claude-haiku-4-5-20251001 (routing) | NB2 (thumbnail) |
| Autonomous Game Dev Pipeline | claude-opus-4-5 | claude-sonnet-4-5 | claude-haiku-4-5-20251001 ×2 | — |
The routing principle across all 4 cases:
- Opus — planning and policy enforcement only; anything where wrong output from this step produces compounding wrong output downstream
- Sonnet — creative execution where quality of output matters; tasks that are well-scoped but require nuanced language or code
- Haiku — mechanical output steps with fixed schemas: documentation, commit messages, status updates, routing decisions
- Nano Banana 2 — all image generation at $0.058/image with consistent 20-second generation time; always the Gemini-native endpoint
Confirm all Claude tier per-token pricing from https://wisgate.ai/models. For each pipeline, calculate the cost delta between three-tier routing and Opus-for-all-steps at your expected weekly run volume. The arithmetic is the justification — not the principle.
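That calculation can be scripted once and reused per pipeline. The per-call costs below are placeholders, not real rates; substitute confirmed pricing from wisgate.ai/models:

```python
# PLACEHOLDER per-call costs in dollars. Replace with confirmed per-token
# pricing from wisgate.ai/models before using for real estimates.
COST = {"opus": 0.50, "sonnet": 0.10, "haiku": 0.01}

def run_cost(calls):
    """calls: {'opus': n, 'sonnet': n, 'haiku': n} for one pipeline run."""
    return sum(COST[tier] * n for tier, n in calls.items())

def routing_delta(calls, weekly_runs):
    """Weekly saving of three-tier routing vs. sending every call to Opus."""
    total_calls = sum(calls.values())
    return (run_cost({"opus": total_calls}) - run_cost(calls)) * weekly_runs
```

For example, the game dev pipeline is `{"opus": 1, "sonnet": 1, "haiku": 2}` per run; compare `routing_delta` at your weekly run volume against the Opus-for-all baseline.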
OpenClaw Use Cases: Creative & Building — Where to Start
Four complete multi-agent pipeline configurations for OpenClaw via WisGate. Each applies the same three-layer planner → executor → output architecture with model assignments optimized per role. All four run under one WisGate key, one base URL, and one billing account.
The single principle to carry forward: Write STATE after every completed task. Read STATE at every executor startup. A pipeline that doesn't persist STATE is not a production pipeline — it's a demo that works until the first interruption.
Where to start: Goal-Driven Autonomous Tasks has the lowest setup friction. The input is your existing goal list. The output is artifacts and work product. The STATE pattern you implement for Case 1 transfers directly to Case 4. Start there, ship one overnight run, then extend.
For the complete 36-case OpenClaw library across all 6 categories, return to the [OpenClaw Use Cases pillar page →].
Your planner agent is one API call away. Get your WisGate key at wisgate.ai/hall/tokens — trial credits included, no commitment before your first pipeline run. Before connecting any executor agents, test your planner system prompt in AI Studio — paste your goal list or backlog and verify the task decomposition output is atomic, correctly flagged, and structurally complete. Switching between Opus, Sonnet, and Haiku is one model ID parameter change in OpenClaw's model selector. All four pipelines on this page are configured and ready to run. Pick the case that matches your backlog and start the planner call tonight.
All per-token cost figures require confirmation from wisgate.ai/models before publication. Insert confirmed rates into all cost tables before this article goes live. Image generation pricing of $0.058/image and $0.068/image (official rate) are confirmed WisGate product figures. Claude model pricing is subject to change.