JUHE API Marketplace

How to Configure an Autonomous Game Dev Pipeline with OpenClaw: From Backlog to Git Commit

7 min read
By Emma Collins

Solo game developers and small teams share the same bottleneck: implementation time. The design is clear, the backlog is prioritized, but the gap between a feature ticket and a committed, tested implementation is filled with context-switching — reading the spec, scaffolding the code, writing tests, fixing regressions, committing. Every one of those steps follows a pattern that an agent can execute.

This tutorial configures a four-agent OpenClaw pipeline that reads a game feature backlog, plans implementation, writes code, runs validation, and commits to Git — with a "Bugs First" policy ensuring regressions never accumulate. One of the more complete OpenClaw use cases in the Creative & Building category, it runs three model tiers under a single WisGate key.


Test the planning step before configuring the full pipeline. Paste a feature spec into wisgate.ai/studio/image with claude-opus-4-6 selected and verify the implementation plan before wiring agents together. Get your WisGate key at wisgate.ai/hall/tokens.


AI Autonomous Code Generation Pipeline: Four-Agent Architecture

The pipeline uses four agents with distinct responsibilities and model tiers:

| Agent | Model | Role | Output |
| --- | --- | --- | --- |
| Architect | claude-opus-4-6 | Reads backlog, decomposes features, writes STATE.yaml | Implementation plan with dependency graph |
| Implementer | claude-sonnet-4-5 | Codes each task from STATE.yaml in sequence | Source files committed per task |
| Validator | claude-haiku-4-5-20251001 | Runs tests, checks output against acceptance criteria | Pass / Fail + bug report |
| Bug Fixer | claude-sonnet-4-5 | Resolves Validator failures before next feature starts | Patched source + re-validation trigger |

The "Bugs First" policy: the Validator runs after every Implementer task. If it returns a failure, the Bug Fixer resolves it before the Implementer picks up the next backlog item. Bug debt never accumulates — the pipeline enforces a clean state at each task boundary.
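The policy can be sketched as a control loop. This is a minimal illustration, not OpenClaw internals: `implement`, `validate`, and `fix_bug` are hypothetical placeholders standing in for the three agent calls.

```python
# Sketch of the "Bugs First" control loop. implement/validate/fix_bug are
# hypothetical stand-ins for the Implementer, Validator, and Bug Fixer agents.

def run_feature(tasks, implement, validate, fix_bug, max_fix_rounds=3):
    """Run tasks in order, refusing to advance past a failing task."""
    for task in tasks:
        implement(task)
        report = validate(task)          # returns None on PASS, a bug report on FAIL
        rounds = 0
        while report is not None:        # Bugs First: fix before the next task starts
            if rounds >= max_fix_rounds:
                raise RuntimeError(f"{task['id']} still failing: {report}")
            fix_bug(task, report)
            report = validate(task)      # re-validation is triggered after every fix
            rounds += 1
    return "feature complete, zero bug debt"
```

The `max_fix_rounds` guard is an assumption worth adding in practice so a stubborn failure halts the pipeline instead of looping.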

STATE.yaml is the shared coordination file. The Architect writes it once per feature; every subsequent agent reads it, claims a task, and writes its result back. No central orchestrator consumes tokens between steps.
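The read-claim-write protocol each agent follows can be sketched as two small functions. State is shown as a plain dict (the parsed YAML); field names follow the schema used in this tutorial, and the race-check comment reflects the re-read step in the Implementer prompt.

```python
# Minimal sketch of the read-claim-write protocol against the parsed STATE.yaml.

def claim_next_task(state, agent_id):
    """Claim the first pending task whose dependencies are all done."""
    done = {t["id"] for t in state["tasks"]
            if t["status"] in ("completed", "validated")}
    for task in state["tasks"]:
        if task["status"] == "pending" and all(d in done for d in task["depends_on"]):
            task["status"] = "claimed"
            task["claimed_by"] = agent_id   # re-read the file afterwards to
            return task                     # confirm no other agent won the race
    return None

def complete_task(task, output_file):
    """Write the result back so downstream tasks become claimable."""
    task["status"] = "completed"
    task["output_file"] = output_file
```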


LLM Game Dev Agent: WisGate Configuration

Step 1 — Open the configuration file

OpenClaw stores its configuration in a JSON file in your home directory. Open your terminal and edit:

bash
nano ~/.openclaw/openclaw.json

Step 2 — Add the WisGate provider to your models section

Copy and paste the following into your models section. This registers all three model tiers the pipeline uses:

json
"models": {
  "mode": "merge",
  "providers": {
    "moonshot": {
      "baseUrl": "https://api.wisgate.ai/v1",
      "apiKey": "WISGATE-API-KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "claude-opus-4-6",
          "name": "Claude Opus 4.6",
          "reasoning": false,
          "input": ["text"],
          "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
          "contextWindow": 256000,
          "maxTokens": 8192
        },
        {
          "id": "claude-sonnet-4-5",
          "name": "Claude Sonnet 4.5",
          "reasoning": false,
          "input": ["text"],
          "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
          "contextWindow": 256000,
          "maxTokens": 8192
        },
        {
          "id": "claude-haiku-4-5-20251001",
          "name": "Claude Haiku 4.5",
          "reasoning": false,
          "input": ["text"],
          "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
          "contextWindow": 256000,
          "maxTokens": 8192
        }
      ]
    }
  }
}

Replace WISGATE-API-KEY with your key from wisgate.ai/hall/tokens. Confirm all model pricing at wisgate.ai/models.
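As a sanity check that the provider entry is wired correctly, you can build (without sending) an OpenAI-style chat-completions request against the base URL from the config above. The `/chat/completions` path is an assumption inferred from the `"openai-completions"` api setting, not documented WisGate behavior.

```python
# Build an OpenAI-compatible request against the WisGate provider config.
# The /chat/completions path is assumed from the "openai-completions" setting.
import json

def build_request(base_url, api_key, model, prompt):
    return {
        "url": f"{base_url}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 8192,   # matches maxTokens in the config above
        }),
    }

req = build_request("https://api.wisgate.ai/v1", "WISGATE-API-KEY",
                    "claude-opus-4-6", "Plan the inventory feature.")
print(req["url"])  # https://api.wisgate.ai/v1/chat/completions
```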

Step 3 — Save and restart

  • Ctrl + O, then Enter to save
  • Ctrl + X to exit
  • Ctrl + C to stop the current session, then run openclaw tui

Set up a separate OpenClaw conversation context for each agent and assign the model from the selector. The Architect uses Opus; Implementer and Bug Fixer use Sonnet; Validator uses Haiku.

Note: OpenClaw was previously known as ClawdBot and MoltBot. These steps apply to all versions.


OpenClaw API Game Development Automation: STATE.yaml and Agent Prompts

STATE.yaml schema — written by Architect, shared by all agents:

yaml
feature_id: "feat-019"
title: "Player inventory system with weight limit"
status: in_progress

tasks:
  - id: "task-001"
    description: "Implement Item class with weight and stack properties"
    language: python
    acceptance: "Item instances correctly report weight; stack limit enforced"
    status: pending          # pending | claimed | completed | validated | failed
    claimed_by: null
    depends_on: []
    output_file: null

  - id: "task-002"
    description: "Implement Inventory class with add/remove and weight-limit enforcement"
    language: python
    acceptance: "Cannot exceed MAX_WEIGHT; returns error on overflow"
    status: pending
    claimed_by: null
    depends_on: ["task-001"]
    output_file: null
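Because the Architect sets `depends_on` to enable parallel execution, it is worth checking what the graph actually allows. The sketch below groups tasks from the parsed STATE.yaml into waves that can run concurrently; tasks are plain dicts matching the schema above.

```python
# Sketch: group STATE.yaml tasks into waves that can run in parallel.
# Each wave contains only tasks whose depends_on are satisfied by earlier waves.

def parallel_waves(tasks):
    waves, done = [], set()
    remaining = list(tasks)
    while remaining:
        wave = [t for t in remaining if all(d in done for d in t["depends_on"])]
        if not wave:
            raise ValueError("cycle or missing dependency in STATE.yaml")
        waves.append([t["id"] for t in wave])
        done.update(t["id"] for t in wave)
        remaining = [t for t in remaining if t["id"] not in done]
    return waves
```

For the two-task example above this yields `[["task-001"], ["task-002"]]`: the Inventory class must wait for the Item class.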

Architect system prompt (Opus — runs once per feature):

You are a game feature architect.
INPUT: a backlog item in plain English.
OUTPUT: a valid STATE.yaml file following the schema provided.
Rules:
- Each task must be atomic: one file, one clearly defined output
- Set depends_on accurately — only list tasks whose output this task requires as input
- Tasks with no mutual dependency must have empty depends_on arrays (enable parallel execution)
- Specify language, acceptance criteria, and expected output_file path per task
Return valid YAML only. No preamble.

Implementer system prompt (Sonnet — one conversation per task):

You are a game developer implementing tasks from STATE.yaml.
1. Read STATE.yaml — claim the first task where status=pending and all depends_on tasks are completed
2. Write your agent ID to claimed_by; re-read to confirm no conflict
3. Implement the task: write clean, commented code satisfying the acceptance criteria
4. Write the output file to the path in output_file
5. Update STATE.yaml: status=completed, output_file=[path]
Return the updated STATE.yaml only.

Validator system prompt (Haiku — runs after every Implementer task):

You are a code validator.
Read the completed task from STATE.yaml. Review the output file at output_file.
Run the following checks:
1. Does the code satisfy the acceptance criteria?
2. Are there syntax errors or obvious logic bugs?
3. Does it break any existing interface contracts?
Return: PASS or FAIL, with a specific bug description if FAIL.
Update STATE.yaml: status=validated (PASS) or status=failed + bug_report field (FAIL).
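The mechanical half of the Validator's job ("runs tests") can be scripted outside the model. A minimal sketch, assuming your project exposes a test command; the command shown is a placeholder for your real test runner (pytest, a game engine's headless test mode, etc.).

```python
# Run a test command against a task's output and map the exit code to PASS/FAIL.
# The command below is a placeholder -- substitute your project's test runner.
import subprocess
import sys

def run_check(command):
    """Return (verdict, bug_report) from a shell-free command list."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode == 0:
        return "PASS", None
    return "FAIL", result.stderr.strip() or result.stdout.strip()

verdict, report = run_check([sys.executable, "-c", "assert 1 + 1 == 2"])
print(verdict)  # PASS
```

On FAIL, the captured stderr becomes the `bug_report` the Bug Fixer reads from STATE.yaml.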

Bug Fixer system prompt (Sonnet — triggered only on Validator FAIL):

You are a bug fixer.
Read the failed task from STATE.yaml. Review the output file and the bug_report.
Fix the specific reported issue. Do not refactor beyond the reported bug.
Overwrite the output file with the corrected implementation.
Update STATE.yaml: status=completed, bug_report=null.
The Validator will re-run after your fix.

OpenClaw Use Cases: Model Routing Rationale

| Agent | Model | Why this tier |
| --- | --- | --- |
| Architect | Opus | Feature decomposition quality cascades to every downstream task — a wrong dependency graph cannot be recovered without restarting the feature |
| Implementer | Sonnet | Each task is a scoped, well-defined implementation with explicit acceptance criteria — structured execution, not open-ended reasoning |
| Validator | Haiku | Binary pass/fail evaluation on a small output file — high-frequency, low-complexity judgment |
| Bug Fixer | Sonnet | Targeted fix on a specific reported bug — scoped like implementation, not architectural |

Running Opus only on the Architect step — and Haiku on every Validator call — keeps per-feature cost proportional to the reasoning actually required. Confirm all model prices from wisgate.ai/models before projecting production costs.

OpenClaw Use Cases: From Backlog Item to Committed Code

Copy the STATE.yaml schema. Paste the four system prompts into separate OpenClaw conversation contexts with the correct model assigned to each. Run the Architect once against your first backlog item and review the generated STATE.yaml — verify that the dependency graph is correct and every task has a clear acceptance criterion before triggering the Implementer.

Once the Architect output looks right, the rest of the pipeline follows the same STATE.yaml read-claim-execute-write pattern covered in the Autonomous Project Management tutorial. The game dev pipeline adds the Validator/Bug Fixer loop and the "Bugs First" enforcement — the STATE.yaml coordination is identical.


The system prompts are ready to copy and the API call runs as-is. Generate your WisGate key at wisgate.ai/hall/tokens — one key covers all three model tiers. Before running the full pipeline, validate the Architect output against one backlog item at wisgate.ai/studio/image. Confirm the dependency graph is correct, then hand the STATE.yaml to the Implementer and let the pipeline run.
