JUHE API Marketplace

How to Build a Daily Reddit Digest Agent with OpenClaw and WisGate API

7 min read
By Emma Collins

You follow eight subreddits. Every morning you open Reddit, tab through r/LocalLLaMA, r/MachineLearning, r/devops, and five others, trying to catch anything worth reading before the day starts. Most of it is noise. The signal — a new paper, a production incident writeup, a library release — is buried somewhere in the middle.

The process takes 20–40 minutes. The coverage is inconsistent depending on when you check. Some mornings you miss things. Some mornings you spend 30 minutes reading nothing useful.

The fix: configure OpenClaw to pull the top posts from each subreddit every morning, score them against your personal preference profile, and deliver a ranked digest to a Slack channel or local file before 8 AM. You open one message instead of eight tabs. This tutorial builds exactly that agent — part of the broader OpenClaw Social Media Use Cases category.


By the end of this tutorial you'll have a live Reddit digest agent running via WisGate — pulling top posts from your chosen subreddits, scoring them by personal preference, and delivering a structured summary on a daily cron schedule. Test the summarization step first at wisgate.ai/studio/image before wiring the trigger. Get your API key at wisgate.ai/hall/tokens.


What the Daily Reddit Digest Agent Does

Inputs:

  • Subreddit list: LocalLLaMA, MachineLearning, programming, devops, ExperiencedDevs, sysadmin, selfhosted, netsec
  • Personal preference profile: topics, keywords, and post types you care about
  • Time window: top posts from the last 24 hours, up to 5 per subreddit

Processing:

  1. Fetch post metadata from Reddit's public JSON API — no authentication required for basic use
  2. Pass posts to claude-haiku-4-5-20251001 via WisGate for scoring and summarization
  3. Score each post against your preference profile (1–3 relevance score)
  4. Return a ranked digest grouped by subreddit, sorted by score descending
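Step 1 above can be sketched in a few lines. The endpoint shape (`/r/<subreddit>/top.json?t=day&limit=5`) is Reddit's public listing API; the field names (`data.children[].data.title`, `permalink`) follow its JSON layout. Treat this as an illustrative sketch, not the agent's internal implementation:

```python
import json
import urllib.request

def top_posts_url(subreddit: str, limit: int = 5, window: str = "day") -> str:
    """Build the public Reddit 'top' listing URL -- no authentication needed."""
    return f"https://www.reddit.com/r/{subreddit}/top.json?t={window}&limit={limit}"

def extract_posts(payload: dict) -> list[dict]:
    """Pull out just the fields the digest prompt needs from a Reddit listing."""
    posts = []
    for child in payload["data"]["children"]:
        d = child["data"]
        posts.append({
            "subreddit": d["subreddit"],
            "title": d["title"],
            "url": "https://www.reddit.com" + d["permalink"],
        })
    return posts

def fetch_top_posts(subreddit: str, limit: int = 5) -> list[dict]:
    # Reddit rejects the default Python user agent, so set one explicitly.
    req = urllib.request.Request(
        top_posts_url(subreddit, limit),
        headers={"User-Agent": "reddit-digest-agent/0.1"},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_posts(json.load(resp))
```

Running `fetch_top_posts` once per subreddit in your list yields the raw material for the scoring step.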

Output: structured plain-text digest saved to a local file or posted to a Slack webhook, delivered by a daily cron job at your configured time.
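For the `slack_webhook` output option, delivery is a single POST. A minimal sketch, assuming an incoming-webhook URL you provision yourself in Slack; the `{"text": ...}` payload is Slack's standard incoming-webhook message format:

```python
import json
import urllib.request

def slack_payload(digest_text: str) -> bytes:
    """Wrap the digest in Slack's incoming-webhook message format."""
    return json.dumps({"text": digest_text}).encode("utf-8")

def post_digest(webhook_url: str, digest_text: str) -> int:
    """POST the digest to a Slack incoming webhook; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=slack_payload(digest_text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # Slack returns 200 on success
```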

Components needed: OpenClaw configured with WisGate, a cron trigger, and the YAML and API call in this tutorial.

OpenClaw API Reddit Automation: WisGate Configuration

Step 1 — Locate and Open the Configuration File

OpenClaw stores its configuration in a JSON file in your home directory at ~/.openclaw/openclaw.json. Open your terminal and edit it with your preferred editor.

Using nano:

shell
nano ~/.openclaw/openclaw.json

Step 2 — Add the WisGate Provider to Your Models Section

Copy and paste the following into the models section of your openclaw.json:

json
"models": {
  "mode": "merge",
  "providers": {
    "moonshot": {
      "baseUrl": "https://api.wisgate.ai/v1",
      "apiKey": "YOUR-WISGATE-API-KEY",
      "api": "openai-completions",
      "models": [
        {
          "id": "claude-haiku-4-5-20251001",
          "name": "Claude Haiku 4.5",
          "reasoning": false,
          "input": ["text"],
          "cost": {
            "input": 0,
            "output": 0,
            "cacheRead": 0,
            "cacheWrite": 0
          },
          "contextWindow": 256000,
          "maxTokens": 8192
        }
      ]
    }
  }
}

Replace YOUR-WISGATE-API-KEY with your key from wisgate.ai/hall/tokens. Haiku is the correct tier for this use case — fixed-schema summarization with a defined output format, high call frequency, low per-call cost. Confirm pricing from https://wisgate.ai/models.
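Because the provider block uses the openai-completions API type, you can smoke-test the key with a plain chat-completions request before restarting OpenClaw. A hedged sketch, assuming the standard OpenAI-compatible `/chat/completions` route on the base URL above (confirm the exact path in WisGate's docs):

```python
import json
import urllib.request

API_KEY = "YOUR-WISGATE-API-KEY"  # from wisgate.ai/hall/tokens

def build_request(post_titles: list[str]) -> dict:
    """Assemble an OpenAI-style chat-completions payload for the digest call."""
    return {
        "model": "claude-haiku-4-5-20251001",
        "max_tokens": 2048,
        "messages": [
            {"role": "system", "content": "You are a Reddit digest assistant for a software developer."},
            {"role": "user", "content": "\n".join(post_titles)},
        ],
    }

def score_posts(post_titles: list[str]) -> str:
    """POST the payload to WisGate and return the model's text response."""
    req = urllib.request.Request(
        "https://api.wisgate.ai/v1/chat/completions",
        data=json.dumps(build_request(post_titles)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

If this returns a scored digest for a couple of sample titles, the key and model ID are wired correctly.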

Step 3 — Save, Exit, and Restart OpenClaw

  1. Press Ctrl + O, then Enter to save
  2. Press Ctrl + X to exit nano
  3. Press Ctrl + C to stop the current session, then run:
shell
openclaw tui

Note: OpenClaw was previously known as ClawdBot and MoltBot — these steps apply to all versions.


The OpenClaw Agent YAML Configuration

This is the complete agent definition. Copy it into your OpenClaw agents directory or paste it into the agent configuration panel.

yaml
# reddit-digest-agent.yaml
name: Daily Reddit Digest
description: Fetches top Reddit posts, scores by preference, returns ranked digest
schedule: "0 7 * * *"   # Daily at 07:00 — adjust to your timezone

model:
  provider: custom
  base_url: "https://api.wisgate.ai/v1"
  model_id: "claude-haiku-4-5-20251001"
  api_key: "${WISDOM_GATE_KEY}"           # Set as environment variable

config:
  subreddits:
    - LocalLLaMA
    - MachineLearning
    - programming
    - devops
    - ExperiencedDevs
    - sysadmin
    - selfhosted
    - netsec
  posts_per_subreddit: 5
  time_window: "day"           # Reddit sort: top posts from last 24h
  output: "slack_webhook"      # Options: slack_webhook | local_file | stdout
  output_path: "./digests/reddit_${DATE}.txt"

system_prompt: |
  You are a Reddit digest assistant for a software developer.
  For each post provided, return exactly:
  - Subreddit: r/[name]
  - Title: [post title]
  - Summary: [1–2 sentences, max 40 words, technical substance only]
  - Score: [1 = skip | 2 = skim | 3 = read now]
  - URL: [post URL]

  Preference profile:
  - Score 3: original research, library releases, production incident writeups,
    architecture decisions, security disclosures, AI/ML papers
  - Score 2: tutorials, opinion pieces with data, community surveys
  - Score 1: memes, reposts, vague questions, duplicate coverage

  Group output by subreddit. Sort within each group by Score descending.
  Return plain structured text. No preamble. No commentary.

max_tokens: 2048

To customize: edit the subreddits list and the preference profile scoring rules in system_prompt. The scoring rules are the primary lever for digest quality — spend your tuning time here before changing anything else.
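The output contract in the system prompt (group by subreddit, sort by Score descending within each group) is easy to verify locally once you parse the model's response into entries. A sketch of that ordering, assuming entries already parsed into dicts:

```python
def format_digest(entries: list[dict]) -> str:
    """Render entries grouped by subreddit, highest score first within a group."""
    lines = []
    # Sort by subreddit name, then by score descending inside each subreddit.
    ordered = sorted(entries, key=lambda e: (e["subreddit"], -e["score"]))
    current = None
    for e in ordered:
        if e["subreddit"] != current:
            current = e["subreddit"]
            lines.append(f"r/{current}")
        lines.append(f"  [{e['score']}] {e['title']} - {e['url']}")
    return "\n".join(lines)
```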

OpenClaw Use Cases: Annual Cost of a Daily Claude Haiku Digest Agent

This is where daily automation compounds. One run is trivial. 365 runs per year is a real cost — and the difference between routing through WisGate and calling the Claude API directly adds up over a year.

Per-run token estimate:

  • Input: 8 subreddits × 5 posts × ~150 tokens/post = ~6,000 input tokens
  • System prompt: ~200 tokens
  • Output: digest summary ~800 tokens
  • Total per run: ~7,000 tokens

Annual volume: 1 run/day × 365 days = 365 runs/year

  • Annual input tokens: ~6,200 × 365 = ~2,263,000 tokens
  • Annual output tokens: ~800 × 365 = ~292,000 tokens
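The arithmetic above can be wrapped in a small calculator, ready to take confirmed rates once you have them. The per-1M-token rates passed in below are placeholders, not confirmed WisGate or Anthropic pricing:

```python
RUNS_PER_YEAR = 365
INPUT_TOKENS_PER_RUN = 6_200   # ~6,000 post tokens + ~200 system prompt tokens
OUTPUT_TOKENS_PER_RUN = 800    # digest summary

def annual_cost(input_rate_per_m: float, output_rate_per_m: float) -> float:
    """Annual spend in dollars, given per-1M-token rates (placeholder rates)."""
    annual_input = INPUT_TOKENS_PER_RUN * RUNS_PER_YEAR    # ~2,263,000 tokens
    annual_output = OUTPUT_TOKENS_PER_RUN * RUNS_PER_YEAR  # ~292,000 tokens
    return (annual_input / 1e6) * input_rate_per_m + (annual_output / 1e6) * output_rate_per_m
```

Plug both providers' confirmed rates into `annual_cost` and subtract to get the annual delta for the table below.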

Cost comparison — confirm all per-token pricing from https://wisgate.ai/models before publishing; insert confirmed figures into the table below:

| Provider | Input (per 1M tokens) | Output (per 1M tokens) | Annual input cost | Annual output cost | Annual total |
|---|---|---|---|---|---|
| WisGate | Confirm | Confirm | Calculate | Calculate | Calculate |
| Direct Claude API | Confirm | Confirm | Calculate | Calculate | Calculate |
| Annual saving | | | | | State delta |

Once confirmed, state the annual saving as a specific dollar figure and frame it concretely: at this volume, the saving covers [N months of a developer tool subscription / N cups of coffee / equivalent developer hours] — make the number land.


OpenClaw Use Cases: From Reddit Tabs to a Scheduled Digest in One Session

The YAML is ready to copy. The API call is ready to run. The cron trigger is already in the YAML — just set your timezone offset and activate.

Your checklist:

  1. Copy the YAML → edit your subreddit list
  2. Update the preference scoring rules in system_prompt
  3. Validate output in AI Studio with a sample post batch
  4. Activate the schedule — first digest runs tomorrow at 07:00

The second digest runs the next day, at no extra effort.

Get your WisGate key at wisgate.ai/hall/tokens (trial credits included, no commitment). Copy the YAML above, edit your subreddit list and scoring rules, then test the API call at wisgate.ai/studio/image with a few real post titles before activating the schedule. Your single next action: generate the API key.


*All per-token pricing requires confirmation from wisgate.ai/models before publishing the cost comparison table. Insert confirmed Haiku rates and calculate the annual delta before publication.
