Vibe Coding Model Hub

Call Any AI Model from CLI: curl + WisGate API in 3 Steps

10 min read
By Liam Walker

If you already know how to send a curl request, you are very close to your first curl + WisGate API call. The workflow is simple: sign up, get an API key, point your script at a new base URL, and run the same request from your terminal. That means less setup friction and less rewriting.

Start here if you want to keep your existing scripts and make one base URL change to begin calling AI models from the command line with WisGate. You can test text generation, image generation, and streaming output without rebuilding your workflow.

The path is only three steps:

  1. Sign up on wisgate.ai and get your API key.
  2. Replace the base URL with https://api.wisgate.ai.
  3. Fire your first curl request.

What You Need Before You Start

Before you run anything, make sure you have four basics in place: a WisGate account, an API key, curl installed, and a rough idea of how API requests are structured. That is enough to get moving. You do not need to learn a new SDK first, and you do not need to rewrite your whole command-line workflow.

This tutorial is designed for a quick first request, not a long platform tour. If you already have an OpenAI-compatible script, the main change is the base URL. If you are starting from scratch, you can still follow along by copying the examples below and changing only the model, prompt, or output format you want.

Keep one thing in mind: WisGate is a routing platform that gives you cost-efficient access to top-tier image, video, and coding models, so the terminal workflow stays focused on requests, responses, and output handling. That is the right mental model for this guide. You are not setting up a new app stack. You are making an API call from the command line.

Step 1 — Sign Up on WisGate and Get Your API Key

Go to wisgate.ai and create your account. Once you are in, find the API key area in the dashboard and copy a key for authentication. You will use that key in your curl requests as the bearer token.

This step matters because the key is what tells WisGate who you are and which requests should be billed to your account. Treat it like a password. Do not paste it into public code, and do not commit it to a repository. For local testing, store it in an environment variable so you can reuse it safely across multiple requests.

A simple shell setup looks like this:

export WISGATE_API_KEY="your_api_key_here"

From there, your requests can read the key from the environment instead of hardcoding it. That is a small habit, but it makes CLI testing cleaner and easier to repeat. It also keeps your scripts closer to what you would run in lightweight production testing.
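As an illustrative sanity check (not part of WisGate's tooling), you can confirm the variable is actually populated before sending any requests:

```shell
# Illustrative check: fail fast if WISGATE_API_KEY is missing from the session.
if [ -z "${WISGATE_API_KEY}" ]; then
  echo "WISGATE_API_KEY is not set" >&2
else
  echo "API key loaded (${#WISGATE_API_KEY} characters)"
fi
```

Printing only the length, never the key itself, keeps the secret out of your terminal history and logs.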

Where to Find Your API Key

You will typically copy the key from your account dashboard after signing up. Put it in the Authorization header of each curl request, usually in the format shown below. The key should stay private, because anyone with access to it can send requests on your behalf.

A typical header pattern looks like this:

-H "Authorization: Bearer $WISGATE_API_KEY"

That one line is the authentication piece you will reuse throughout the rest of this tutorial. If you only remember one thing from this section, remember to keep the API key private and load it into your terminal session before testing.

Step 2 — Replace Your Base URL with https://api.wisgate.ai

Once your account is ready, the next move is simple: replace your base URL with https://api.wisgate.ai and keep the rest of your request structure familiar. That is the part that makes this tutorial practical for developers who already have scripts. You are not starting over. You are swapping the endpoint.

WisGate uses an OpenAI-compatible endpoint, which means many existing scripts can keep the same request payload shape, headers, and model selection style. In practice, this often becomes a one-line change in your code or shell script. Point the client at a new base URL, rerun the request, and check the response. That is a much smaller adjustment than rebuilding your workflow around a new API format.

Why OpenAI-Compatible Endpoints Matter

OpenAI compatibility is useful because it lowers the cost of adoption. If your script already sends structured messages, model names, and standard headers, the request pattern probably feels familiar right away. That means your automation can keep working with minimal changes, which is especially handy for terminal-based testing, internal tooling, and small production checks.

For example, if you already have a shell script that calls an OpenAI-style endpoint, you can often keep the request body intact and just update the host to https://api.wisgate.ai. The result is less debugging, fewer moving parts, and faster experimentation from the command line. For developers, that is the kind of compatibility that saves time on day one.
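As a sketch of that one-line change, suppose your script keeps the host in a variable (BASE_URL is a hypothetical name here; use whatever your script already defines):

```shell
# Hypothetical script variable: only the host changes, the path stays the same.
BASE_URL="https://api.wisgate.ai"   # previously something like https://api.openai.com
ENDPOINT="${BASE_URL}/v1/chat/completions"
echo "Requests will now go to: ${ENDPOINT}"
```

Everything downstream of the variable, including headers, payload, and model names, stays as it was.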

Step 3 — Fire Your First curl Request

Now it is time to send a real request. Start with a simple text generation call, then try another model, then move to image generation and streaming. That order helps you confirm that authentication, endpoint routing, and payload formatting all work before you branch into more advanced CLI workflows.

The examples below assume you have exported your API key and replaced the base URL with https://api.wisgate.ai. If you are testing from scratch, run the commands in order and change only the prompt text if you want to try your own inputs. The goal is a first successful API call, not a perfect final script.

Use the same request structure repeatedly. That is the point of an OpenAI-compatible endpoint. Once one model works, the rest of your command-line testing becomes much easier.

Text Generation Example: GPT-5

Here is a simple GPT-5 text generation request you can run from the terminal:

curl https://api.wisgate.ai/v1/chat/completions \
  -H "Authorization: Bearer $WISGATE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role": "user", "content": "Write a short, friendly explanation of what an API key does."}
    ],
    "temperature": 0.7
  }'

This is a good first test because it confirms the full text-generation path: authentication, endpoint routing, request payload, and model selection. If the command returns a response, your CLI setup is working. From there, you can swap in a different prompt or automate the call inside a shell script.
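Once the call succeeds, you will usually want just the generated text. Assuming the response follows the standard OpenAI-style chat-completion shape (a choices array containing a message object), a short Python one-liner can extract it; the sample JSON below stands in for a real reply:

```shell
# Sample response in the OpenAI-compatible shape (a real reply has more fields).
RESPONSE='{"choices":[{"message":{"role":"assistant","content":"An API key identifies your account to the service."}}]}'

# Pull out just the assistant text.
echo "$RESPONSE" | python3 -c 'import json,sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])'
```

In a script, you would pipe the curl output straight into the same one-liner instead of using a sample variable.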

Text Generation Example: Claude Opus 4.6

If you want to try another model, keep the request format the same and change only the model name and prompt. Here is a Claude Opus 4.6 example:

curl https://api.wisgate.ai/v1/chat/completions \
  -H "Authorization: Bearer $WISGATE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-opus-4.6",
    "messages": [
      {"role": "user", "content": "Summarize the benefits of keeping existing scripts with an OpenAI-compatible endpoint."}
    ],
    "temperature": 0.5
  }'

This is useful when you want to compare model behavior without changing the rest of your setup. The command line stays consistent, which makes side-by-side testing easier. That is exactly why OpenAI-compatible APIs are so useful for developers who already have working scripts.
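One sketch of side-by-side testing is a small loop that reuses the same prompt and swaps only the model name (the model IDs below are the ones used in this guide; adjust them to what your account offers):

```shell
# Build an identical request body for each model; pipe each into curl when ready.
PROMPT="Summarize the benefits of an OpenAI-compatible endpoint."
for MODEL in gpt-5 claude-opus-4.6; do
  python3 -c 'import json,sys; print(json.dumps({"model": sys.argv[1], "messages": [{"role": "user", "content": sys.argv[2]}]}))' "$MODEL" "$PROMPT"
done
```

Generating the body with a JSON serializer instead of hand-editing strings also avoids quoting mistakes when prompts contain special characters.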

Image Generation Example: Nano Banana 2

You can also use the same general CLI pattern for image generation. Here is a Nano Banana 2 example that sends a prompt and receives an image response:

curl https://api.wisgate.ai/v1/images/generations \
  -H "Authorization: Bearer $WISGATE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "nano-banana-2",
    "prompt": "A clean terminal screenshot showing a developer testing an AI API call, dark theme, blue accents, realistic command line layout"
  }'

For CLI users, image generation is often about quick prototyping. You can test prompt quality, check that the model responds correctly, and feed the result into a lightweight workflow without leaving the terminal. If your automation already handles JSON responses, this can slot in with minimal extra code.

Streaming Response Example

Streaming is helpful when you want output to arrive token by token instead of waiting for the full completion. That matters in terminal workflows because you can watch the response appear in real time. Here is a streaming request pattern:

curl https://api.wisgate.ai/v1/chat/completions \
  -H "Authorization: Bearer $WISGATE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role": "user", "content": "Explain streaming responses in one paragraph."}
    ],
    "stream": true
  }'

When you set "stream": true in the payload, you can wire the output into scripts that display partial responses, measure latency, or build a more responsive terminal experience.
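Streamed output typically arrives as server-sent events: lines prefixed with data:, each carrying a JSON chunk, ending with data: [DONE]. Assuming that standard format, a small filter can reassemble the text; the sample lines below stand in for live output:

```shell
# Sample streamed lines in the common SSE chunk format.
STREAM='data: {"choices":[{"delta":{"content":"Hello"}}]}
data: {"choices":[{"delta":{"content":" world"}}]}
data: [DONE]'

# Reassemble the delta fragments into one string.
echo "$STREAM" | python3 -c '
import json, sys
text = ""
for line in sys.stdin:
    line = line.strip()
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    text += chunk["choices"][0]["delta"].get("content", "")
print(text)
'
```

To use this against a live stream, pipe curl directly into the filter instead of the sample variable.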

How WisGate Pricing Helps CLI Users Save

Pricing matters when you are testing from the command line, because CLI usage can be bursty. You might run ten experiments in five minutes, then stop for the day. A pay-as-you-go setup fits that pattern better than a subscription you only partially use. You pay for the requests you actually send, which is easier to justify for experimentation, internal tools, and lightweight production checks.

According to the WisGate Models page, model pricing is typically 20%–50% lower than the providers' official rates. That is a useful comparison point when you are deciding whether to keep a job in your terminal or move it to a larger production workflow. If you want to compare options before choosing a model, check the Models page first; it is the right place to review current pricing and available model choices.

When Pay-As-You-Go Makes More Sense Than a Subscription

Pay-as-you-go makes sense when your usage changes from week to week or when you are still validating prompts, outputs, and integrations. A subscription can be fine for steady, high-volume traffic, but it can also leave you paying for capacity you do not actually use. For terminal workflows, that waste adds up fast if you are only running short tests.

With WisGate, the economics are easier to map to real usage. Try a request, inspect the output, and only keep going if it fits your needs. That keeps experimentation practical and helps avoid subscription costs that do not match your actual CLI usage pattern.

Troubleshooting Common curl Setup Issues

If your first request does not work, check the usual suspects first. The most common issue is API key placement. Make sure the Authorization header uses the bearer format and that your environment variable is populated. Next, check the endpoint. A small typo in https://api.wisgate.ai can produce a confusing failure, especially if you are copying commands between terminals or scripts.

Also inspect your JSON payload. A missing quote, a bad comma, or the wrong model name can break the request before it reaches the API. If you are debugging quickly, strip the request down to the minimum fields and add pieces back one at a time. That approach is usually faster than guessing. For streaming, make sure you are ready to handle incremental output, since some terminals and wrappers display it differently than a normal completion.
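A quick way to catch payload mistakes before they ever reach the API is to validate the JSON locally; python3 -m json.tool (part of the Python standard library) will flag a missing quote or stray comma immediately:

```shell
# Validate the request body locally before attaching it to curl with -d.
BODY='{"model": "gpt-5", "messages": [{"role": "user", "content": "ping"}]}'
if echo "$BODY" | python3 -m json.tool > /dev/null; then
  echo "JSON payload is valid"
else
  echo "JSON payload is broken; fix it before sending" >&2
fi
```

Running this check first means an API error message can only mean an API-side problem, not a local typo.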

Recap: The Fastest Path to Your First AI API Call

The path is straightforward: sign up at wisgate.ai, get your API key, replace the base URL with https://api.wisgate.ai, and run your first curl request. If you already have OpenAI-compatible scripts, you usually only need a one-line change to start testing with WisGate. After that, check https://wisgate.ai/models for pricing and model options, then keep building from the terminal.

Tags: API Integration, CLI Tutorial, Developer Tools