🎯 Try it now: Transform customer photos into virtual hair and beauty styles with Nano Banana 2's image-to-image API on WisGate — no ML expertise required. Start your free trial at wisgate.ai →
Building a virtual hair try-on feature sounds like a months-long project — custom model training, GPU infrastructure, a dedicated ML team. With the right API, the core functionality fits in a single Python file. This tutorial walks through a complete hair and beauty restyling workflow using the Nano Banana 2 image-to-image endpoint, accessed through the AI image generation API on WisGate, in 12 lines of code.
The Virtual Hair & Beauty Try-On Challenge
Most teams hit three walls when building a virtual try-on feature.
Cost at scale. Existing SaaS platforms charge per seat or per session. A campaign with 10,000 customer interactions can exceed four figures before you've shipped anything. That math rarely works for early-stage products or lean teams.
Technical complexity. DIY approaches using open-source diffusion models require GPU provisioning, model weight management, and careful prompt engineering just to get consistent output. That's a lot of infrastructure to maintain for one feature.
Inconsistent API performance. Calling image generation APIs directly through an official provider often means cold-start delays, unpredictable response times, and timeout errors — all of which create poor user experiences in a live product.
There's a middle path: a stable, high-quality image-to-image API at a predictable per-image cost, with response times you can actually build around.
Introducing Nano Banana 2's Image-to-Image Endpoint
Nano Banana 2 is the image generation model at the core of WisGate's image API. Its image-to-image mode accepts a base photo and a text prompt, then produces a modified version applying your described style changes while preserving the subject's facial structure and overall composition.
For hair and beauty applications, this means you describe a new hair color, cut, or complete restyle in plain language — and receive a photorealistic output without fine-tuning anything.
The endpoint is fully compatible with the standard Gemini API request format. If your team already uses the official Gemini SDK or google-generativeai Python library, switching to WisGate requires changing exactly two values: the base URL and the API key. No refactoring, no new SDK to learn.
Average processing time sits at roughly 8 seconds per image for standard outputs, with a consistent ~20-second turnaround for higher-resolution base64 outputs (0.5K up to 4K) — a response window you can build a real user-facing feature around.
Step-by-Step: Building Your Try-On in 12 Lines of Python with WisGate API
Quick Start
Simply replace the base URL and API key in the official SDK or your HTTP requests:
- Base URL: `https://api.wisgate.ai` (replaces `generativelanguage.googleapis.com`)
- API Key: replace `$GEMINI_API_KEY` with your `$WISDOM_GATE_KEY`
That's the entire migration. Everything else — request format, response structure, model parameters — stays identical.
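As a concrete illustration, the swap is mechanical enough to express as a one-line URL rewrite — a minimal sketch in plain-requests style (the constants mirror the two values above):

```python
# The only two values that change when moving from the official
# Gemini endpoint to WisGate; everything else stays identical.
OFFICIAL_BASE = "https://generativelanguage.googleapis.com"
WISGATE_BASE = "https://api.wisgate.ai"

def to_wisgate(url: str) -> str:
    """Rewrite an official Gemini request URL to its WisGate equivalent."""
    return url.replace(OFFICIAL_BASE, WISGATE_BASE)
```

Pair this with swapping the key in the `x-goog-api-key` header and the migration is done.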
Input: Customer Photo and Style Description
The image-to-image endpoint takes two inputs: a base64-encoded customer photo (JPEG or PNG) and a text prompt describing the desired style. Keep prompts specific — describe the cut, the color, and the finish.
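Packaging the photo is a few lines of standard-library Python. A small helper (an illustrative sketch; `encode_photo` is not part of any SDK) that guesses the MIME type from the file extension and returns a ready-to-use `inline_data` part:

```python
import base64, mimetypes

def encode_photo(path: str) -> dict:
    """Read a customer photo and package it as the inline_data
    part the image-to-image endpoint expects."""
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("ascii")
    # Fall back to JPEG if the extension is unrecognized
    return {"inline_data": {"mime_type": mime or "image/jpeg", "data": data}}
```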
Example style prompts that produce reliable results:
"Restyle this hair: beachy waves, platinum blonde, shoulder length, sun-kissed highlights"
"Short pixie cut, deep auburn, natural texture, side-swept fringe"
"Full restyle: sleek straight bob, jet black, blunt cut at the chin, high shine finish"
"Soft curtain bangs, warm caramel balayage, long layers, centre part"
A few observations from testing: including the finish (matte, glossy, natural texture) meaningfully affects output. Specific color names work better than general descriptors — "copper penny balayage" produces more consistent results than "reddish highlights." For full restyling, leading with "Full restyle:" signals to the model that major structural changes are expected, not just color adjustments.
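If your UI collects cut, color, and finish as separate fields, you can assemble prompts in that pattern programmatically — a hedged sketch (`build_style_prompt` is a hypothetical helper, not an API requirement):

```python
def build_style_prompt(cut: str, color: str, finish: str,
                       full_restyle: bool = False) -> str:
    """Assemble a style prompt from the three components that most
    affect output quality: cut, color, and finish."""
    # "Full restyle:" signals structural changes, not just color
    prefix = "Full restyle:" if full_restyle else "Restyle this hair:"
    return f"{prefix} {cut}, {color}, {finish}"
```

This keeps prompt conventions in one place, so a finding like "copper penny balayage beats reddish highlights" only needs updating once.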
Image-to-Image Generation
Upload an input image along with a text prompt to generate a modified version.
cURL example:
```bash
curl -s -X POST \
  "https://api.wisgate.ai/v1beta/models/gemini-3-pro-image-preview:generateContent" \
  -H "x-goog-api-key: $WISDOM_GATE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [{
      "role": "user",
      "parts": [
        { "text": "Restyle this hair: beachy waves, platinum blonde, shoulder length, sun-kissed highlights" },
        {
          "inline_data": {
            "mime_type": "image/jpeg",
            "data": "BASE64_DATA_HERE"
          }
        }
      ]
    }],
    "generationConfig": {
      "responseModalities": ["TEXT", "IMAGE"]
    }
  }'
```
Python — complete 12-line working example:
```python
import requests, base64

# Encode the customer photo for the inline_data part
img_b64 = base64.b64encode(open("customer.jpg", "rb").read()).decode()
payload = {
    "contents": [{"role": "user", "parts": [
        {"text": "Restyle this hair: beachy waves, platinum blonde, shoulder length"},
        {"inline_data": {"mime_type": "image/jpeg", "data": img_b64}}
    ]}],
    "generationConfig": {"responseModalities": ["TEXT", "IMAGE"]}
}
resp = requests.post(
    "https://api.wisgate.ai/v1beta/models/gemini-3-pro-image-preview:generateContent",
    headers={"x-goog-api-key": "YOUR_WISDOM_GATE_KEY"},
    json=payload, timeout=60)
# The response returns image parts under camelCase "inlineData"
parts = resp.json()["candidates"][0]["content"]["parts"]
img_data = next(p["inlineData"]["data"] for p in parts if "inlineData" in p)
open("restyled.jpg", "wb").write(base64.b64decode(img_data))
```
That's the full pipeline: read the photo → build the payload with your style prompt → call the WisGate endpoint → extract the base64 image from the response → write to disk. No SDK installation, no model weights, no GPU required.
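In production you will want slightly more defensive extraction than a bare `next()` — for instance, a clear error when the model returns text only. A minimal sketch (the function name and error message are illustrative, not part of the API):

```python
import base64

def extract_image(response_json: dict) -> bytes:
    """Pull the first base64 image part out of a generateContent
    response; raise a clear error if no image part is present."""
    parts = response_json["candidates"][0]["content"]["parts"]
    for part in parts:
        if "inlineData" in part:
            return base64.b64decode(part["inlineData"]["data"])
    raise ValueError("no image part in response")
```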
Output: Restyled Image and Performance
The API returns a base64-encoded image inside the response JSON. Extract it, decode it, and write it to disk — or pipe it directly into your app's image storage or CDN. Output quality is consistent across requests; repeated calls with the same prompt produce reliably similar results.
| Resolution | Average Response Time |
|---|---|
| Standard (1K) | ~8 seconds |
| High-res (0.5K–4K base64) | ~20 seconds (consistent) |
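Those response windows suggest setting a client-side timeout with headroom (say 15 s for standard, 30 s for high-res) and retrying transient failures. A generic retry helper — an illustrative sketch, not part of the WisGate API:

```python
import time

def with_retries(call, attempts: int = 3, base_delay: float = 2.0):
    """Retry a flaky request up to `attempts` times with exponential
    backoff; re-raises the last error if every attempt fails."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

Usage would wrap the HTTP call, e.g. `with_retries(lambda: requests.post(url, headers=headers, json=payload, timeout=30))`.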
Pricing and Cost Efficiency of WisGate's API
At scale, per-image pricing is what separates workable unit economics from a budget problem.
| Provider | Cost per Image | 1,000 Images |
|---|---|---|
| WisGate | $0.058 | $58.00 |
| Official Rate | $0.068 | $68.00 |
| Typical SaaS Virtual Try-On | $0.15–$0.40+ | $150–$400+ |
The $0.01 difference per image becomes meaningful at volume. A salon chain running 50,000 try-ons per month saves $500 monthly compared to the official rate. Compared to dedicated SaaS virtual try-on platforms, WisGate's per-image pricing is substantially lower while giving your team direct API access and no feature restrictions.
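For budgeting, the arithmetic is simple enough to script; a back-of-envelope helper using the rates from the table above:

```python
def image_cost(n: int, per_image: float = 0.058) -> float:
    """Total cost for n images at WisGate's per-image rate."""
    return round(n * per_image, 2)

def monthly_savings(n: int, wisgate: float = 0.058,
                    official: float = 0.068) -> float:
    """Savings per month vs. the official rate at the same volume."""
    return round(n * (official - wisgate), 2)
```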
One point worth addressing directly: the lower price does not mean lower output quality. WisGate routes requests to the same underlying Nano Banana 2 model with the same generation parameters. The output is identical to what you'd get from the official endpoint — the difference is infrastructure efficiency, not model capability.
Non-technical team members can also test prompts directly in WisGate AI Studio — no code needed. Product managers, salon owners, and marketing staff can iterate on style descriptions and evaluate output quality before any development work begins.
Getting Started with WisGate for Your AI Hair & Beauty Try-On
Getting from zero to a working try-on takes about five minutes:
1. Create an account at wisgate.ai. The free trial gives you enough credits to run a working prototype without entering payment details.
2. Get your API key from the dashboard. It slots directly into the x-goog-api-key header — the same header format used by the official Gemini API. No new authentication flow to learn.
Full Nano Banana 2 model documentation is available at wisgate.ai/models/gemini-3.1-flash-image-preview.
Wrapping Up
Virtual hair and beauty try-on doesn't have to be technically complex or expensive to build. The Nano Banana 2 image-to-image endpoint handles the model-level work — you write the integration, define the style prompts, and ship a feature your customers will use. At $0.058 per image and consistent 8-second response times, the economics work whether you're validating a prototype or running at salon-chain scale.
The 12 lines of Python in this tutorial are the complete working core of that feature. Everything else — the UI, storage layer, user flow — you build on top of what you already know.
🚀 Sign up for a free trial at wisgate.ai to start building your AI-powered hair and beauty try-on experience today.