Integrating with Applications: Real-World Scenarios
The Scenario API gives you programmatic access to AI-powered image generation, video creation, custom model training, and asset management. This guide walks through five real-world integration patterns with architecture guidance, code examples, and best practices to help you ship production-ready integrations.
Table of Contents
- Before You Start
- Available Models
- 1. Game Asset Pipeline — Batch Generation with Style Consistency
- 2. E-Commerce — Dynamic Product Visuals
- 3. Content Platform — Automated Illustration Generation
- 4. User-Generated Content — Personalized Avatar & Art Creation
- 5. Design Automation — Rapid Visual Iteration with Image Editing Models
- Production Integration Patterns
- API Quick Reference
- What's Next?
Before You Start
All examples use the following base configuration:
# Base URL
https://api.cloud.scenario.com/v1
# Authentication: Basic Auth (Base64-encoded API_KEY:API_SECRET)
Authorization: Basic <base64(API_KEY:API_SECRET)>
# Example with curl
curl https://api.cloud.scenario.com/v1/models \
-H "Authorization: Basic $(echo -n 'your-api-key:your-api-secret' | base64)"You can generate API key pairs from your Scenario dashboard under Settings > API Keys. You will generate an API Key and an API Secret.
All generation endpoints use the unified path /generate/custom/{modelId}. You pass the model ID in the URL and the generation parameters (prompt, images, settings) in the request body. The available API parameters for each model are listed on the model's page in the Scenario web app — check there for supported inputs, defaults, and constraints specific to the model you're using.
Uploading Assets
When passing images to generation endpoints, you reference them by asset ID. Upload images first via the Assets API to get an asset ID. See Uploading Assets for details.
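As a minimal sketch, an upload is a single multipart POST to /assets. The `image` form-field name and the `asset.id` response path below follow the Quick Reference table in this guide, but verify them against the Assets API docs:

```python
import requests
from base64 import b64encode

BASE_URL = "https://api.cloud.scenario.com/v1"

def basic_auth_header(api_key: str, api_secret: str) -> dict:
    """Build the Basic Auth header from an API key/secret pair."""
    token = b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def upload_asset(path: str, api_key: str, api_secret: str) -> str:
    """Upload a local image via POST /assets and return its asset ID."""
    with open(path, "rb") as f:
        response = requests.post(
            f"{BASE_URL}/assets",
            headers=basic_auth_header(api_key, api_secret),
            files={"image": f},  # assumed form-field name; see the Assets API docs
        )
    response.raise_for_status()
    return response.json()["asset"]["id"]  # assumed response shape
```

The returned ID is what you pass as `image` or `referenceImages` in the generation calls throughout this guide.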
Available Models
Scenario provides a wide catalog of models accessible via /generate/custom/{modelId}. Browse available models by category:
| Category | Description | Documentation |
|---|---|---|
| Image Generation | Third-party and Scenario-trained models for text-to-image and image-to-image | Image Models |
| Video Generation | Text-to-video and image-to-video models (Runway, Luma, Kling, etc.) | Video Models |
| 3D Generation | Text-to-3D and image-to-3D model generation | 3D Models |
| Audio Generation | Text-to-audio and music generation | Audio Models |
| Background Removal | Remove backgrounds from images automatically | Background Removal |
| Vectorization | Convert raster images to vector format | Vectorization |
| Image Upscale | Enhance image resolution and quality | Image Upscale |
| Video Upscale | Enhance video resolution and quality | Video Upscale |
You can also train your own custom models to capture a specific art style, character, or concept. See Training Custom Models for details.
1. Game Asset Pipeline — Batch Generation with Style Consistency
The Problem
Game studios need hundreds of consistent assets — character portraits, item icons, environment textures — that all share the same art style. Manual creation is slow and expensive. Outsourcing risks style drift.
Architecture
Game Editor / CI/CD → Your Backend (job queue) → Scenario API (generate) → Asset CDN (deliver)
Step 1: Train a Custom Model on Your Art Style
Upload 10–30 reference images of your game's art style, then kick off training:
import requests
from base64 import b64encode
API_KEY = "your-api-key"
API_SECRET = "your-api-secret"
BASE_URL = "https://api.cloud.scenario.com/v1"
AUTH = b64encode(f"{API_KEY}:{API_SECRET}".encode()).decode()
HEADERS = {"Authorization": f"Basic {AUTH}", "Content-Type": "application/json"}
# Create a new model
response = requests.post(f"{BASE_URL}/models", headers=HEADERS, json={
"name": "pixel-dungeon-style",
"trainingSubjectType": "style"
})
model_id = response.json()["model"]["id"]
# Upload training images (repeat for each image)
with open("reference_art_01.png", "rb") as f:
requests.post(
f"{BASE_URL}/models/{model_id}/training-images",
headers={"Authorization": f"Basic {AUTH}"},
files={"image": f}
)
# Start training
requests.put(f"{BASE_URL}/models/{model_id}/train", headers=HEADERS)
Step 2: Generate Assets in Batch
Once trained, generate assets using your custom model. The modelId in the URL is the base model (e.g., the Flux variant you trained on), and your trained model ID is passed in the request body as a parameter:
def generate_asset(prompt, base_model_id, trained_model_id, num_images=4):
"""Generate game assets with consistent style using a trained LoRA."""
response = requests.post(
f"{BASE_URL}/generate/custom/{base_model_id}",
headers=HEADERS,
json={
"prompt": prompt,
"modelId": trained_model_id, # Your trained model ID
"numOutputs": num_images,
"seed": 42 # Fix seed for reproducibility during iteration
}
)
return response.json()
# Generate a batch of game assets
asset_prompts = [
"iron sword icon, game item, top-down view, transparent background",
"healing potion icon, red liquid, glass bottle, game item",
"wooden shield icon, game item, top-down view",
"fire spell icon, magical flames, game UI element",
]
BASE_MODEL = "flux-2-dev" # The base model your LoRA was trained on
TRAINED_MODEL = "your-trained-model-id" # The trained model ID from Step 1
jobs = []
for prompt in asset_prompts:
result = generate_asset(prompt, base_model_id=BASE_MODEL, trained_model_id=TRAINED_MODEL)
jobs.append(result["job"]["jobId"])
print(f"Started job: {result['job']['jobId']} for: {prompt[:50]}")
Step 3: Poll for Completion
Generation is asynchronous. Poll the job status until it completes:
import time
def wait_for_job(job_id, timeout=120):
"""Poll until job completes."""
start = time.time()
while time.time() - start < timeout:
response = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=HEADERS)
job = response.json()["job"]
if job["status"] == "succeeded":
return job
elif job["status"] == "failed":
raise Exception(f"Job failed: {job.get('error', 'Unknown error')}")
time.sleep(2)
raise TimeoutError(f"Job {job_id} did not complete within {timeout}s")
# Collect all generated assets
for job_id in jobs:
completed = wait_for_job(job_id)
for asset in completed.get("assets", []):
print(f"Asset ready: {asset['id']} — {asset['url']}")
Best Practices for Game Pipelines
- Train once, generate many: A single trained model can produce thousands of consistent assets.
- Use seeds: Fix the seed parameter during iteration to get reproducible results. Randomize in production for variety.
- Use collections: Group related assets (e.g., "Forest Tileset", "UI Icons") via the Collections API for easy management.
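The seed recommendation can be captured in a small payload builder that pins the seed while you iterate and randomizes it in production. The parameter names mirror Step 2 above; the randomization range is an arbitrary choice, not an API requirement:

```python
import random

def generation_payload(prompt, trained_model_id, *, iterating=False, num_outputs=4):
    """Build the body for POST /generate/custom/{baseModelId}.

    Pins the seed while iterating for reproducible results; randomizes
    it otherwise for variety.
    """
    return {
        "prompt": prompt,
        "modelId": trained_model_id,  # your trained model ID
        "numOutputs": num_outputs,
        "seed": 42 if iterating else random.randint(0, 2**31 - 1),
    }
```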
2. E-Commerce — Dynamic Product Visuals
The Problem
E-commerce platforms need product images in multiple styles, contexts, and formats — lifestyle shots, seasonal themes, A/B test variants — without scheduling a photoshoot for each variation.
Architecture
Product DB (new product) → Image Worker → Scenario API (generate) → Your App (display variants)
Generate Product Variants
Upload your product photo first to get an asset ID (see Uploading Assets), then transform it into new contexts:
const SCENARIO_API = "https://api.cloud.scenario.com/v1";
const API_KEY = "your-api-key";
const API_SECRET = "your-api-secret";
const AUTH = Buffer.from(`${API_KEY}:${API_SECRET}`).toString("base64");
async function generateProductVariant(modelId, assetId, context) {
const response = await fetch(
`${SCENARIO_API}/generate/custom/${modelId}`,
{
method: "POST",
headers: {
Authorization: `Basic ${AUTH}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
prompt: `professional product photo, ${context}, studio lighting, clean background`,
image: assetId, // Asset ID of your uploaded product photo
strength: 0.6, // How much to transform (0 = keep original, 1 = fully reimagine)
numOutputs: 3,
}),
}
);
return response.json();
}
// Generate seasonal variants for a product
const MODEL_ID = "your-product-model-id";
const PRODUCT_ASSET_ID = "asset_sneaker01"; // Upload your product photo first
const contexts = [
"winter holiday theme, snowflakes, cozy warm lighting",
"summer beach theme, bright natural sunlight",
"minimalist modern, white background, editorial style",
"luxury premium feel, dark background, dramatic lighting",
];
for (const context of contexts) {
const result = await generateProductVariant(
MODEL_ID,
PRODUCT_ASSET_ID,
context
);
console.log(`Job ${result.job.jobId}: ${context.slice(0, 40)}...`);
}
Remove & Replace Backgrounds
Use the background removal tool, then place the product in a new scene with an image editing model:
// Step 1: Remove background
async function removeBackground(assetId) {
const response = await fetch(
`${SCENARIO_API}/generate/custom/photoroom-background-remover`,
{
method: "POST",
headers: {
Authorization: `Basic ${AUTH}`,
"Content-Type": "application/json",
},
body: JSON.stringify({ image: assetId }),
}
);
return response.json();
}
// Step 2: Generate the product in a new scene using an image editing model
async function placeInScene(modelId, assetId, scenePrompt) {
const response = await fetch(
`${SCENARIO_API}/generate/custom/${modelId}`,
{
method: "POST",
headers: {
Authorization: `Basic ${AUTH}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
prompt: scenePrompt,
image: assetId,
numOutputs: 2,
}),
}
);
return response.json();
}
Best Practices for E-Commerce
- Use moderate strength (0.4–0.7) when passing a reference image: preserves product identity while changing context.
- Cache generated variants: Store asset IDs and URLs in your product database to avoid regenerating.
- A/B test with numOutputs: Generate 3–4 variants per product, then measure click-through rates.
- Use ?dryRun=true: Preview the credit cost of a generation before committing.
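The strength recommendation can be enforced in a small helper. This is a sketch of a request-body builder that clamps strength into the moderate band suggested above; the clamp range is this guide's recommendation, not an API constraint:

```python
def variant_payload(prompt, asset_id, strength=0.6, num_outputs=3):
    """Body for an image-to-image variant request.

    Clamps strength into the 0.4–0.7 band so a caller can't
    accidentally wash out the product identity.
    """
    return {
        "prompt": prompt,
        "image": asset_id,
        "strength": min(0.7, max(0.4, strength)),
        "numOutputs": num_outputs,
    }
```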
3. Content Platform — Automated Illustration Generation
The Problem
Content platforms (blogs, news, storytelling apps) need relevant visuals for every piece of content. Stock photos feel generic. Manual illustration is too slow for content velocity.
Architecture
CMS / Editor (publish) → NLP Extract (keywords) → Scenario API (generate) → Article (with art)
Extract Keywords and Generate Illustrations
import requests
import json
from base64 import b64encode
API_KEY = "your-api-key"
API_SECRET = "your-api-secret"
BASE_URL = "https://api.cloud.scenario.com/v1"
AUTH = b64encode(f"{API_KEY}:{API_SECRET}".encode()).decode()
HEADERS = {"Authorization": f"Basic {AUTH}", "Content-Type": "application/json"}
def generate_article_illustration(model_id, title, summary, style="editorial illustration"):
"""Generate an illustration based on article content."""
# Build a descriptive prompt from article metadata
prompt = (
f"{style}, {title}, "
f"depicting: {summary[:200]}, "
f"professional illustration, high quality, vivid colors"
)
response = requests.post(
f"{BASE_URL}/generate/custom/{model_id}",
headers=HEADERS,
json={
"prompt": prompt,
"numOutputs": 3,
}
)
return response.json()
# Example: Auto-illustrate a blog post
result = generate_article_illustration(
model_id="your-editorial-model-id",
title="The Future of Renewable Energy",
summary="Solar panels and wind turbines powering a modern sustainable city",
style="flat vector illustration, modern tech style"
)
print(f"Job started: {result['job']['jobId']}")
Auto-Caption Generated Images for SEO
Use the caption endpoint to generate alt text automatically:
def generate_caption(asset_id):
"""Generate an SEO-friendly caption for a generated image."""
response = requests.post(f"{BASE_URL}/generate/caption", headers=HEADERS, json={
"assetId": asset_id
})
return response.json()
# After generation completes, caption the best result
caption_result = generate_caption("asset_abc123")
alt_text = caption_result.get("caption", "")
print(f"Alt text: {alt_text}")
Best Practices for Content Platforms
- Create style-specific models: Train separate models for "tech blog," "lifestyle," and "news editorial" content to match your brand.
- Generate at publish time: Hook into your CMS publish event to trigger generation, with a fallback placeholder.
- Caption everything: Use the caption API for accessibility and SEO.
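A publish-time hook can be sketched as follows. It reuses the prompt-building pattern from the example above and returns None on any request failure so the CMS can render a placeholder until a retry succeeds; the fallback URL is hypothetical:

```python
import requests

BASE_URL = "https://api.cloud.scenario.com/v1"
PLACEHOLDER_URL = "https://cdn.example.com/placeholder.png"  # hypothetical fallback image

def build_prompt(title, summary, style="editorial illustration"):
    """Compose a prompt from article metadata (mirrors the example above)."""
    return f"{style}, {title}, depicting: {summary[:200]}"

def illustration_for_post(model_id, title, summary, headers):
    """Start generation at publish time; return the job ID, or None on failure.

    A None return tells the caller to render PLACEHOLDER_URL and retry later.
    """
    try:
        response = requests.post(
            f"{BASE_URL}/generate/custom/{model_id}",
            headers=headers,
            json={"prompt": build_prompt(title, summary), "numOutputs": 1},
            timeout=10,
        )
        response.raise_for_status()
        return response.json()["job"]["jobId"]
    except requests.RequestException:
        return None
```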
4. User-Generated Content — Personalized Avatar & Art Creation
The Problem
Social apps and gaming platforms want users to create personalized content — avatars, profile art, custom stickers — without exposing raw AI complexity.
Architecture
Mobile App (user picks style + prompt) → Your API (validates, rate limits) → Scenario API (generates) → User Gallery
Expose a Curated Generation Interface
Never expose raw Scenario API access to end users. Wrap it in your own API with guardrails:
// your-backend/routes/generate-avatar.js
import express from "express";
const SCENARIO_API = "https://api.cloud.scenario.com/v1";
const API_KEY = process.env.SCENARIO_API_KEY;
const API_SECRET = process.env.SCENARIO_API_SECRET;
const AUTH = Buffer.from(`${API_KEY}:${API_SECRET}`).toString("base64");
// Pre-approved styles users can pick from
const STYLE_PRESETS = {
anime: { modelId: "model-anime-portrait-id" },
pixel: { modelId: "model-pixel-art-id" },
fantasy: { modelId: "model-fantasy-portrait-id" },
};
export async function generateAvatar(req, res) {
const { style, description } = req.body;
const userId = req.user.id;
// Validate style
const preset = STYLE_PRESETS[style];
if (!preset) {
return res.status(400).json({ error: "Invalid style. Choose: anime, pixel, or fantasy." });
}
// Sanitize user input (basic example — use a content filter in production)
const safeDescription = description
.replace(/[^a-zA-Z0-9\s,.-]/g, "")
.slice(0, 200);
// Rate limit check (implement per your needs)
const dailyCount = await getUserDailyGenerationCount(userId);
if (dailyCount >= 10) {
return res.status(429).json({ error: "Daily generation limit reached." });
}
// Generate via Scenario
const response = await fetch(
`${SCENARIO_API}/generate/custom/${preset.modelId}`,
{
method: "POST",
headers: {
Authorization: `Basic ${AUTH}`,
"Content-Type": "application/json",
},
body: JSON.stringify({
prompt: `portrait avatar, ${safeDescription}, centered, clean background`,
numOutputs: 4, // Give users choices
}),
}
);
const result = await response.json();
return res.json({
jobId: result.job.jobId,
message: "Generating your avatar — check back in a few seconds!",
});
}
Best Practices for User-Facing Apps
- Never expose your API key client-side: Always proxy through your backend.
- Use style presets: Let users choose from curated styles rather than typing raw prompts.
- Sanitize input: Filter user text for prompt injection and inappropriate content.
- Rate limit per user: Prevent abuse — 5–20 generations per day is typical for free tiers.
- Use ?dryRun=true to preview costs before committing to generation.
- Generate multiple outputs: Return 3–4 options so users can pick their favorite.
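The rate-limit check in the avatar example (getUserDailyGenerationCount) can be backed by a simple per-user daily counter. This is an in-memory sketch; production apps should use a shared store such as Redis or your database so counts survive restarts and are shared across workers:

```python
from collections import defaultdict
from datetime import date

class DailyRateLimiter:
    """Per-user daily generation counter (in-memory sketch)."""

    def __init__(self, limit: int = 10):
        self.limit = limit
        self._counts = defaultdict(int)

    def allow(self, user_id: str) -> bool:
        """Record one generation and return True if the user is under the limit."""
        key = (user_id, date.today().isoformat())  # counts reset each calendar day
        if self._counts[key] >= self.limit:
            return False
        self._counts[key] += 1
        return True
```

Call `allow(userId)` before forwarding the request to Scenario, and return a 429 when it comes back False.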
5. Design Automation — Rapid Visual Iteration with Image Editing Models
The Problem
Design agencies need to rapidly iterate on visual concepts for clients — exploring different aesthetics, refining details, and restyling existing assets. Manual iteration through Photoshop is slow and expensive.
Architecture
Design / Asset Upload → Your Backend (orchestrate) → Scenario API (image editing) → Client Review App
Use Image Editing Models to Restyle and Refine
Image editing models (like Gemini 3.1) let you transform existing visuals with natural language instructions. Upload your source design (see Uploading Assets), then describe the changes you want:
import requests
from base64 import b64encode
API_KEY = "your-api-key"
API_SECRET = "your-api-secret"
BASE_URL = "https://api.cloud.scenario.com/v1"
AUTH = b64encode(f"{API_KEY}:{API_SECRET}".encode()).decode()
HEADERS = {"Authorization": f"Basic {AUTH}", "Content-Type": "application/json"}
def generate_design_variations(model_id, source_asset_id, style_instructions):
"""Generate multiple design directions from a single source design."""
results = []
for instruction in style_instructions:
response = requests.post(
f"{BASE_URL}/generate/custom/{model_id}",
headers=HEADERS,
json={
"prompt": instruction,
"referenceImages": [source_asset_id], # Array of asset IDs
"numOutputs": 2,
}
)
result = response.json()
results.append({
"instruction": instruction,
"jobId": result["job"]["jobId"]
})
return results
# Generate 4 different design directions from one source design
variations = generate_design_variations(
model_id="google-gemini-3-1-flash", # Image editing model
source_asset_id="asset_landingPageV3", # Upload source design first
style_instructions=[
"Restyle this to a minimalist tech startup look with white space and sans-serif typography",
"Make this bold and colorful with gradient backgrounds, playful consumer brand feel",
"Convert to dark mode with neon accents, futuristic SaaS dashboard aesthetic",
"Restyle with warm earth tones, organic shapes, lifestyle brand editorial feel",
]
)
for v in variations:
print(f"Instruction: {v['instruction'][:50]}... → Job: {v['jobId']}")
Iterate on a Selected Direction
Once a client picks a direction, continue refining with follow-up edit instructions:
def refine_design(model_id, asset_id, refinement_instruction):
"""Refine a selected design with natural language instructions."""
response = requests.post(
f"{BASE_URL}/generate/custom/{model_id}",
headers=HEADERS,
json={
"referenceImages": [asset_id],
"prompt": refinement_instruction,
"numOutputs": 3,
}
)
return response.json()
# Client likes the minimalist direction, wants tweaks
refine_design(
model_id="google-gemini-3-1-flash",
asset_id="asset_selectedDesign123",
refinement_instruction="Make the colors softer, add subtle drop shadows, and round the corners"
)
Best Practices for Design Workflows
- Start broad, then refine: Generate multiple style directions first, then iterate on the client's favorite.
- Use image editing models for modifications: Models like Gemini 3.1 understand natural language edit instructions — no need to describe the full image, just the changes.
- Chain edits: Apply incremental refinements rather than trying to get everything right in one prompt.
- Organize with collections: Create a collection per client/project to keep generated concepts organized.
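Chaining edits can be sketched as a loop that submits one instruction, waits for the job (the same polling pattern as Section 1), and feeds the resulting asset into the next step. The job/assets response shape is assumed from the earlier examples:

```python
import time
import requests

BASE_URL = "https://api.cloud.scenario.com/v1"

def edit_payload(asset_id: str, instruction: str, num_outputs: int = 1) -> dict:
    """Body for one incremental edit step (parameter names from the examples above)."""
    return {
        "referenceImages": [asset_id],
        "prompt": instruction,
        "numOutputs": num_outputs,
    }

def chain_edits(model_id, source_asset_id, instructions, headers, timeout=120):
    """Apply edit instructions one at a time, feeding each result into the next."""
    asset_id = source_asset_id
    for instruction in instructions:
        response = requests.post(
            f"{BASE_URL}/generate/custom/{model_id}",
            headers=headers,
            json=edit_payload(asset_id, instruction),
        )
        job_id = response.json()["job"]["jobId"]
        start = time.time()
        while True:  # poll for completion, as in Section 1
            job = requests.get(f"{BASE_URL}/jobs/{job_id}", headers=headers).json()["job"]
            if job["status"] == "succeeded":
                break
            if job["status"] == "failed" or time.time() - start > timeout:
                raise RuntimeError(f"Edit step failed or timed out: {instruction!r}")
            time.sleep(2)
        asset_id = job["assets"][0]["id"]  # carry the result into the next step
    return asset_id
```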
Production Integration Patterns
Error Handling & Retries
import time
import requests
from requests.exceptions import HTTPError
def scenario_request(method, path, retries=3, **kwargs):
"""Make a Scenario API request with retry logic."""
url = f"https://api.cloud.scenario.com/v1{path}"
for attempt in range(retries):
try:
response = requests.request(method, url, headers=HEADERS, **kwargs)
response.raise_for_status()
return response.json()
except HTTPError as e:
status = e.response.status_code
if status == 429:
# Rate limited — back off exponentially
wait = 2 ** attempt
print(f"Rate limited. Retrying in {wait}s...")
time.sleep(wait)
elif status >= 500:
# Server error — retry
wait = 2 ** attempt
print(f"Server error {status}. Retrying in {wait}s...")
time.sleep(wait)
else:
# Client error (400, 401, 403, 404) — don't retry
error_body = e.response.json()
raise Exception(
f"API error {status}: {error_body.get('message', 'Unknown error')}"
)
raise Exception(f"Failed after {retries} retries")
# Usage
result = scenario_request("POST", "/generate/custom/your-model-id", json={
"prompt": "a fantasy sword icon",
"numOutputs": 1
})
Cost Management
Use dryRun to estimate costs before committing:
# Preview cost without generating — dryRun is a URL query parameter
dry_run = scenario_request("POST", "/generate/custom/your-model-id?dryRun=true", json={
"prompt": "a fantasy landscape, epic, detailed",
"numOutputs": 4,
})
estimated_cost = dry_run.get("billing", {}).get("cost", 0)
print(f"This generation would cost {estimated_cost} credits")
API Quick Reference
| Task | Endpoint | Key Parameters |
|---|---|---|
| Upload asset | POST /assets | image (file) |
| Generate images | POST /generate/custom/{modelId} | prompt, numOutputs |
| Generate with reference image | POST /generate/custom/{modelId} | prompt, image (asset ID), strength |
| Image editing | POST /generate/custom/{modelId} | prompt, referenceImages (array of asset IDs) |
| Remove background | POST /generate/custom/photoroom-background-remover | image (asset ID) |
| Upscale | POST /generate/custom/{upscaleModelId} | image (asset ID) |
| Caption image | POST /generate/caption | assetId |
| Generate video | POST /generate/custom/{videoModelId} | prompt, image (asset ID) |
| Check job status | GET /jobs/{jobId} | — |
| List assets | GET /assets | pageSize, privacy, tags |
| Create collection | POST /collections | name |
| Train model | PUT /models/{modelId}/train | parameters (optional training settings) |
What's Next?
- Quick Start Guide — Make your first API call in 5 minutes.
- Uploading Assets — How to upload images to get asset IDs for generation.
- Training Custom Models — Deep dive into model training for style consistency.
- Workflows & Apps — Chain multiple generation steps into automated pipelines.
- API Reference — Full OpenAPI specification.