Training Models

Scenario lets you train custom LoRA models on top of state-of-the-art base architectures. Once trained, your model captures your specific style, character, or concept and can be used for generation and editing.


Choosing a training type

When creating a model, you set its type to one of the supported training types. This determines which base model is used during training and which inference models the resulting LoRA is compatible with.

Standard training types produce LoRAs for text-to-image and image-to-image generation. You provide a set of example images.

Edit training types (those ending in -edit-lora) produce LoRAs for instruction-following image editing. Instead of individual images, you provide before/after pairs with an instruction describing the change.


FLUX.2 Models

FLUX.2 Dev

Training type: flux.2-dev-lora

High-quality text-to-image and image-to-image model. Best choice when output fidelity is the priority. Generates at 28 inference steps by default.

FLUX.2 Dev Edit

Training type: flux.2-dev-edit-lora

Edit variant of FLUX.2 Dev. Train with before/after image pairs to teach the model how to apply edits (e.g. changing style, adding elements, recoloring). Requires at least 2 image pairs.


FLUX.2 Klein — distilled vs. base

The Klein family comes in two flavors:

  • Distilled (non-base): Optimized for speed. Generates in ~4 steps at guidance 1.0. Lower cost, faster iteration.
  • Base: Higher quality output using ~28 steps and guidance 4.0, closer to FLUX.2 Dev in quality.

| Training type | Variant | Steps | Profile |
| --- | --- | --- | --- |
| flux.2-klein-4b-lora | Distilled, 4b params | ~4 | Fast |
| flux.2-klein-9b-lora | Distilled, 9b params | ~4 | Fast, larger model |
| flux.2-klein-base-4b-lora | Base, 4b params | ~28 | Higher quality |
| flux.2-klein-base-9b-lora | Base, 9b params | ~28 | Higher quality, larger model |

Edit variants follow the same pattern and require image pairs:

| Training type | Variant |
| --- | --- |
| flux.2-klein-4b-edit-lora | Distilled 4b edit |
| flux.2-klein-9b-edit-lora | Distilled 9b edit |
| flux.2-klein-base-4b-edit-lora | Base 4b edit |
| flux.2-klein-base-9b-edit-lora | Base 9b edit |

Qwen Image Models

Qwen Image

Training type: qwen-image-lora

Cost-effective text-to-image and image-to-image model. Good choice for high-volume use cases where budget matters.

Qwen Image 2512

Training type: qwen-image-2512-lora

Updated Qwen Image checkpoint with improved resolution support.

Qwen Image Edit variants

Edit variants for instruction-following image editing. All require before/after image pairs.

| Training type | Checkpoint |
| --- | --- |
| qwen-image-edit-lora | Base edit model |
| qwen-image-edit-2509-lora | September 2025 |
| qwen-image-edit-2511-lora | November 2025 (latest) |

Z-Image Models

Z-Image

Training type: zimage-lora

Highest quality text-to-image and image-to-image model. Also supports ControlNet (edge, depth, pose) for advanced structural control during inference.

Z-Image Turbo

Training type: zimage-turbo-lora

Fast variant optimized for low step counts (default: 9 steps). Use when generation speed matters more than maximum fidelity.

Z-Image De-Turbo

Training type: zimage-de-turbo-lora

Sits between Z-Image and Z-Image Turbo in the speed/quality trade-off. Compatible with both Z-Image and Z-Image Turbo inference models.


Training images

Recommended: 5–15 images · Maximum: 50 images

Use clean, consistent images that clearly represent the subject or style you want to capture. Quality and consistency matter more than quantity.

For edit models, provide before/after image pairs with a text instruction per pair. A minimum of 2 pairs is required.
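These limits can be expressed as a small pre-flight check before uploading. The helper below is a hypothetical sketch (not part of the Scenario API) encoding the recommended range, the hard maximum, and the minimum pair count stated above:

```python
def check_training_set(num_images: int = 0, num_pairs: int = 0,
                       is_edit: bool = False) -> list[str]:
    """Return warnings/errors for a planned training set (hypothetical helper)."""
    issues = []
    if is_edit:
        # Edit models train on before/after pairs, minimum 2.
        if num_pairs < 2:
            issues.append("edit models require at least 2 image pairs")
    else:
        # Standard models: 5-15 images recommended, 50 is the hard maximum.
        if num_images > 50:
            issues.append("maximum is 50 training images")
        elif not 5 <= num_images <= 15:
            issues.append("5-15 images recommended")
    return issues
```

For example, `check_training_set(num_images=10)` returns no issues, while `check_training_set(num_pairs=1, is_edit=True)` flags the missing pair.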


Training flow

1. Create a model

curl -X POST "https://api.scenario.com/v1/models?projectId=<projectId>" \
  -H "Authorization: Basic <base64(key:secret)>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "My Character",
    "type": "flux.2-dev-lora"
  }'

import requests
import base64

credentials = base64.b64encode(b"<key>:<secret>").decode()
headers = {
    "Authorization": f"Basic {credentials}",
    "Content-Type": "application/json",
}

response = requests.post(
    "https://api.scenario.com/v1/models",
    params={"projectId": "<projectId>"},
    headers=headers,
    json={
        "name": "My Character",
        "type": "flux.2-dev-lora",
    },
)
model_id = response.json()["model"]["id"]

Response:

{
  "model": {
    "id": "model_xxxxxxxxxxxx",
    "name": "My Character",
    "type": "flux.2-dev-lora",
    "status": "draft"
  }
}

2. Upload training images

Upload images one at a time or in batches by asset ID (up to 10 per request):

# Single image via base64
curl -X POST "https://api.scenario.com/v1/models/<modelId>/training-images?projectId=<projectId>" \
  -H "Authorization: Basic <base64(key:secret)>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "image-01.jpg",
    "data": "data:image/jpeg;base64,<base64data>"
  }'

# Batch by asset IDs
curl -X POST "https://api.scenario.com/v1/models/<modelId>/training-images?projectId=<projectId>" \
  -H "Authorization: Basic <base64(key:secret)>" \
  -H "Content-Type: application/json" \
  -d '{
    "assetIds": ["asset_aaa", "asset_bbb", "asset_ccc"]
  }'

import base64

# Single image via base64
with open("image-01.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode()

requests.post(
    f"https://api.scenario.com/v1/models/{model_id}/training-images",
    params={"projectId": "<projectId>"},
    headers=headers,
    json={
        "name": "image-01.jpg",
        "data": f"data:image/jpeg;base64,{image_data}",
    },
)

# Batch by asset IDs
requests.post(
    f"https://api.scenario.com/v1/models/{model_id}/training-images",
    params={"projectId": "<projectId>"},
    headers=headers,
    json={"assetIds": ["asset_aaa", "asset_bbb", "asset_ccc"]},
)

2b. Upload image pairs (edit models only)

For edit training types, provide before/after pairs with an instruction instead of individual images:

curl -X PUT "https://api.scenario.com/v1/models/<modelId>/training-images/pairs?projectId=<projectId>" \
  -H "Authorization: Basic <base64(key:secret)>" \
  -H "Content-Type: application/json" \
  -d '[
    {
      "sourceId": "asset_before_01",
      "targetId": "asset_after_01",
      "instruction": "make the background a snowy forest"
    },
    {
      "sourceId": "asset_before_02",
      "targetId": "asset_after_02",
      "instruction": "add rain and dark clouds"
    }
  ]'

requests.put(
    f"https://api.scenario.com/v1/models/{model_id}/training-images/pairs",
    params={"projectId": "<projectId>"},
    headers=headers,
    json=[
        {
            "sourceId": "asset_before_01",
            "targetId": "asset_after_01",
            "instruction": "make the background a snowy forest",
        },
        {
            "sourceId": "asset_before_02",
            "targetId": "asset_after_02",
            "instruction": "add rain and dark clouds",
        },
    ],
)

3. Start training

curl -X PUT "https://api.scenario.com/v1/models/<modelId>/train?projectId=<projectId>" \
  -H "Authorization: Basic <base64(key:secret)>" \
  -H "Content-Type: application/json" \
  -d '{
    "parameters": {
      "seed": 123456789
    }
  }'

response = requests.put(
    f"https://api.scenario.com/v1/models/{model_id}/train",
    params={"projectId": "<projectId>"},
    headers=headers,
    json={
        "parameters": {
            "seed": 123456789,
        }
    },
)
print(response.json())

Response:

{
  "model": {
    "id": "model_xxxxxxxxxxxx",
    "status": "training",
    "trainingProgress": {
      "stage": "queued-for-train",
      "position": 2,
      "progress": 0
    }
  },
  "job": {
    "id": "job_xxxxxxxxxxxx",
    "type": "flux-model-training"
  },
  "creativeUnits": {
    "estimatedCreativeUnits": 150
  }
}

4. Poll for completion

curl "https://api.scenario.com/v1/models/<modelId>?projectId=<projectId>" \
  -H "Authorization: Basic <base64(key:secret)>"

import time

while True:
    response = requests.get(
        f"https://api.scenario.com/v1/models/{model_id}",
        params={"projectId": "<projectId>"},
        headers=headers,
    )
    status = response.json()["model"]["status"]
    print(f"Status: {status}")

    if status in ("trained", "failed"):
        break

    time.sleep(10)

The model.status field transitions through:

draft → training → trained (success) or failed

trainingProgress.stage provides finer-grained status during training.
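For example, both fields can be read together from the poll response. The snippet below uses the response shape from step 3; note that trainingProgress may be absent once training has finished, so it is read defensively:

```python
# Response body shape taken from the step 3 example above.
response_body = {
    "model": {
        "id": "model_xxxxxxxxxxxx",
        "status": "training",
        "trainingProgress": {
            "stage": "queued-for-train",
            "position": 2,
            "progress": 0,
        },
    }
}

model = response_body["model"]
status = model["status"]
# trainingProgress may be missing on a trained/failed model, so use .get().
progress = model.get("trainingProgress", {})
print(f"{status} / {progress.get('stage')} ({progress.get('progress', 0):.0%})")
# prints: training / queued-for-train (0%)
```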


5. Cancel training (optional)

curl -X POST "https://api.scenario.com/v1/models/<modelId>/train/action?projectId=<projectId>" \
  -H "Authorization: Basic <base64(key:secret)>" \
  -H "Content-Type: application/json" \
  -d '{ "action": "cancel" }'

requests.post(
    f"https://api.scenario.com/v1/models/{model_id}/train/action",
    params={"projectId": "<projectId>"},
    headers=headers,
    json={"action": "cancel"},
)

Training parameters

| Parameter | Type | Description |
| --- | --- | --- |
| seed | number | For reproducibility |
| learningRate | number | Min: 0.00001 · Max: 0.001 · Default: 0.00005 |
| rank | number | LoRA rank · Min: 2 · Max: 128 · Default: 64 |
| batchSize | number | Min: 1 · Max: 8 · Default: 1 |
| nbEpochs | number | Min: 1 · Max: 100 · Default: 10 |
| nbRepeats | number | Min: 1 · Max: 100 · Default: 20 |
| samplePrompts | string[] | Up to 4 prompts · sample images generated at each epoch so you can monitor progress |
| sampleSourceImages | string[] | Edit models only: asset IDs to use as source images for sample generation during training |
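Putting the table together, a fuller parameters object might look like the sketch below (values are illustrative; the defaults come from the table above). It would be passed as the "parameters" field of the train request in step 3:

```python
# Illustrative training-parameters payload built from the table above.
# All values sit within the documented min/max ranges.
parameters = {
    "seed": 123456789,
    "learningRate": 0.00005,  # default; range 0.00001-0.001
    "rank": 64,               # default; range 2-128
    "batchSize": 1,           # default; range 1-8
    "nbEpochs": 10,           # default; range 1-100
    "nbRepeats": 20,          # default; range 1-100
    "samplePrompts": [        # up to 4 prompts (these two are examples)
        "character portrait, studio lighting",
        "full-body pose, plain background",
    ],
}

# Sanity-check the documented ranges before sending.
assert 0.00001 <= parameters["learningRate"] <= 0.001
assert 2 <= parameters["rank"] <= 128
assert len(parameters["samplePrompts"]) <= 4
```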

Compatibility

A LoRA trained on one base model family cannot be used with a different family. For example, a flux.2-dev-lora can only be used with FLUX.2 Dev inference — not with Qwen or Z-Image models.

Within the Z-Image family, LoRAs trained with zimage-lora, zimage-turbo-lora, and zimage-de-turbo-lora are all compatible with both Z-Image and Z-Image Turbo inference models.
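As a rough sketch, these rules can be expressed as a lookup table. The mapping below is an illustrative grouping built from the statements above, not an official API enum, and covers only the cases this section spells out:

```python
# Illustrative training-type -> compatible-inference-family mapping,
# per the compatibility rules above. Not an official API enum.
COMPATIBLE_INFERENCE = {
    "flux.2-dev-lora": {"FLUX.2 Dev"},
    "zimage-lora": {"Z-Image", "Z-Image Turbo"},
    "zimage-turbo-lora": {"Z-Image", "Z-Image Turbo"},
    "zimage-de-turbo-lora": {"Z-Image", "Z-Image Turbo"},
}

def is_compatible(training_type: str, inference_model: str) -> bool:
    """True if a LoRA of this training type can run on the given inference model."""
    return inference_model in COMPATIBLE_INFERENCE.get(training_type, set())
```

So `is_compatible("flux.2-dev-lora", "Z-Image")` is False, while any of the three Z-Image training types pair with either Z-Image inference model.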