
Update

models.update(model_id: str, **kwargs: ModelUpdateParams) -> ModelUpdateResponse
PUT /models/{modelId}

Update the given modelId

Parameters
model_id: str
original_assets: Optional[bool]

If set to true, returns the original asset without transformation

class_slug: Optional[str]

The slug of the class you want to use (ex: “characters-npcs-mobs-characters”). Set to null to unset the class

concepts: Optional[Iterable[Concept]]

Concepts are required for composition models, with one or more LoRAs (see the sketch after the concept fields below).

Only applicable to Flux-based models (and older SD1.5 and SDXL models)

model_id: str

The model ID (example: “model_eyVcnFJcR92BxBkz7N6g5w”)

scale: float

The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2.

maximum: 2
minimum: -2
model_epoch: Optional[str]

The epoch of the model (example: “000001”). Only available for Flux LoRA trained models
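
For example, a composition model's concepts could be updated like this (a minimal sketch; the model IDs, scales, and epoch are placeholders, and the dicts assume the SDK's snake_case keys):

model = client.models.update(
    model_id="model_eyVcnFJcR92BxBkz7N6g5w",  # a composition model
    concepts=[
        {"model_id": "model_loraA", "scale": 0.8},
        {"model_id": "model_loraB", "scale": 1.0, "model_epoch": "000001"},
    ],
)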

epoch: Optional[str]

The epoch of the model. Only available for flux.1-lora and flux.1-kontext-lora based models.

The epoch can only be set if the model has epochs and is in status “trained”.

The default epoch (if not set) is the final model epoch (latest).

Set to null to unset the epoch.
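
For instance, pinning a trained model to a specific epoch and then reverting to the default (a minimal sketch, assuming the SDK serializes None as null for nullable fields):

client.models.update(model_id="modelId", epoch="000001")  # pin a specific epoch
client.models.update(model_id="modelId", epoch=None)      # unset: fall back to the latest epoch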

name: Optional[str]

The model’s name (ex: “Cinematic Realism”).

If not set, the model’s name will be automatically generated when starting training based on training data.

maxLength: 64
negative_prompt_embedding: Optional[str]

Add a negative prompt embedding to every model’s generation

parameters: Optional[Parameters]

The parameters to use for the model’s training

age: Optional[str]

Age group of the voice (for professional cloning)

Only available for ElevenLabs voice training

batch_size: Optional[float]

The batch size. A larger batch size results in fewer steps and increases the learning rate.

Only available for Flux LoRA training

maximum: 4
minimum: 1
class_prompt: Optional[str]

The prompt to specify images in the same class as provided instance images

Only available for SD15 training

clone_type: Optional[str]

Type of voice cloning: “instant” (fast) or “professional” (higher quality, requires captcha)

Only available for ElevenLabs voice training

concept_prompt: Optional[str]

The prompt with identifier specifying the instance (or subject) of the class (example: “a daiton dog”)

Default value varies depending on the model type:

  • For SD1.5: “daiton” if no class is associated with the model
  • For SDXL: “daiton”
  • For Flux: ""
gender: Optional[str]

Gender of the voice (for professional cloning)

Only available for ElevenLabs voice training

language: Optional[str]

Language of the audio samples (ISO 639-1 code)

Only available for ElevenLabs voice training

learning_rate: Optional[float]

Initial learning rate (after the potential warmup period)

Default value varies depending on the model type:

  • For SD1.5 and SDXL: 0.000005
  • For Flux: 0.0001
minimum: 0 (exclusive)
learning_rate_text_encoder: Optional[float]

Initial learning rate (after the potential warmup period) for the text encoder

  • Maximum: Flux LoRA: 0.001
  • Default: SDXL: 0.00005 | Flux LoRA: 0.00001
  • Minimum: SDXL: 0 | Flux LoRA: 0.000001

maximum: 0.001
minimum: 0 (exclusive)
learning_rate_unet: Optional[float]

Initial learning rate (after the potential warmup period) for the UNet

Only available for SDXL LoRA training

minimum: 0 (exclusive)
lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]

The scheduler type to use (default: “constant”)

Only available for SD15 and SDXL LoRA training

One of the following:
"constant"
"constant-with-warmup"
"cosine"
"cosine-with-restarts"
"linear"
"polynomial"
max_train_steps: Optional[float]

Maximum number of training steps to execute (default: varies depending on the model type)

For SDXL LoRA training, please use numTextTrainSteps and numUNetTrainSteps instead

Default value varies depending on the model type:

  • For SD1.5: round((number of training images * 225) / 3)
  • For SDXL: number of training images * 175
  • For Flux: number of training images * 100

Maximum value varies depending on the model type:

  • For SD1.5 and SDXL: [0, 40000]
  • For Flux: [0, 10000]
maximum: 40000
minimum: 0
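
As a quick sanity check on those defaults, the arithmetic can be sketched as plain Python (not an SDK helper; the model-type strings are illustrative):

def default_max_train_steps(model_type: str, num_images: int) -> int:
    # Documented defaults: SD1.5 rounds (images * 225) / 3, SDXL uses images * 175, Flux uses images * 100.
    if model_type.startswith("sd-1_5"):
        return round(num_images * 225 / 3)  # e.g. 20 images -> 1500 steps
    if model_type.startswith("sd-xl"):
        return num_images * 175             # e.g. 20 images -> 3500 steps
    return num_images * 100                 # Flux, e.g. 20 images -> 2000 steps
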
nb_epochs: Optional[float]

The number of epochs to train for

Only available for Flux LoRA training

maximum: 30
minimum: 1
nb_repeats: Optional[float]

The number of times to repeat the training

Only available for Flux LoRA training

maximum: 30
minimum: 1
num_text_train_steps: Optional[float]

The number of training steps for the text encoder

Only available for SDXL LoRA training

maximum: 40000
minimum: 0
num_u_net_train_steps: Optional[float]

The number of training steps for the UNet

Only available for SDXL LoRA training

maximum: 40000
minimum: 0
optimize_for: Optional[Literal["likeness"]]

Optimize the model training task for a specific type of input images. The available values are:

  • “likeness”: optimize training for likeness or portrait (targets specific transformer blocks)
  • “all”: train all transformer blocks
  • “none”: train no specific transformer blocks

This parameter controls which double and single transformer blocks are trained during the LoRA training process.

Only available for Flux LoRA training

prior_loss_weight: Optional[float]

The weight of prior preservation loss

Only available for SD15 and SDXL LoRA training

maximum: 1.7976931348623157
minimum: 0 (exclusive)
random_crop: Optional[bool]

Whether to random crop or center crop images before resizing to the working resolution

Only available for SD15 and SDXL LoRA training

random_crop_ratio: Optional[float]

Ratio of random crops

Only available for SD15 and SDXL LoRA training

maximum: 1
minimum: 0
random_crop_scale: Optional[float]

Scale of random crops

Only available for SD15 and SDXL LoRA training

maximum: 1
minimum: 0
rank: Optional[float]

The dimension of the LoRA update matrices

Only available for SDXL (deprecated), Flux LoRA and Musubi training

Default value varies depending on the model type:

  • For SDXL (deprecated): 64
  • For Flux: 16
  • For Musubi: 64

Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128])

maximum: 128
minimum: 2
remove_background_noise: Optional[bool]

Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long.

Only available for ElevenLabs voice training

sample_prompts: Optional[Sequence[str]]

The prompts to use for each epoch. Only available for Flux LoRA training

sample_source_images: Optional[Sequence[str]]

The sample prompt images (AssetIds) paired with samplePrompts; must be the same length as samplePrompts. Only available for Flux LoRA training
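
For example, the pairing might look like this (a sketch; the asset IDs are placeholders and the keys assume the SDK's snake_case form):

parameters = {
    "sample_prompts": ["a daiton dog", "a daiton dog at night"],
    "sample_source_images": ["asset_aaa", "asset_bbb"],  # paired 1:1 with sample_prompts
}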

scale_lr: Optional[bool]

Whether to scale the learning rate

Note: Legacy parameter, will be ignored

Only available for SD15 and SDXL LoRA training

seed: Optional[float]

Used to reproduce previous results. Default: randomly generated number.

Only available for SD15 and SDXL LoRA training

maximum: 9007199254740991
minimum: 0
text_encoder_training_ratio: Optional[float]

The ratio of training steps allotted to the text encoder.

Example: for 100 steps and a value of 0.2, the text encoder is trained for 20 steps and then the UNet for 80 steps

Note: Legacy parameter, please use numTextTrainSteps and numUNetTrainSteps

Only available for SD15 and SDXL LoRA training

maximum: 0.99
minimum: 0
validation_frequency: Optional[float]

Validation frequency. Cannot be greater than maxTrainSteps value

Only available for SD15 and SDXL LoRA training

minimum: 0
validation_prompt: Optional[str]

Validation prompt

Only available for SD15 and SDXL LoRA training

voice_description: Optional[str]

Description of the voice characteristics

Only available for ElevenLabs voice training

wandb_key: Optional[str]

The Weights & Biases key to use for logging. The maximum length is 40 characters
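
Putting a few of these together, training parameters can be adjusted before training starts (a minimal sketch; the values are illustrative and the keys assume the SDK's snake_case form):

model = client.models.update(
    model_id="model_eyVcnFJcR92BxBkz7N6g5w",
    parameters={
        "learning_rate": 0.0001,  # the documented Flux default
        "rank": 16,               # the documented Flux default rank
        "nb_epochs": 10,
    },
)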

prompt_embedding: Optional[str]

Add a prompt embedding to every model’s generation

short_description: Optional[str]

The model’s short description (ex: “This model generates highly detailed cinematic scenes.”).

If not set, the model’s short description will be automatically generated when starting training based on training data.

maxLength: 256
thumbnail: Optional[str]

The AssetId of the image you want to use as a thumbnail for the model (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”). Set to null to unset the thumbnail

type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]

The model’s type (ex: “flux.1-lora”).

The type can only be changed if the model has the “new” status (see the sketch after the list below).

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
Returns
class ModelUpdateResponse:
model: Model
id: str

The model ID (example: “model_eyVcnFJcR92BxBkz7N6g5w”)

capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]

List of model capabilities (example: [“txt2img”, “img2img”, “txt2img_ip_adapter”, …])

One of the following:
"3d23d"
"audio2audio"
"audio2video"
"controlnet"
"controlnet_img2img"
"controlnet_inpaint"
"controlnet_inpaint_ip_adapter"
"controlnet_ip_adapter"
"controlnet_reference"
"controlnet_texture"
"img23d"
"img2img"
"img2img_ip_adapter"
"img2img_texture"
"img2txt"
"img2video"
"inpaint"
"inpaint_ip_adapter"
"outpaint"
"reference"
"reference_texture"
"txt23d"
"txt2audio"
"txt2img"
"txt2img_ip_adapter"
"txt2img_texture"
"txt2txt"
"txt2video"
"video23d"
"video2audio"
"video2img"
"video2video"
collection_ids: List[str]

A list of CollectionId this model belongs to

created_at: str

The model creation date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

custom: bool

Whether the model is a custom model that can only be used with the POST /generate/custom/{modelId} endpoint

example_asset_ids: List[str]

List of all example asset IDs set up by the model owner

privacy: Literal["private", "public", "unlisted"]

The privacy of the model (default: private)

One of the following:
"private"
"public"
"unlisted"
source: Literal["civitai", "huggingface", "other", "scenario"]

The source of the model

One of the following:
"civitai"
"huggingface"
"other"
"scenario"
status: Literal["copying", "failed", "new", 3 more]

The model status

One of the following:
"copying"
"failed"
"new"
"trained"
"training"
"training-canceled"
tags: List[str]

The associated tags (example: [“sci-fi”, “landscape”])

training_images_number: float

The total number of training images

type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]

The model type (example: “flux.1-lora”)

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
updated_at: str

The model last update date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

access_restrictions: Optional[Literal[0, 100, 25, 2 more]]

The access restrictions of the model:

  • 0: Free plan
  • 25: Creator plan
  • 50: Pro plan
  • 75: Team plan
  • 100: Enterprise plan

One of the following:
0
100
25
50
75
author_id: Optional[str]

The author user ID (example: “user_VFhihHKMRZyDDnZAJwLb2Q”)

class_: Optional[ModelClass]

The class of the model

category: str

The category slug of the class (example: “art-style”)

concept_prompt: str

The concept prompt of the class (example: “a sks character design”)

model_id: str

The model ID of the class (example: “stable-diffusion-v1-5”)

name: str

The class name (example: “Character Design”)

prompt: str

The class prompt (example: “a character design”)

slug: str

The class slug (example: “art-style-character-design”)

status: Literal["published", "unpublished"]

The class status (only published classes are listed, but unpublished classes can still appear in existing models)

One of the following:
"published"
"unpublished"
thumbnails: List[str]

Some example image URLs to showcase the class

compliant_model_ids: Optional[List[str]]

List of base model IDs compliant with the model (example: [“flux.1-dev”, “flux.1-schnell”]). This attribute is mainly used for Flux LoRA models

concepts: Optional[List[ModelConcept]]

Concepts are required for composition-type models

model_id: str

The model ID (example: “model_eyVcnFJcR92BxBkz7N6g5w”)

scale: float

The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2.

maximum: 2
minimum: -2
model_epoch: Optional[str]

The epoch of the model (example: “000001”). Only available for Flux LoRA trained models

epoch: Optional[str]

The epoch of the model. Only available for Flux LoRA trained models. If not set, uses the final model epoch (latest)

epochs: Optional[List[ModelEpoch]]

The epochs of the model. Only available for Flux LoRA trained models.

epoch: str

The epoch hash to identify the epoch

assets: Optional[List[ModelEpochAsset]]

The assets of the epoch, if sample prompts have been supplied during training

asset_id: str

The AssetId of the image during training (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”)

url: str

The URL of the asset

inputs: Optional[List[ModelInput]]

The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId}
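
A quick way to inspect this list (a sketch, assuming the SDK exposes a retrieve method for GET /models/{modelId}):

model = client.models.retrieve("modelId").model
for inp in model.inputs or []:
    print(inp.name, inp.type)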

name: str

The name that must be used to call the model through the API

type: Literal["boolean", "file", "file_array", 7 more]

The data type of the input

One of the following:
"boolean"
"file"
"file_array"
"inputs_array"
"model"
"model_array"
"number"
"number_array"
"string"
"string_array"
allowed_values: Optional[List[object]]

The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown.

background_behavior: Optional[Literal["opaque", "transparent"]]

Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`.

One of the following:
"opaque"
"transparent"
color: Optional[bool]

Whether the input is a color or not. Only available for `string` input type.

cost_impact: Optional[bool]

Whether this input affects the model’s cost calculation

default: Optional[object]

The default value for the input

description: Optional[str]

Help text displayed in the UI to provide additional information about the input

group: Optional[str]

Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI.

hint: Optional[str]

Hint text displayed in the UI as a tooltip to guide the user

inputs: Optional[List[Dict[str, object]]]

The list of inputs that form an object within a container array. Each input has the same shape as the current object. Only available for `inputs_array` input type.

kind: Optional[Literal["3d", "audio", "document", 4 more]]

The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that lacks the `data:<kind>,` prefix.

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
label: Optional[str]

The label displayed in the UI for this input

mask_from: Optional[str]

The name of the file input field to use as the mask source

max: Optional[float]

The maximum allowed value. Only available for `number` and `array` input types.

max_length: Optional[float]

The maximum allowed length for `string` inputs. Also applies to each item in `string_array`.

max_size: Optional[float]

The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time.

min: Optional[float]

The minimum allowed value. Only available for `number` and array input types.

min_length: Optional[float]

The minimum allowed length for string inputs. Also applies to each item in `string_array`.

model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]

The allowed model types for this input. Example: `[“flux.1-lora”]`. Only available for `model_array` input type.

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
parent: Optional[bool]

Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types.

For `file_array`, the parent asset is the first item in the array.

placeholder: Optional[str]

Placeholder text for the input. Only available for ‘string’ input type.

prompt: Optional[bool]

Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type.

prompt_spark: Optional[bool]

Whether the input is used with prompt spark. Only available for `string` input type.

required: Optional[ModelInputRequired]

Set of rules that describes when this input is required:

  • `always`: Input is always required
  • `ifNotDefined`: Input is required when another specified input is not defined
  • `ifDefined`: Input is required when another specified input is defined
  • `conditionalValues`: Input is required when another input has a specific value

By default, the input is not required.

always: Optional[bool]

Whether the input is always required

conditional_values: Optional[object]

Makes this input required when another input has a specific value:

  • Key: name of the input to check
  • Value: operation and allowed values that trigger the requirement
if_defined: Optional[object]

Makes this input required when another input is defined:

  • Key: name of the input that must be defined
  • Value: message to display when this input is required
if_not_defined: Optional[object]

Makes this input required when another input is not defined:

  • Key: name of the input that must be undefined
  • Value: message to display when this input is required
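
As an illustration, a required configuration combining these rules might look like the following (the key names follow the camelCase JSON shown in the examples; the input name "image" and the message are hypothetical):

required = {
    "always": False,
    "ifNotDefined": {"image": "A prompt is required when no image is provided"},
}
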
step: Optional[float]

The step increment for numeric inputs. Only available for `number` input type.

minimum: 1
model_keyword: Optional[str]

The model keyword. This is a legacy parameter; please use conceptPrompt in parameters instead

name: Optional[str]

The model name (example: “Cinematic Realism”)

negative_prompt_embedding: Optional[str]

Fine-tune the model’s inferences with negative prompt embedding

owner_id: Optional[str]

The owner ID (example: “team_VFhihHKMRZyDDnZAJwLb2Q”)

parameters: Optional[ModelParameters]

The parameters of the model

age: Optional[str]

Age group of the voice (for professional cloning)

Only available for ElevenLabs voice training

batch_size: Optional[float]

The batch size. A larger batch size results in fewer steps and increases the learning rate.

Only available for Flux LoRA training

maximum: 4
minimum: 1
class_prompt: Optional[str]

The prompt to specify images in the same class as provided instance images

Only available for SD15 training

clone_type: Optional[str]

Type of voice cloning: “instant” (fast) or “professional” (higher quality, requires captcha)

Only available for ElevenLabs voice training

concept_prompt: Optional[str]

The prompt with identifier specifying the instance (or subject) of the class (example: “a daiton dog”)

Default value varies depending on the model type:

  • For SD1.5: “daiton” if no class is associated with the model
  • For SDXL: “daiton”
  • For Flux: ""
gender: Optional[str]

Gender of the voice (for professional cloning)

Only available for ElevenLabs voice training

language: Optional[str]

Language of the audio samples (ISO 639-1 code)

Only available for ElevenLabs voice training

learning_rate: Optional[float]

Initial learning rate (after the potential warmup period)

Default value varies depending on the model type:

  • For SD1.5 and SDXL: 0.000005
  • For Flux: 0.0001
minimum: 0 (exclusive)
learning_rate_text_encoder: Optional[float]

Initial learning rate (after the potential warmup period) for the text encoder

  • Maximum: Flux LoRA: 0.001
  • Default: SDXL: 0.00005 | Flux LoRA: 0.00001
  • Minimum: SDXL: 0 | Flux LoRA: 0.000001

maximum: 0.001
minimum: 0 (exclusive)
learning_rate_unet: Optional[float]

Initial learning rate (after the potential warmup period) for the UNet

Only available for SDXL LoRA training

minimum: 0 (exclusive)
lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]

The scheduler type to use (default: “constant”)

Only available for SD15 and SDXL LoRA training

One of the following:
"constant"
"constant-with-warmup"
"cosine"
"cosine-with-restarts"
"linear"
"polynomial"
max_train_steps: Optional[float]

Maximum number of training steps to execute (default: varies depending on the model type)

For SDXL LoRA training, please use numTextTrainSteps and numUNetTrainSteps instead

Default value varies depending on the model type:

  • For SD1.5: round((number of training images * 225) / 3)
  • For SDXL: number of training images * 175
  • For Flux: number of training images * 100

Maximum value varies depending on the model type:

  • For SD1.5 and SDXL: [0, 40000]
  • For Flux: [0, 10000]
maximum: 40000
minimum: 0
nb_epochs: Optional[float]

The number of epochs to train for

Only available for Flux LoRA training

maximum: 30
minimum: 1
nb_repeats: Optional[float]

The number of times to repeat the training

Only available for Flux LoRA training

maximum: 30
minimum: 1
num_text_train_steps: Optional[float]

The number of training steps for the text encoder

Only available for SDXL LoRA training

maximum: 40000
minimum: 0
num_u_net_train_steps: Optional[float]

The number of training steps for the UNet

Only available for SDXL LoRA training

maximum: 40000
minimum: 0
optimize_for: Optional[Literal["likeness"]]

Optimize the model training task for a specific type of input images. The available values are:

  • “likeness”: optimize training for likeness or portrait (targets specific transformer blocks)
  • “all”: train all transformer blocks
  • “none”: train no specific transformer blocks

This parameter controls which double and single transformer blocks are trained during the LoRA training process.

Only available for Flux LoRA training

prior_loss_weight: Optional[float]

The weight of prior preservation loss

Only available for SD15 and SDXL LoRA training

maximum: 1.7976931348623157
minimum: 0 (exclusive)
random_crop: Optional[bool]

Whether to random crop or center crop images before resizing to the working resolution

Only available for SD15 and SDXL LoRA training

random_crop_ratio: Optional[float]

Ratio of random crops

Only available for SD15 and SDXL LoRA training

maximum: 1
minimum: 0
random_crop_scale: Optional[float]

Scale of random crops

Only available for SD15 and SDXL LoRA training

maximum: 1
minimum: 0
rank: Optional[float]

The dimension of the LoRA update matrices

Only available for SDXL (deprecated), Flux LoRA and Musubi training

Default value varies depending on the model type:

  • For SDXL (deprecated): 64
  • For Flux: 16
  • For Musubi: 64

Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128])

maximum: 128
minimum: 2
remove_background_noise: Optional[bool]

Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long.

Only available for ElevenLabs voice training

sample_prompts: Optional[List[str]]

The prompts to use for each epoch. Only available for Flux LoRA training

sample_source_images: Optional[List[str]]

The sample prompt images (AssetIds) paired with samplePrompts; must be the same length as samplePrompts. Only available for Flux LoRA training

scale_lr: Optional[bool]

Whether to scale the learning rate

Note: Legacy parameter, will be ignored

Only available for SD15 and SDXL LoRA training

seed: Optional[float]

Used to reproduce previous results. Default: randomly generated number.

Only available for SD15 and SDXL LoRA training

maximum: 9007199254740991
minimum: 0
text_encoder_training_ratio: Optional[float]

The ratio of training steps allotted to the text encoder.

Example: for 100 steps and a value of 0.2, the text encoder is trained for 20 steps and then the UNet for 80 steps

Note: Legacy parameter, please use numTextTrainSteps and numUNetTrainSteps

Only available for SD15 and SDXL LoRA training

maximum: 0.99
minimum: 0
validation_frequency: Optional[float]

Validation frequency. Cannot be greater than maxTrainSteps value

Only available for SD15 and SDXL LoRA training

minimum: 0
validation_prompt: Optional[str]

Validation prompt

Only available for SD15 and SDXL LoRA training

voice_description: Optional[str]

Description of the voice characteristics

Only available for ElevenLabs voice training

wandb_key: Optional[str]

The Weights & Biases key to use for logging. The maximum length is 40 characters

parent_model_id: Optional[str]

The ID of the parent model

performance_stats: Optional[ModelPerformanceStats]

Aggregated performance stats

variants: List[ModelPerformanceStatsVariant]

Performance metrics per variant

capability: str

The generation capability (example: “txt2img”, “img2video”, “txt2audio”)

computed_at: str

When these stats were last computed (ISO date)

variant_key: str

Unique variant identifier (example: “txt2img:1K”, “img2video:2K”, “txt2audio”)

arena_score: Optional[ModelPerformanceStatsVariantArenaScore]

External quality score from arena.ai leaderboard

arena_category: str

Arena category (example: “text_to_image”, “image_to_video”)

arena_model_name: str

Model name on arena.ai

fetched_at: str

When this score was last fetched (ISO date)

rank: float

Rank in the arena category

rating: float

ELO rating

rating_lower: float

ELO rating confidence interval lower bound

rating_upper: float

ELO rating confidence interval upper bound

votes: float

Number of human votes

cost_per_asset_max_cu: Optional[float]

Maximum cost per output asset (CU)

cost_per_asset_min_cu: Optional[float]

Minimum cost per output asset (CU)

cost_per_asset_p50_cu: Optional[float]

Median cost per output asset (CU)

inference_latency_p50_sec: Optional[float]

Inference latency P50 per output asset (seconds)

inference_latency_p75_sec: Optional[float]

Inference latency P75 per output asset (seconds)

resolution: Optional[str]

The resolution bucket (example: “0.5K”, “1K”, “2K”, “4K”)

total_latency_p50_sec: Optional[float]

Total latency P50 per output asset, including queue time (seconds)

total_latency_p75_sec: Optional[float]

Total latency P75 per output asset, including queue time (seconds)

default: Optional[str]

Default variant key for quick model comparison

prompt_embedding: Optional[str]

Fine-tune the model’s inferences with prompt embedding

short_description: Optional[str]

The model short description (example: “This model generates highly detailed cinematic scenes.”)

soft_deletion_on: Optional[str]

The date when the model will be soft deleted (only for Free plan)

thumbnail: Optional[ModelThumbnail]

A thumbnail for your model

asset_id: str

The AssetId of the image used as a thumbnail for your model (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”)

url: str

The URL of the image used as a thumbnail for your model

training_image_pairs: Optional[List[ModelTrainingImagePair]]

Array of training image pairs

instruction: Optional[str]

The instruction for the image pair, source to target

source_id: Optional[str]

The source asset ID (must be a training asset)

target_id: Optional[str]

The target asset ID (must be a training asset)

training_images: Optional[List[ModelTrainingImage]]

The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId

id: str

The training image ID (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”)

automatic_captioning: str

Automatic captioning of the image

created_at: str

The training image upload date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

description: str

Description for the image

download_url: str

The URL of the image

name: str

The original file name of the image (example: “my-training-image.jpg”)

training_progress: Optional[ModelTrainingProgress]

Additional information about the training progress of the model

stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]

The stage of the request

One of the following:
"pending"
"pending-captcha"
"queued-for-train"
"running-train"
"starting-train"
updated_at: float

Timestamp in milliseconds of the last time the training progress was updated

position: Optional[float]

Position of the job in the queue (i.e. the number of jobs in the queue before this one)

progress: Optional[float]

The progress of the job

maximum: 1
minimum: 0
remaining_time_ms: Optional[float]

The remaining time in milliseconds

started_at: Optional[float]

The timestamp in milliseconds marking the start of the process
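
For example, training progress could be polled like this (a sketch, assuming the SDK exposes a retrieve method for GET /models/{modelId}):

import time

while True:
    m = client.models.retrieve("modelId").model
    tp = m.training_progress
    if m.status != "training" or tp is None:
        break
    print(tp.stage, tp.progress)  # progress is in [0, 1]
    time.sleep(10)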

training_stats: Optional[ModelTrainingStats]

Additional information about the model’s training

ended_at: Optional[str]

The training end time as an ISO date string

queue_duration: Optional[float]

The duration the training spent queued, in seconds

started_at: Optional[str]

The training start time as an ISO date string

train_duration: Optional[float]

The training duration in seconds

ui_config: Optional[ModelUiConfig]

The UI configuration for the model

input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]

Configuration for the input properties

collapsed: Optional[bool]
loras_component: Optional[ModelUiConfigLorasComponent]

Configuration for the loras component

label: str

The label of the component

model_input: str

The input name of the model (model_array)

scale_input: str

The input name of the scale (number_array)

model_id_input: Optional[str]

The input model ID (example: a composition or a single LoRA modelId). If specified, the model ID will be attached to the output asset as metadata. If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated.

presets: Optional[List[ModelUiConfigPreset]]

Configuration for the presets

fields: List[str]
presets: object
resolution_component: Optional[ModelUiConfigResolutionComponent]

Configuration for the resolution component

height_input: str

The input name of the height

label: str

The label of the component

presets: List[ModelUiConfigResolutionComponentPreset]

The resolution presets

height: float
label: str
width: float
width_input: str

The input name of the width

selects: Optional[Dict[str, object]]

Configuration for the selects

trigger_generate: Optional[ModelUiConfigTriggerGenerate]

Configuration for the trigger generate button

label: str
after: Optional[str]

The ‘name’ of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after.

position: Optional[Literal["bottom", "top"]]

The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after.

One of the following:
"bottom"
"top"
user_id: Optional[str]

(Deprecated) The user ID (example: “user_VFhihHKMRZyDDnZAJwLb2Q”)

Update

import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
# Any ModelUpdateParams field can be passed as a keyword argument.
model = client.models.update(
    model_id="modelId",
    name="Cinematic Realism",  # illustrative: rename the model
)
print(model.model)
Returns Examples
{
  "model": {
    "id": "id",
    "capabilities": [
      "3d23d"
    ],
    "collectionIds": [
      "string"
    ],
    "createdAt": "createdAt",
    "custom": true,
    "exampleAssetIds": [
      "string"
    ],
    "privacy": "private",
    "source": "civitai",
    "status": "copying",
    "tags": [
      "string"
    ],
    "trainingImagesNumber": 0,
    "type": "custom",
    "updatedAt": "updatedAt",
    "accessRestrictions": 0,
    "authorId": "authorId",
    "class": {
      "category": "category",
      "conceptPrompt": "conceptPrompt",
      "modelId": "modelId",
      "name": "name",
      "prompt": "prompt",
      "slug": "slug",
      "status": "published",
      "thumbnails": [
        "string"
      ]
    },
    "compliantModelIds": [
      "string"
    ],
    "concepts": [
      {
        "modelId": "modelId",
        "scale": -2,
        "modelEpoch": "modelEpoch"
      }
    ],
    "epoch": "epoch",
    "epochs": [
      {
        "epoch": "epoch",
        "assets": [
          {
            "assetId": "assetId",
            "url": "url"
          }
        ]
      }
    ],
    "inputs": [
      {
        "name": "name",
        "type": "boolean",
        "allowedValues": [
          {}
        ],
        "backgroundBehavior": "opaque",
        "color": true,
        "costImpact": true,
        "default": {},
        "description": "description",
        "group": "group",
        "hint": "hint",
        "inputs": [
          {
            "foo": "bar"
          }
        ],
        "kind": "3d",
        "label": "label",
        "maskFrom": "maskFrom",
        "max": 0,
        "maxLength": 0,
        "maxSize": 0,
        "min": 0,
        "minLength": 0,
        "modelTypes": [
          "custom"
        ],
        "parent": true,
        "placeholder": "placeholder",
        "prompt": true,
        "promptSpark": true,
        "required": {
          "always": true,
          "conditionalValues": {},
          "ifDefined": {},
          "ifNotDefined": {}
        },
        "step": 1
      }
    ],
    "modelKeyword": "modelKeyword",
    "name": "name",
    "negativePromptEmbedding": "negativePromptEmbedding",
    "ownerId": "ownerId",
    "parameters": {
      "age": "age",
      "batchSize": 1,
      "classPrompt": "classPrompt",
      "cloneType": "cloneType",
      "conceptPrompt": "conceptPrompt",
      "gender": "gender",
      "language": "language",
      "learningRate": 1,
      "learningRateTextEncoder": 0.0005,
      "learningRateUnet": 1,
      "lrScheduler": "constant",
      "maxTrainSteps": 0,
      "nbEpochs": 1,
      "nbRepeats": 1,
      "numTextTrainSteps": 0,
      "numUNetTrainSteps": 0,
      "optimizeFor": "likeness",
      "priorLossWeight": 1,
      "randomCrop": true,
      "randomCropRatio": 0,
      "randomCropScale": 0,
      "rank": 2,
      "removeBackgroundNoise": true,
      "samplePrompts": [
        "string"
      ],
      "sampleSourceImages": [
        "string"
      ],
      "scaleLr": true,
      "seed": 0,
      "textEncoderTrainingRatio": 0,
      "validationFrequency": 0,
      "validationPrompt": "validationPrompt",
      "voiceDescription": "voiceDescription",
      "wandbKey": "wandbKey"
    },
    "parentModelId": "parentModelId",
    "performanceStats": {
      "variants": [
        {
          "capability": "capability",
          "computedAt": "computedAt",
          "variantKey": "variantKey",
          "arenaScore": {
            "arenaCategory": "arenaCategory",
            "arenaModelName": "arenaModelName",
            "fetchedAt": "fetchedAt",
            "rank": 0,
            "rating": 0,
            "ratingLower": 0,
            "ratingUpper": 0,
            "votes": 0
          },
          "costPerAssetMaxCU": 0,
          "costPerAssetMinCU": 0,
          "costPerAssetP50CU": 0,
          "inferenceLatencyP50Sec": 0,
          "inferenceLatencyP75Sec": 0,
          "resolution": "resolution",
          "totalLatencyP50Sec": 0,
          "totalLatencyP75Sec": 0
        }
      ],
      "default": "default"
    },
    "promptEmbedding": "promptEmbedding",
    "shortDescription": "shortDescription",
    "softDeletionOn": "softDeletionOn",
    "thumbnail": {
      "assetId": "assetId",
      "url": "url"
    },
    "trainingImagePairs": [
      {
        "instruction": "instruction",
        "sourceId": "sourceId",
        "targetId": "targetId"
      }
    ],
    "trainingImages": [
      {
        "id": "id",
        "automaticCaptioning": "automaticCaptioning",
        "createdAt": "createdAt",
        "description": "description",
        "downloadUrl": "downloadUrl",
        "name": "name"
      }
    ],
    "trainingProgress": {
      "stage": "pending",
      "updatedAt": 0,
      "position": 0,
      "progress": 0,
      "remainingTimeMs": 0,
      "startedAt": 0
    },
    "trainingStats": {
      "endedAt": "endedAt",
      "queueDuration": 0,
      "startedAt": "startedAt",
      "trainDuration": 0
    },
    "uiConfig": {
      "inputProperties": {
        "foo": {
          "collapsed": true
        }
      },
      "lorasComponent": {
        "label": "label",
        "modelInput": "modelInput",
        "scaleInput": "scaleInput",
        "modelIdInput": "modelIdInput"
      },
      "presets": [
        {
          "fields": [
            "string"
          ],
          "presets": {}
        }
      ],
      "resolutionComponent": {
        "heightInput": "heightInput",
        "label": "label",
        "presets": [
          {
            "height": 0,
            "label": "label",
            "width": 0
          }
        ],
        "widthInput": "widthInput"
      },
      "selects": {
        "foo": {}
      },
      "triggerGenerate": {
        "label": "label",
        "after": "after",
        "position": "bottom"
      }
    },
    "userId": "userId"
  }
}