# Models

## List

`models.list(**kwargs: ModelListParams) -> SyncModelsCursor[ModelListResponse]`

**get** `/models`

List all models. Supports both public access (via the `Authorization` header set to `public-auth-token`) and authenticated user access (including API keys).

### Parameters

- `blacklisted: Optional[bool]` If set to true, returns the full list of models, including blacklisted models (only available for team admins)
- `collection_id: Optional[str]` When provided, only the models in the collection will be returned. Only available when privacy=private/unlisted (note: this is different from collectionIds, which is only for privacy=public)
- `collection_ids: Optional[str]` List of collection IDs, comma separated. Only available when privacy=public
- `created_after: Optional[str]` Filter results to only return models created after the specified ISO string date (exclusive). Requires the sortBy parameter to be "createdAt". Available for both privacy=public and privacy=private/unlisted
- `created_before: Optional[str]` Filter results to only return models created before the specified ISO string date (exclusive). Requires the sortBy parameter to be "createdAt". Available for both privacy=public and privacy=private/unlisted
- `loaded_only: Optional[bool]` If set to true, returns the list of models currently loaded on GPU
- `original_assets: Optional[bool]` If set to true, returns the original asset without transformation
- `page_size: Optional[int]` The number of items to return in the response. The default value is 100; the maximum is 500 and the minimum is 1
- `pagination_token: Optional[str]` A token you received in a previous request to query the next page of items
- `privacy: Optional[Literal["private", "public"]]` The privacy of the models to return.
The default value is `private`; possible values are `private` and `public`.
  - `"private"`
  - `"public"`
- `sort_by: Optional[Literal["createdAt", "updatedAt", "score"]]` Sort results by createdAt, updatedAt, or score. When privacy=public, defaults to score if not specified. When privacy=private/unlisted, supports createdAt and score (default: createdAt). When sortBy=score for privacy=private/unlisted, both the privacy and status query parameters are required.
  - `"createdAt"`
  - `"updatedAt"`
  - `"score"`
- `sort_direction: Optional[Literal["asc", "desc"]]` Sort results in ascending (asc) or descending (desc) order. Only used when sortBy is specified. Available for both privacy=public and privacy=private/unlisted. For public models, this parameter is ignored when sortBy is not specified or is set to score.
  - `"asc"`
  - `"desc"`
- `status: Optional[Literal["new", "training", "trained", "failed", "deleted"]]` The status of the models to return. Only available when privacy=private/unlisted
  - `"new"`
  - `"training"`
  - `"trained"`
  - `"failed"`
  - `"deleted"`
- `tags: Optional[str]` List of tags, comma separated. Only available when privacy=public
- `type: Optional[Literal["sd-1_5", "sd-1_5-lora", "sd-1_5-composition", 34 more]]` List all the models of a specific type. The parameters "type" and "types" cannot be used together.
Can be any of the following values (only available when privacy=public):
  - `"sd-1_5"`
  - `"sd-1_5-lora"`
  - `"sd-1_5-composition"`
  - `"sd-xl"`
  - `"sd-xl-lora"`
  - `"sd-xl-composition"`
  - `"flux.1"`
  - `"flux.1-lora"`
  - `"flux.1-kontext-dev"`
  - `"flux.1-krea-dev"`
  - `"flux.1-kontext-lora"`
  - `"flux.1-krea-lora"`
  - `"flux.1-composition"`
  - `"flux.1-pro"`
  - `"flux1.1-pro"`
  - `"flux.1.1-pro-ultra"`
  - `"flux.2-dev-lora"`
  - `"flux.2-dev-edit-lora"`
  - `"flux.2-klein-4b-lora"`
  - `"flux.2-klein-9b-lora"`
  - `"flux.2-klein-base-4b-lora"`
  - `"flux.2-klein-base-9b-lora"`
  - `"flux.2-klein-4b-edit-lora"`
  - `"flux.2-klein-9b-edit-lora"`
  - `"flux.2-klein-base-4b-edit-lora"`
  - `"flux.2-klein-base-9b-edit-lora"`
  - `"gpt-image-1"`
  - `"qwen-image-lora"`
  - `"qwen-image-2512-lora"`
  - `"qwen-image-edit-lora"`
  - `"qwen-image-edit-2509-lora"`
  - `"qwen-image-edit-2511-lora"`
  - `"zimage-lora"`
  - `"zimage-turbo-lora"`
  - `"zimage-de-turbo-lora"`
  - `"custom"`
  - `"elevenlabs-voice"`
- `types: Optional[List[Literal["sd-1_5", "sd-1_5-lora", "sd-1_5-composition", 34 more]]]` List of types, comma separated. Accepts the same values as "type". The parameters "type" and "types" cannot be used together. Only available when privacy=public
- `updated_after: Optional[str]` Filter results to only return models updated after the specified ISO string date (exclusive). Requires the sortBy parameter to be "updatedAt".
Only available when privacy=public
- `updated_before: Optional[str]` Filter results to only return models updated before the specified ISO string date (exclusive). Requires the sortBy parameter to be "updatedAt". Only available when privacy=public

### Returns

- `class ModelListResponse: …`
- `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w")
- `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...])
  - `"3d23d"`
  - `"audio2audio"`
  - `"audio2video"`
  - `"controlnet"`
  - `"controlnet_img2img"`
  - `"controlnet_inpaint"`
  - `"controlnet_inpaint_ip_adapter"`
  - `"controlnet_ip_adapter"`
  - `"controlnet_reference"`
  - `"controlnet_texture"`
  - `"img23d"`
  - `"img2img"`
  - `"img2img_ip_adapter"`
  - `"img2img_texture"`
  - `"img2txt"`
  - `"img2video"`
  - `"inpaint"`
  - `"inpaint_ip_adapter"`
  - `"outpaint"`
  - `"reference"`
  - `"reference_texture"`
  - `"txt23d"`
  - `"txt2audio"`
  - `"txt2img"`
  - `"txt2img_ip_adapter"`
  - `"txt2img_texture"`
  - `"txt2txt"`
  - `"txt2video"`
  - `"video23d"`
  - `"video2audio"`
  - `"video2img"`
  - `"video2video"`
- `collection_ids: List[str]` A list of CollectionIds this model belongs to
- `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `custom: bool` Whether the model is a custom model that can only be used with the POST /generate/custom/{modelId} endpoint
- `example_asset_ids: List[str]` List of all example asset IDs set up by the model owner
- `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private)
  - `"private"`
  - `"public"`
  - `"unlisted"`
- `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model
  - `"civitai"`
  - `"huggingface"`
  - `"other"`
  - `"scenario"`
- `status: Literal["copying", "failed", "new", "trained", "training", "training-canceled"]` The model status
  - `"copying"`
  - `"failed"`
  - `"new"`
  - `"trained"`
  - `"training"`
  - `"training-canceled"`
- `tags: List[str]`
The associated tags (example: ["sci-fi", "landscape"])
- `training_images_number: float` The total number of training images
- `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora")
  - `"custom"`
  - `"elevenlabs-voice"`
  - `"flux.1"`
  - `"flux.1-composition"`
  - `"flux.1-kontext-dev"`
  - `"flux.1-kontext-lora"`
  - `"flux.1-krea-dev"`
  - `"flux.1-krea-lora"`
  - `"flux.1-lora"`
  - `"flux.1-pro"`
  - `"flux.1.1-pro-ultra"`
  - `"flux.2-dev-edit-lora"`
  - `"flux.2-dev-lora"`
  - `"flux.2-klein-4b-edit-lora"`
  - `"flux.2-klein-4b-lora"`
  - `"flux.2-klein-9b-edit-lora"`
  - `"flux.2-klein-9b-lora"`
  - `"flux.2-klein-base-4b-edit-lora"`
  - `"flux.2-klein-base-4b-lora"`
  - `"flux.2-klein-base-9b-edit-lora"`
  - `"flux.2-klein-base-9b-lora"`
  - `"flux1.1-pro"`
  - `"gpt-image-1"`
  - `"qwen-image-2512-lora"`
  - `"qwen-image-edit-2509-lora"`
  - `"qwen-image-edit-2511-lora"`
  - `"qwen-image-edit-lora"`
  - `"qwen-image-lora"`
  - `"sd-1_5"`
  - `"sd-1_5-composition"`
  - `"sd-1_5-lora"`
  - `"sd-xl"`
  - `"sd-xl-composition"`
  - `"sd-xl-lora"`
  - `"zimage-de-turbo-lora"`
  - `"zimage-lora"`
  - `"zimage-turbo-lora"`
- `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `access_restrictions: Optional[Literal[0, 25, 50, 75, 100]]` The access restrictions of the model. 0: Free plan, 25: Creator plan, 50: Pro plan, 75: Team plan, 100: Enterprise plan
- `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")
- `class_: Optional[Class]` The class of the model
  - `category: str` The category slug of the class (example: "art-style")
  - `concept_prompt: str` The concept prompt of the class (example: "a sks character design")
  - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5")
  - `name: str` The class name (example: "Character Design")
  - `prompt: str` The class prompt (example: "a character design")
  - `slug: str` The class slug (example: "art-style-character-design")
  - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models)
  - `thumbnails: List[str]` Some example image URLs to showcase the class
- `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]). This attribute is mainly used for Flux LoRA models
- `concepts: Optional[List[Concept]]` Concepts are required for composition model types
  - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w")
  - `scale: float` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2.
  - `model_epoch: Optional[str]` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models
- `epoch: Optional[str]` The epoch of the model. Only available for Flux LoRA trained models. If not set, uses the final model epoch (latest)
- `epochs: Optional[List[Epoch]]` The epochs of the model. Only available for Flux LoRA trained models.
  - `epoch: str` The epoch hash to identify the epoch
  - `assets: Optional[List[EpochAsset]]` The assets of the epoch if sample prompts have been supplied during training
    - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
    - `url: str` The url of the asset
- `inputs: Optional[List[Input]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId}
  - `name: str` The name that must be used to call the model through the API
  - `type: Literal["boolean", "file", "file_array", "inputs_array", "model", "model_array", "number", "number_array", "string", "string_array"]` The data type of the input
  - `allowed_values: Optional[List[object]]` The allowed values for the input.
For `string` or `number` types, creates a single-select dropdown. For the `string_array` type, creates a multi-select dropdown.
  - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`.
  - `color: Optional[bool]` Whether the input is a color or not. Only available for the `string` input type.
  - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation
  - `default: Optional[object]` The default value for the input
  - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input
  - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI.
  - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user
  - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for `inputs_array` inputs.
  - `kind: Optional[Literal["3d", "audio", "document", "image", "image-hdr", "json", "video"]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes a `data:<kind>` prefix
  - `label: Optional[str]` The label displayed in the UI for this input
  - `mask_from: Optional[str]` The name of the file input field to use as the mask source
  - `max: Optional[float]` The maximum allowed value. Only available for `number` and array input types.
  - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`.
  - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time.
  - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types.
  - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`.
  - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Accepts the same values as the model `type` field. Only available for the `model_array` input type.
  - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array.
  - `placeholder: Optional[str]` Placeholder text for the input. Only available for the `string` input type.
  - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with the prompt spark feature. Only available for the `string` input type.
  - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for the `string` input type.
  - `required: Optional[InputRequired]` Set of rules that describe when this input is required:
    - `always`: the input is always required
    - `ifNotDefined`: the input is required when another specified input is not defined
    - `ifDefined`: the input is required when another specified input is defined
    - `conditionalValues`: the input is required when another input has a specific value
    By default, the input is not required.
    - `always: Optional[bool]` Whether the input is always required
    - `conditional_values: Optional[object]` Makes this input required when another input has a specific value. Key: name of the input to check; value: operation and allowed values that trigger the requirement
    - `if_defined: Optional[object]` Makes this input required when another input is defined. Key: name of the input that must be defined; value: message to display when this input is required
    - `if_not_defined: Optional[object]` Makes this input required when another input is not defined. Key: name of the input that must be undefined; value: message to display when this input is required
  - `step: Optional[float]` The step increment for numeric inputs. Only available for the `number` input type.
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters
- `name: Optional[str]` The model name (example: "Cinematic Realism")
- `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with a negative prompt embedding
- `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q")
- `parameters: Optional[Parameters]` The parameters of the model
  - `age: Optional[str]` Age group of the voice (for professional cloning). Only available for ElevenLabs voice training
  - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and will increase the learning rate. Only available for Flux LoRA training
  - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images. Only available for SD15 training
  - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha). Only available for ElevenLabs voice training
  - `concept_prompt: Optional[str]` The prompt with an identifier specifying the instance (or subject) of the class (example: "a daiton dog"). The default value varies depending on the model type: for SD1.5, "daiton" if no class is associated with the model; for SDXL, "daiton"; for Flux, ""
  - `gender: Optional[str]` Gender of the voice (for professional cloning). Only available for ElevenLabs voice training
  - `language: Optional[str]` Language of the audio samples (ISO 639-1 code). Only available for ElevenLabs voice training
  - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period). The default value varies depending on the model type: for SD1.5 and SDXL, 0.000005; for Flux, 0.0001
  - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder. Maximum [Flux LoRA: 0.001]; default [SDXL: 0.00005 | Flux LoRA: 0.00001]; minimum [SDXL: 0 | Flux LoRA: 0.000001]
  - `learning_rate_unet: Optional[float]`
Initial learning rate (after the potential warmup period) for the UNet. Only available for SDXL LoRA training
  - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", "cosine-with-restarts", "linear", "polynomial"]]` The scheduler type to use (default: "constant"). Only available for SD15 and SDXL LoRA training
  - `max_train_steps: Optional[float]` Maximum number of training steps to execute. For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead. The default value varies depending on the model type: for SD1.5, round((number of training images * 225) / 3); for SDXL, number of training images * 175; for Flux, number of training images * 100. The allowed range also varies: for SD1.5 and SDXL, [0, 40000]; for Flux, [0, 10000]
  - `nb_epochs: Optional[float]` The number of epochs to train for. Only available for Flux LoRA training
  - `nb_repeats: Optional[float]` The number of times to repeat the training. Only available for Flux LoRA training
  - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder. Only available for SDXL LoRA training
  - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet. Only available for SDXL LoRA training
  - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: "likeness" (optimize training for likeness or portrait, targeting specific transformer blocks), "all" (train all transformer blocks), and "none" (train no specific transformer blocks). This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training
  - `prior_loss_weight: Optional[float]` The weight of the prior preservation loss. Only available for SD15 and SDXL LoRA training
  - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution. Only available for SD15 and SDXL LoRA training
  - `random_crop_ratio: Optional[float]` Ratio of random crops. Only available for SD15 and SDXL LoRA training
  - `random_crop_scale: Optional[float]` Scale of random crops. Only available for SD15 and SDXL LoRA training
  - `rank: Optional[float]` The dimension of the LoRA update matrices. Only available for SDXL (deprecated), Flux LoRA, and Musubi training. The default value varies depending on the model type: for SDXL (deprecated), 64; for Flux, 16; for Musubi, 64. Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128])
  - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training
  - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch. Only available for Flux LoRA training
  - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts. Must be the same length as samplePrompts. Only available for Flux LoRA training
  - `scale_lr: Optional[bool]` Whether to scale the learning rate. Note: legacy parameter, will be ignored. Only available for SD15 and SDXL LoRA training
  - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number.
Only available for SD15 and SDXL LoRA training
  - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not. Example: for 100 steps and a value of 0.2, the text encoder will be trained for 20 steps and then the UNet for 80 steps. Note: legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps`. Only available for SD15 and SDXL LoRA training
  - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than the maxTrainSteps value. Only available for SD15 and SDXL LoRA training
  - `validation_prompt: Optional[str]` Validation prompt. Only available for SD15 and SDXL LoRA training
  - `voice_description: Optional[str]` Description of the voice characteristics. Only available for ElevenLabs voice training
  - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters
- `parent_model_id: Optional[str]` The ID of the parent model
- `performance_stats: Optional[PerformanceStats]` Aggregated performance stats
  - `variants: List[PerformanceStatsVariant]` Performance metrics per variant
    - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio")
    - `computed_at: str` When these stats were last computed (ISO date)
    - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio")
    - `arena_score: Optional[PerformanceStatsVariantArenaScore]` External quality score from the arena.ai leaderboard
      - `arena_category: str` Arena category (example: "text_to_image", "image_to_video")
      - `arena_model_name: str` Model name on arena.ai
      - `fetched_at: str` When this score was last fetched (ISO date)
      - `rank: float` Rank in the arena category
      - `rating: float` ELO rating
      - `rating_lower: float` ELO rating confidence interval lower bound
      - `rating_upper: float` ELO rating confidence interval upper bound
      - `votes: float` Number of human votes
    - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU)
    - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU)
    - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU)
    - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds)
    - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds)
    - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K")
    - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds)
    - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds)
  - `default: Optional[str]` Default variant key for quick model comparison
- `prompt_embedding: Optional[str]` Fine-tune the model's inferences with a prompt embedding
- `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.")
- `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for the Free plan)
- `thumbnail: Optional[Thumbnail]` A thumbnail for your model
  - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
  - `url: str` The URL of the image used as a thumbnail for your model
- `training_image_pairs: Optional[List[TrainingImagePair]]` Array of training image pairs
  - `instruction: Optional[str]` The instruction for the image pair, source to target
  - `source_id: Optional[str]` The source asset ID (must be a training asset)
  - `target_id: Optional[str]` The target asset ID (must be a training asset)
- `training_images: Optional[List[TrainingImage]]` The URLs of the first 3 training images of the model.
To retrieve the full set of images, get it by modelId
  - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
  - `automatic_captioning: str` Automatic captioning of the image
  - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z")
  - `description: str` Description for the image
  - `download_url: str` The URL of the image
  - `name: str` The original file name of the image (example: "my-training-image.jpg")
- `training_progress: Optional[TrainingProgress]` Additional information about the training progress of the model
  - `stage: Literal["pending", "pending-captcha", "queued-for-train", "running-train", "starting-train"]` The stage of the request
  - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated
  - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one)
  - `progress: Optional[float]` The progress of the job
  - `remaining_time_ms: Optional[float]` The remaining time in milliseconds
  - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process
- `training_stats: Optional[TrainingStats]` Additional information about the model's training
  - `ended_at: Optional[str]` The training end time as an ISO date string
  - `queue_duration: Optional[float]` The training queue duration in seconds
  - `started_at: Optional[str]` The training start time as an ISO date string
  - `train_duration: Optional[float]` The training duration in seconds
- `ui_config: Optional[UiConfig]` The UI configuration for the model
  - `input_properties: Optional[Dict[str, UiConfigInputProperties]]` Configuration for the input properties
    - `collapsed: Optional[bool]`
  - `loras_component: Optional[UiConfigLorasComponent]` Configuration for the loras component
    - `label: str` The label of the component
    - `model_input: str` The input name of the model (model_array)
    - `scale_input: str` The input name of the scale (number_array)
    - `model_id_input: Optional[str]` The input model ID (example: a composition or a single LoRA modelId). If specified, the model ID will be attached to the output asset as metadata. If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated
  - `presets: Optional[List[UiConfigPreset]]` Configuration for the presets
    - `fields: List[str]`
    - `presets: object`
  - `resolution_component: Optional[UiConfigResolutionComponent]` Configuration for the resolution component
    - `height_input: str` The input name of the height
    - `label: str` The label of the component
    - `presets: List[UiConfigResolutionComponentPreset]` The resolution presets
      - `height: float`
      - `label: str`
      - `width: float`
    - `width_input: str` The input name of the width
  - `selects: Optional[Dict[str, object]]` Configuration for the selects
  - `trigger_generate: Optional[UiConfigTriggerGenerate]` Configuration for the trigger generate button
    - `label: str`
    - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after.
    - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at the specified position. Do not specify both position and after.
  - `"bottom"`
  - `"top"`
- `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

### Example

```python
import os

from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
page = client.models.list()
model = page.models[0]
print(model.id)
```

#### Response

```json
{ "models": [ { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", "ownerId": "ownerId",
"parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, "trainingStats": { "endedAt": 
"endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" } ], "nextPaginationToken": "nextPaginationToken" } ``` ## Create `models.create(ModelCreateParams**kwargs) -> ModelCreateResponse` **post** `/models` Create a new model ### Parameters - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation - `base_model_id: Optional[str]` The ID of the base model to use as a starting point for the training (example: "flux.1-dev") Value is automatically set based on the model's type. In case of doubt leave it empty. - `class_slug: Optional[str]` The slug of the class you want to use (ex: "characters-npcs-mobs-characters"). Set to null to unset the class - `collection_ids: Optional[Sequence[str]]` List of collection IDs to add the model to - `concepts: Optional[Iterable[Concept]]` The concepts is required for composition models. With one or more loras Only applicable to Flux based models (and older SD1.5 and SDXL models) - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `name: Optional[str]` The model's name (ex: "Cinematic Realism"). 
If not set, the model's name will be automatically generated when starting training based on training data. - `short_description: Optional[str]` The model's short description (ex: "This model generates highly detailed cinematic scenes."). If not set, the model's short description will be automatically generated when starting training based on training data. - `type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]` The model's type (ex: "flux.1-lora"). The type can only be changed if the model has the "new" status. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` ### Returns - `class ModelCreateResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - 
`"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - 
`"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` Concepts are required for composition-type models - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. 
- `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux LoRA trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux LoRA trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux LoRA trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. 
- `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model allows multiple kinds, the API will not be able to create the asset on the fly from a data URL that lacks the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. 
- `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
- `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use `conceptPrompt` in `parameters` instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. Increasing it results in fewer steps and a higher learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: 
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps allocated to the text encoder. Example: for 100 steps and a value of 0.2, the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` Elo rating - `rating_lower: float` Elo rating confidence interval lower bound - `rating_upper: float` Elo rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output 
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. 
To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` 
The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId). If specified, the model id will be attached to the output asset as metadata. If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at the specified position. Do not specify both position and after. 
- `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) response = client.models.create() print(response.model) ``` #### Response ```json { "model": { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", "ownerId": "ownerId", "parameters": { "age": "age", 
"batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, "trainingStats": { "endedAt": "endedAt", "queueDuration": 0, 
"startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" } }
```

## Get Bulk

`models.get_bulk(**kwargs: ModelGetBulkParams) -> ModelGetBulkResponse`

**post** `/models/get-bulk`

Get multiple models by their `modelIds`

### Parameters

- `original_assets: Optional[bool]` If set to true, returns the original asset without transformation
- `all_training_images: Optional[bool]` If true, returns all training images; otherwise only the first 3 training images are returned. If `trainingImagesPreview` is set to true, this parameter is ignored.
- `minimal: Optional[bool]` If true, returns only the base details of the model (id, name, type); all other parameters are then ignored
- `model_ids: Optional[Sequence[str]]` The list of model IDs to include in the response
- `settings: Optional[bool]` If true, will return the settings: `promptEmbedding` and `negativePromptEmbedding`.
- `thumbnail: Optional[bool]` If true, returns the thumbnail; when no thumbnail is set, will try to fetch the first training image instead.
- `training_images_preview: Optional[bool]` If true, returns the first 3 training images; otherwise the full set of training images is returned. If `allTrainingImages` is set to true, this parameter is ignored.
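The three image-related flags above interact, and the descriptions of `allTrainingImages` and `trainingImagesPreview` are circular (each says it is ignored when the other is set). Purely for illustration, the sketch below fixes one consistent reading — `minimal` first, then `trainingImagesPreview`, then `allTrainingImages`; the helper function is hypothetical and not part of the SDK:

```python
def training_images_mode(minimal=False, all_training_images=False,
                         training_images_preview=False):
    """Resolve which training images a get-bulk call would include,
    under one assumed reading of the flag precedence described above."""
    if minimal:
        # minimal=True returns only id/name/type; every other flag is ignored.
        return "none"
    if training_images_preview:
        # Assumption: the preview flag takes precedence over allTrainingImages.
        return "first-3"
    if all_training_images:
        return "all"
    return "first-3"  # default: only the first 3 training images
```

If both flags are ever sent as true, prefer sending only one of them, since the documented behavior is ambiguous.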
### Returns - `class ModelGetBulkResponse: …` - `models: List[Model]` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `capabilities: Optional[List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - 
`"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `collection_ids: Optional[List[str]]` A list of CollectionId this model belongs to - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` Concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. 
- `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `created_at: Optional[str]` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: Optional[bool]` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `example_asset_ids: Optional[List[str]]` List of all example asset IDs set up by the model owner - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `color: Optional[bool]` Whether the input is a color or not. 
Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model allows multiple kinds, the API will not be able to create the asset on the fly from a data URL unless the `data:<kind>` prefix is included - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. 
- `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
- `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
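The `required` rules above combine with OR semantics: the input becomes required as soon as any rule matches, and by default no rule matches. A minimal sketch of how a client might evaluate such a rule set locally — the dict shapes, and the `values` key used for `conditionalValues`, are assumptions, since these objects are untyped in this reference:

```python
def is_required(rules, values):
    """Evaluate a ModelInputRequired-style rule set against the submitted
    input values (a dict of input name -> value)."""
    if not rules:
        return False  # by default, an input is not required
    if rules.get("always"):
        return True
    # ifDefined: required when any listed input is present
    if any(name in values for name in rules.get("ifDefined", {})):
        return True
    # ifNotDefined: required when any listed input is absent
    if any(name not in values for name in rules.get("ifNotDefined", {})):
        return True
    # conditionalValues: required when another input holds a triggering value
    # (simplified here to an equality check against an assumed "values" list;
    # the real "operation" field may support more operators)
    for name, cond in rules.get("conditionalValues", {}).items():
        if values.get(name) in cond.get("values", []):
            return True
    return False
```

For example, a `mask` input with `{"ifDefined": {"image": "A mask is required when an image is provided"}}` becomes required only once `image` is submitted.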
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use `conceptPrompt` in `parameters` instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size means fewer steps and will increase the learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: 
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights And Bias key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output 
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `source: Optional[Literal["civitai", "huggingface", "other", "scenario"]]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Optional[Literal["copying", "failed", "new", 3 more]]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: Optional[List[str]]` The associated tags (example: ["sci-fi", "landscape"]) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - 
`target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_images_number: Optional[float]` The total number of training images - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for 
the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. - `"bottom"` - `"top"` - `updated_at: Optional[str]` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
response = client.models.get_bulk()
print(response.models)
```

#### Response

```json
{ "models": [ { "id": "id", "privacy": "private", "type": "custom", "accessRestrictions": 0, "authorId": "authorId", "capabilities": [ "3d23d" ], "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "collectionIds": [ "string" ], "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "createdAt": "createdAt", "custom": true, "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "exampleAssetIds": [ "string" ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { 
"foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", "ownerId": "ownerId", "parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "source": "civitai", 
"status": "copying", "tags": [ "string" ], "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingImagesNumber": 0, "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, "trainingStats": { "endedAt": "endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "updatedAt": "updatedAt", "userId": "userId" } ] }
```

## Retrieve

`models.retrieve(model_id: str, **kwargs: ModelRetrieveParams) -> ModelRetrieveResponse`

**get** `/models/{modelId}`

Get the details of the given `modelId`, including its training status and training progress if available. Supports both public access (via the `Authorization` header set to `public-auth-token`) and authenticated user access (including API keys).
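A model's `status` moves through values like `new` and `training` before settling on `trained`, `failed`, or `training-canceled`, so a common client-side pattern is to poll this endpoint until a terminal status is reached. A minimal sketch — the `wait_until_trained` helper is illustrative, not part of the SDK, and assumes only the `client.models.retrieve(model_id).model.status` shape documented in this section:

```python
import time

# Terminal statuses, per the status enum documented in this reference
TERMINAL_STATUSES = {"trained", "failed", "training-canceled"}

def wait_until_trained(client, model_id, interval_s=5.0, timeout_s=3600.0):
    """Poll GET /models/{modelId} until the model reaches a terminal status."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        model = client.models.retrieve(model_id).model
        if model.status in TERMINAL_STATUSES:
            return model
        time.sleep(interval_s)
    raise TimeoutError(
        f"model {model_id} did not reach a terminal status within {timeout_s}s"
    )
```

For long trainings, `trainingProgress.remainingTimeMs` (when present) can inform a larger polling interval.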
### Parameters - `model_id: str` - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation ### Returns - `class ModelRetrieveResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The 
total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only 
published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` Concepts are required for composition-type models - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown.
For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types.
Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
- `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
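In code, the first three rule kinds above can be checked directly against the set of inputs a caller plans to send. A minimal sketch; the `is_input_required` helper and the plain-dict rule shape are our own illustration (not an SDK API), and `conditionalValues` is omitted because its operation format is not spelled out here:

```python
def is_input_required(required, provided):
    """Evaluate a ModelInputRequired-style rule set.

    `required` mirrors the rule object described above (keys: "always",
    "ifDefined", "ifNotDefined"); `provided` maps input names to the
    values the caller intends to send.
    """
    if not required:
        return False  # by default, an input is not required
    if required.get("always"):
        return True
    # ifDefined: required when any of the named inputs IS present
    if any(name in provided for name in (required.get("ifDefined") or {})):
        return True
    # ifNotDefined: required when any of the named inputs is ABSENT
    if any(name not in provided for name in (required.get("ifNotDefined") or {})):
        return True
    return False
```

For example, a hypothetical `mask` input with the rule `{"ifDefined": {"image": "mask is required with image"}}` becomes mandatory only when an `image` input is supplied.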
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and increases the effective learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet:
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not. Example: for 100 steps and a value of 0.2, the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The ID of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` Elo rating - `rating_lower: float` Elo rating confidence interval lower bound - `rating_upper: float` Elo rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. 
To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queued duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str`
The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as a metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. 
- `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
model = client.models.retrieve(
    model_id="modelId",
)
print(model.model)
```

#### Response ```json { "model": { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", "ownerId": "ownerId",
"parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, "trainingStats": { "endedAt": 
"endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" } } ``` ## Update `models.update(model_id: str, **kwargs: ModelUpdateParams) -> ModelUpdateResponse` **put** `/models/{modelId}` Update the given `modelId`. ### Parameters - `model_id: str` - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation - `class_slug: Optional[str]` The slug of the class you want to use (ex: "characters-npcs-mobs-characters"). Set to null to unset the class - `concepts: Optional[Iterable[Concept]]` Concepts are required for composition models, with one or more LoRAs. Only applicable to Flux-based models (and older SD1.5 and SDXL models) - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for flux.1-lora and flux.1-kontext-lora based models. The epoch can only be set if the model has epochs and is in status "trained". The default epoch (if not set) is the final model epoch (latest). Set to null to unset the epoch. - `name: Optional[str]` The model's name (ex: "Cinematic Realism").
If not set, the model's name will be automatically generated when starting training based on training data. - `negative_prompt_embedding: Optional[str]` Add a negative prompt embedding to every model's generation - `parameters: Optional[Parameters]` The parameters to use for the model's training - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and increases the effective learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler:
Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[Sequence[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[Sequence[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps allocated to the text encoder Example: for 100 steps and a value of 0.2, the text encoder is trained for 20 steps, then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters - `prompt_embedding: Optional[str]` Add a prompt embedding to every model's generation - `short_description: Optional[str]` The model's short description (ex: "This model generates highly detailed cinematic scenes."). If not set, the model's short description will be automatically generated when starting training based on training data. - `thumbnail: Optional[str]` The AssetId of the image you want to use as a thumbnail for the model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw"). Set to null to unset the thumbnail - `type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]` The model's type (ex: "flux.1-lora"). The type can only be changed if the model has the "new" status.
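The `textEncoderTrainingRatio` split described above can be sketched with a small helper (hypothetical, not part of the SDK; it only illustrates the documented 100-steps/0.2 example):

```python
def split_train_steps(total_steps: int, text_encoder_ratio: float) -> tuple[int, int]:
    # Hypothetical helper illustrating the documented legacy behavior:
    # the text encoder trains first, then the UNet takes the remaining steps.
    text_steps = round(total_steps * text_encoder_ratio)
    return text_steps, total_steps - text_steps

# 100 steps with a ratio of 0.2 -> 20 text-encoder steps, then 80 UNet steps
print(split_train_steps(100, 0.2))  # (20, 80)
```

In practice the SDK recommends setting `numTextTrainSteps` and `numUNetTrainSteps` directly rather than relying on this legacy ratio.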
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` ### Returns - `class ModelUpdateResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` 
Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 
25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` Concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models.
- `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object.
This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that lacks the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A higher batch size means fewer steps and will increase the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no
class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the 
text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps allocated to the text encoder Example: for 100 steps and a value of 0.2, the text encoder is trained for 20 steps, then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging.
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - 
`short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (ie. 
the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queued duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for
the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. - `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) model = client.models.update( model_id="modelId", ) print(model.model) ``` #### Response ```json { "model": { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", 
"inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", "ownerId": "ownerId", "parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": 
{ "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, "trainingStats": { "endedAt": "endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" } } ``` ## Delete `models.delete(strmodel_id) -> object` **delete** `/models/{modelId}` Delete a model ### Parameters - `model_id: str` ### Returns - `object` ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) model = client.models.delete( "modelId", ) print(model) ``` #### Response ```json {} ``` ## Copy `models.copy(strmodel_id, ModelCopyParams**kwargs) -> ModelCopyResponse` **post** `/models/{modelId}/copy` Copy the given `modelId` to a new model, thumbnail, presets, and all of its training images and pairs if any ### Parameters - `model_id: str` - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation - 
`copy_as_trained: Optional[bool]` If set to true, the training data will be copied - `copy_examples: Optional[bool]` If set to true (the default), the example images will be copied ### Returns - `class ModelCopyResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - 
`training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", 
"unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. 
For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes a `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. 
Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. 
- `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
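As a sketch of how the `required` rules above combine in practice, the following shows two hypothetical `ModelInput` definitions as plain dicts (the input names `prompt`, `image`, and `mask`, and the messages, are made up for illustration; they are not part of any real model):

```python
# Hypothetical ModelInput definitions illustrating the documented "required" rules.
# Field names mirror the schema above; the concrete values are assumptions.

# An input that is always required.
input_always_required = {
    "name": "prompt",
    "type": "string",
    "required": {"always": True},
}

# An input that becomes required only when another input is defined:
# the key names the input to check, the value is the message shown to the user.
input_conditionally_required = {
    "name": "mask",
    "type": "file",
    "required": {
        "ifDefined": {"image": "A mask is required when an image is provided"},
    },
}
```

The same pattern extends to `ifNotDefined` and `conditionalValues`, where the key is likewise the name of the input being checked.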
- `model_keyword: Optional[str]` The model keyword; this is a legacy parameter, please use conceptPrompt in parameters instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size means fewer steps and increases the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as the provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: 
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps spent on the text encoder Example: for 100 steps and a value of 0.2, the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than the maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` Elo rating - `rating_lower: float` Elo rating confidence interval lower bound - `rating_upper: float` Elo rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output 
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. 
To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` 
The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as a metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. 
- `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) response = client.models.copy( model_id="modelId", ) print(response.model) ``` #### Response ```json { "model": { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", "ownerId": "ownerId", 
"parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, "trainingStats": { "endedAt": 
"endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" } } ``` ## Download `models.download(str model_id, ModelDownloadParams**kwargs) -> ModelDownloadResponse` **post** `/models/{modelId}/download` Request a link to download the given `modelId` ### Parameters - `model_id: str` - `model_epoch: Optional[str]` The epoch hash of the model to download Only available for Flux Lora Trained models with epochs Will only apply to the main model in the download request If not set, the default epoch (latest or set at the model level) will be used ### Returns - `class ModelDownloadResponse: …` - `job_id: str` The job id associated with the download request ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) response = client.models.download( model_id="modelId", ) print(response.job_id) ``` #### Response ```json { "jobId": "jobId" } ``` ## Delete Images `models.delete_images(str model_id, ModelDeleteImagesParams**kwargs) -> object` **delete** `/models/{modelId}/images` Delete one or more images from a model ### Parameters - `model_id: str` - `ids: Sequence[str]` The asset ids of the images to delete ### Returns - `object` ### Example ```python import os from scenario_sdk import Scenario client = Scenario( 
api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) response = client.models.delete_images( model_id="modelId", ids=["string"], ) print(response) ``` #### Response ```json {} ``` ## Update Tags `models.update_tags(str model_id, ModelUpdateTagsParams**kwargs) -> ModelUpdateTagsResponse` **put** `/models/{modelId}/tags` Add/delete tags for the given `modelId` ### Parameters - `model_id: str` - `add: Optional[Sequence[str]]` The list of tags to add - `delete: Optional[Sequence[str]]` The list of tags to delete - `strict: Optional[bool]` If true, the function will throw an error if: - one of the tags to add already exists - one of the tags to delete is not found If false, the endpoint will behave as if it were idempotent ### Returns - `class ModelUpdateTagsResponse: …` - `added: List[str]` The list of added tags - `deleted: List[str]` The list of deleted tags ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) response = client.models.update_tags( model_id="modelId", ) print(response.added) ``` #### Response ```json { "added": [ "string" ], "deleted": [ "string" ] } ``` ## Transfer `models.transfer(str model_id, ModelTransferParams**kwargs) -> ModelTransferResponse` **post** `/models/{modelId}/transfer` Transfer (with a copy or a full ownership change) a model to a new owner, including all of its training images ### Parameters - `model_id: str` - `destination_project_id: str` The id of the project to copy and transfer the model to - `destination_team_id: Optional[str]` The id of the team to copy and transfer the model to ### Returns - `class ModelTransferResponse: …` - `model: Model` - `id: str` The model ID (example: 
"model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of Collection IDs this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with the POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs set up by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - 
`"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - 
`compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` Concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The URL of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. 
Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the URL includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. 
- `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
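The constraint fields above (`min`, `max`, `min_length`, `max_length`, `allowed_values`) can be checked locally before a job is created. A minimal sketch under stated assumptions — `check_input` is a hypothetical helper, not part of the SDK, and `spec` is assumed to be a plain dict keyed by the snake_case attribute names above:

```python
def check_input(spec: dict, value) -> list[str]:
    """Validate one input value against the constraint fields described above.

    `spec` is a hypothetical dict such as {"min": 1, "max": 10} or
    {"allowed_values": ["a", "b"]}; returns a list of violation messages.
    """
    errors = []
    # allowed_values restricts the value to a fixed set (single-select behavior)
    if spec.get("allowed_values") is not None and value not in spec["allowed_values"]:
        errors.append(f"value must be one of {spec['allowed_values']}")
    # min/max apply to numeric inputs (exclude bool, a subclass of int in Python)
    if isinstance(value, (int, float)) and not isinstance(value, bool):
        if spec.get("min") is not None and value < spec["min"]:
            errors.append(f"value must be >= {spec['min']}")
        if spec.get("max") is not None and value > spec["max"]:
            errors.append(f"value must be <= {spec['max']}")
    # min_length/max_length apply to string inputs
    if isinstance(value, str):
        if spec.get("min_length") is not None and len(value) < spec["min_length"]:
            errors.append(f"length must be >= {spec['min_length']}")
        if spec.get("max_length") is not None and len(value) > spec["max_length"]:
            errors.append(f"length must be <= {spec['max_length']}")
    return errors
```

For `string_array` inputs, the same length checks would be applied to each item, per the field descriptions above.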
- `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
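The `required` rule set above can also be evaluated client-side before submitting a job. A minimal, hypothetical sketch — `evaluate_required` is not an SDK function, the camelCase wire keys (`always`, `ifDefined`, `ifNotDefined`, `conditionalValues`) are used directly, and `conditionalValues` is simplified here to a name-to-allowed-values mapping rather than the full operation syntax:

```python
def evaluate_required(rules: dict, values: dict) -> list[str]:
    """Return the reasons an input is required, given the other input values.

    An empty list means the input is not required (the documented default).
    """
    reasons = []
    # `always`: the input is unconditionally required
    if rules.get("always"):
        reasons.append("input is always required")
    # `ifDefined`: required when the named input IS defined; value is the message
    for name, message in (rules.get("ifDefined") or {}).items():
        if values.get(name) is not None:
            reasons.append(message)
    # `ifNotDefined`: required when the named input is NOT defined
    for name, message in (rules.get("ifNotDefined") or {}).items():
        if values.get(name) is None:
            reasons.append(message)
    # `conditionalValues` (simplified): required when the named input holds
    # one of the listed values
    for name, allowed in (rules.get("conditionalValues") or {}).items():
        if values.get(name) in allowed:
            reasons.append(f"required because {name} is {values[name]!r}")
    return reasons
```

For example, a prompt input with `{"ifNotDefined": {"image": "prompt is required without an image"}}` becomes required only when no `image` value is supplied.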
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; use conceptPrompt in parameters instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer training steps and increases the effective learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: 
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps allocated to the text encoder Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases (wandb) key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` Elo rating - `rating_lower: float` Elo rating confidence interval lower bound - `rating_upper: float` Elo rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output 
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. 
To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` 
The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as a metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. 
- `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

### Example

```python
import os

from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)

page = client.models.list()
for model in page:
    print(model.id)
```

#### Response

```json
{ "model": { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": 
"negativePromptEmbedding", "ownerId": "ownerId", "parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, 
"startedAt": 0 }, "trainingStats": { "endedAt": "endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" } } ``` ## Domain Types ### Model List Response - `class ModelListResponse: …` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: 
Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: 
Optional[Class]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[Concept]]` Concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[Epoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[EpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The URL of the asset - `inputs: Optional[List[Input]]` The inputs of the model. Only used for custom models. 
To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. 
If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the URL includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[InputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
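As a sketch of how these `required` rules compose, the snippet below builds hypothetical input definitions and a minimal checker. The input names (`prompt`, `mask`, `image`, `mode`, `strength`) and the `{"op": ..., "values": [...]}` shape for `conditionalValues` are illustrative assumptions, not an official SDK structure:

```python
# Hypothetical custom-model input definitions illustrating the rule
# shapes described above: always / ifDefined / ifNotDefined /
# conditionalValues. All names here are made up for illustration.
inputs = [
    {
        "name": "prompt",
        "type": "string",
        "required": {"always": True},  # always required
    },
    {
        "name": "mask",
        "type": "file",
        # required only when the (hypothetical) "image" input is defined
        "required": {"ifDefined": {"image": "A mask is required with an image"}},
    },
    {
        "name": "strength",
        "type": "number",
        # required when another input holds a specific value
        # (assumed value shape: operation plus allowed values)
        "required": {"conditionalValues": {"mode": {"op": "eq", "values": ["img2img"]}}},
    },
]

def is_required(rule, provided):
    """Minimal interpretation of the required rules, for illustration only."""
    if not rule:
        return False  # by default, the input is not required
    if rule.get("always"):
        return True
    if any(name in provided for name in rule.get("ifDefined", {})):
        return True
    if any(name not in provided for name in rule.get("ifNotDefined", {})):
        return True
    for name, cond in rule.get("conditionalValues", {}).items():
        if provided.get(name) in cond.get("values", []):
            return True
    return False
```

Under this reading, `mask` becomes required only once `image` appears in the request payload, and `strength` only when `mode` is `"img2img"`.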
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use `conceptPrompt` in `parameters` instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[Parameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and increases the learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is 
associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text 
encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. 
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[PerformanceStats]` Aggregated performance stats - `variants: List[PerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[PerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: 
Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[Thumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[TrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[TrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[TrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (ie. 
the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[TrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queued duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[UiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, UiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[UiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, `modelInput` and `scaleInput` will be populated automatically - `presets: Optional[List[UiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[UiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[UiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: 
Optional[UiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. - `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Model Create Response - `class ModelCreateResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - 
`"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the 
class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux LoRA trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux LoRA trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux LoRA trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. 
To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. 
If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that lacks the `data:<kind>` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use `conceptPrompt` in `parameters` instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and increases the learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no 
class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the 
text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. 
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - 
`short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for 
the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. - `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Model Get Bulk Response - `class ModelGetBulkResponse: …` - `models: List[Model]` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions 
of the model (0: Free plan, 25: Creator plan, 50: Pro plan, 75: Team plan, 100: Enterprise plan) - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `capabilities: Optional[List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `collection_ids: Optional[List[str]]` A list of CollectionId this model belongs to - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This 
attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts are required for the composition model type - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `created_at: Optional[str]` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: Optional[bool]` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `example_asset_ids: Optional[List[str]]` List of all example asset IDs set up by the model owner - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. 
For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If model provides multiple kinds, the input will be not able to create the asset on the flight on API side with dataurl without data:kind, prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. 
Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
- `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
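The `required` rule set above can be evaluated client-side before a job is submitted, to surface missing inputs early. A minimal sketch, assuming a plain-dict representation of `ModelInputRequired`; the `is_required` helper and the exact wire shape of `conditionalValues` (an object with a `values` list per referenced input) are illustrative assumptions, not part of the SDK:

```python
from typing import Any, Dict, Optional

def is_required(rules: Optional[Dict[str, Any]], values: Dict[str, Any]) -> bool:
    """Evaluate a ModelInputRequired-style rule set against the input values
    supplied so far. By default (no rules), the input is not required."""
    if not rules:
        return False
    if rules.get("always"):
        return True
    # ifDefined: required when the referenced input is defined
    for other in (rules.get("ifDefined") or {}):
        if values.get(other) is not None:
            return True
    # ifNotDefined: required when the referenced input is not defined
    for other in (rules.get("ifNotDefined") or {}):
        if values.get(other) is None:
            return True
    # conditionalValues: required when another input has a specific value
    # (assumed shape: {"inputName": {"operation": ..., "values": [...]}})
    for other, cond in (rules.get("conditionalValues") or {}).items():
        if values.get(other) in cond.get("values", []):
            return True
    return False

# Example: a prompt is required only when no reference image is supplied.
rules = {"ifNotDefined": {"image": "Provide a prompt or a reference image"}}
print(is_required(rules, {}))                 # no image yet -> required
print(is_required(rules, {"image": "asset_x"}))  # image given -> optional
```

The same helper handles the other rule kinds unchanged, since each kind is just a different trigger for the same "required" outcome.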
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A higher value means fewer steps and will increase the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: 
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights and Biases key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output 
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `source: Optional[Literal["civitai", "huggingface", "other", "scenario"]]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Optional[Literal["copying", "failed", "new", 3 more]]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: Optional[List[str]]` The associated tags (example: ["sci-fi", "landscape"]) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - 
`target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_images_number: Optional[float]` The total number of training images - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for 
the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. - `"bottom"` - `"top"` - `updated_at: Optional[str]` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Model Retrieve Response - `class ModelRetrieveResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List 
of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author 
user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models.
- `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The URL of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object.
This is only available for `inputs_array` inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that omits the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
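As an illustrative sketch of how these four rule types might combine, the snippet below builds a hypothetical `required` rule set. The input names (`image`, `mask`, `mode`), the message strings, and the exact `operation`/`values` shape of `conditionalValues` are assumptions for illustration, not values defined by this API:

```python
# Hypothetical ModelInput "required" rule set. The input names ("image",
# "mask", "mode"), the messages, and the conditionalValues shape are
# illustrative assumptions, not part of the documented API.
required_rules = {
    # always: the input must be provided on every request
    "always": False,
    # ifNotDefined: required when the named input is missing
    "ifNotDefined": {"image": "A prompt is required when no image is supplied."},
    # ifDefined: required when the named input is present
    "ifDefined": {"mask": "A mask source is required when a mask is supplied."},
    # conditionalValues: required when another input holds a specific value
    "conditionalValues": {"mode": {"operation": "in", "values": ["inpaint"]}},
}

def always_required(rules: dict) -> bool:
    """An input with no rules at all is not required by default."""
    return bool(rules.get("always", False))
```

Omitting the object entirely leaves the input optional, matching the default described above.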
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; use `conceptPrompt` in `parameters` instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and increases the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no
class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the 
text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not. Example: for 100 steps and a value of 0.2, the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging.
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - 
`short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The URL of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e.
the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for
the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at the specified position. Do not specify both position and after. - `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

### Model Update Response

- `class ModelUpdateResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionIds this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with the POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs set up by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy
of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - 
`category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The URL of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models.
To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for `inputs_array` inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types.
If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that omits the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; use `conceptPrompt` in `parameters` instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and increases the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no
class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the 
text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps spent on the text encoder Example: for 100 steps and a value of 0.2, the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging.
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - 
`short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (ie. 
the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queued duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata. If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for
the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. - `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Model Copy Response - `class ModelCopyResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of 
the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: 
str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts are required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models.
To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types.
If the input accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the URL includes its `data:<kind>` media-type prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and array input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
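As a concrete illustration of these input fields, a custom-model input definition combining a conditional `required` rule with the validation fields documented above might look like the following. The input names and values are hypothetical, and the camelCase keys follow the API-side naming used elsewhere in these docs (e.g. `maxLength`), not the Python SDK's snake_case attributes.

```python
# Hypothetical input definitions for a custom model, composed from the
# schema fields documented above. All concrete values are made up.
prompt_input = {
    "name": "prompt",
    "type": "string",
    "label": "Prompt",
    "prompt": True,       # render as a text area with the prompt spark feature
    "maxLength": 2000,    # applies to the string value
    "required": {
        # Required unless an init image is supplied
        "ifNotDefined": {"image": "A prompt is required when no image is set"},
    },
}

image_input = {
    "name": "image",
    "type": "file",
    "kind": "image",
    "maxSize": 10 * 1024 * 1024,  # bytes, checked against asset.properties.size
}
```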
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; use conceptPrompt in parameters instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size means fewer steps and will increase the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no
class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the 
text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps spent on the text encoder Example: for 100 steps and a value of 0.2, the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging.
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - 
`short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (ie. 
the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for
the selects
- `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button
  - `label: str`
  - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after.
  - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at that position. Do not specify both position and after. - `"bottom"` - `"top"`
- `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

### Model Download Response

- `class ModelDownloadResponse: …`
- `job_id: str` The job ID associated with the download request

### Model Update Tags Response

- `class ModelUpdateTagsResponse: …`
- `added: List[str]` The list of added tags
- `deleted: List[str]` The list of deleted tags

### Model Transfer Response

- `class ModelTransferResponse: …`
- `model: Model`
- `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w")
- `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"`
- `collection_ids: List[str]` A list of CollectionId this model belongs to
- `created_at: str` The model creation date as an ISO string (example:
"2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The 
access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example images URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts is required for the type model: composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. 
- `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object.
This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the `data:<kind>` prefix is included - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword; this is a legacy parameter, please use conceptPrompt in parameters - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. Increasing it requires fewer steps and will increase the learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no
class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the 
text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of steps used to train the text encoder Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging.
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - 
`short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (ie. 
the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for
the selects
- `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button
  - `label: str`
  - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after.
  - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at that position. Do not specify both position and after. - `"bottom"` - `"top"`
- `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

# Description

## Retrieve

`models.description.retrieve(strmodel_id, DescriptionRetrieveParams**kwargs) -> DescriptionRetrieveResponse`

**get** `/models/{modelId}/description`

Get the description of the given `modelId`

### Parameters

- `model_id: str`
- `original_assets: Optional[bool]` If set to true, returns the original asset without transformation

### Returns

- `class DescriptionRetrieveResponse: …`
- `description: Description`
- `assets: List[DescriptionAsset]` The list of assets referenced by the Markdown `{asset}` tag in the description.
- `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `kind: Literal["3d", "audio", "document", 4 more]` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: DescriptionAssetProperties` The properties of the asset, content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[DescriptionAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `url: str` Signed URL to get the asset content - `original_file_url: Optional[str]` The original file URL, without any conversion. Only available for some specific video, audio, and 3D assets. It is only specified if the given asset data has been replaced with a new file during the creation of the asset. - `preview: Optional[DescriptionAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[DescriptionAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `models: List[DescriptionModel]` The list of models referenced by the Markdown `{model}` tag in the description.
- `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `name: Optional[str]` The model name (example: "Cinematic Realism") - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `value: str` The markdown description of the model (ex: `# My model`). We allow the `{asset:}` and `{model:}` tags. 
### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
description = client.models.description.retrieve(
    model_id="modelId",
)
print(description.description)
```

#### Response

```json
{
  "description": {
    "assets": [
      {
        "id": "id",
        "authorId": "authorId",
        "kind": "3d",
        "mimeType": "mimeType",
        "ownerId": "ownerId",
        "privacy": "private",
        "properties": {
          "size": 0,
          "animationFrameCount": 0,
          "bitrate": 0,
          "boneCount": 0,
          "channels": 0,
          "classification": "effect",
          "codecName": "codecName",
          "description": "description",
          "dimensions": [0, 0, 0],
          "duration": 0,
          "faceCount": 0,
          "format": "format",
          "frameRate": 0,
          "hasAnimations": true,
          "hasNormals": true,
          "hasSkeleton": true,
          "hasUVs": true,
          "height": 0,
          "nbFrames": 0,
          "sampleRate": 0,
          "transcription": { "text": "text" },
          "vertexCount": 0,
          "width": 0
        },
        "source": "3d23d",
        "url": "url",
        "originalFileUrl": "originalFileUrl",
        "preview": { "assetId": "assetId", "url": "url" },
        "thumbnail": { "assetId": "assetId", "url": "url" }
      }
    ],
    "models": [
      {
        "id": "id",
        "privacy": "private",
        "type": "custom",
        "authorId": "authorId",
        "name": "name",
        "ownerId": "ownerId",
        "shortDescription": "shortDescription"
      }
    ],
    "value": "value"
  }
}
```

## Update `models.description.update(strmodel_id, DescriptionUpdateParams**kwargs) -> DescriptionUpdateResponse` **put** `/models/{modelId}/description` Update the markdown description of the given `modelId` ### Parameters - `model_id: str` - `description: str` The markdown description of the model (ex: `# My model`). Set to `null` to delete the description.
- `original_assets: Optional[bool]` If set to true, returns the original asset without transformation ### Returns - `class DescriptionUpdateResponse: …` - `description: Description` - `assets: List[DescriptionAsset]` The list of assets referenced by the Markdown `{asset}` tag in the description. - `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `kind: Literal["3d", "audio", "document", 4 more]` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: DescriptionAssetProperties` The properties of the asset, content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[DescriptionAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `url: str` Signed URL to get the asset content - `original_file_url: Optional[str]` The URL of the original file, without any conversion. Only available for some specific video, audio, and 3D assets. Only present when the asset data was replaced with a new file during asset creation. - `preview: Optional[DescriptionAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[DescriptionAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `models: List[DescriptionModel]` The list of models referenced by the Markdown `{model}` tag in the description.
- `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `name: Optional[str]` The model name (example: "Cinematic Realism") - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `value: str` The markdown description of the model (ex: `# My model`). We allow the `{asset:}` and `{model:}` tags. 
### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
description = client.models.description.update(
    model_id="modelId",
    description="description",
)
print(description.description)
```

#### Response

```json
{
  "description": {
    "assets": [
      {
        "id": "id",
        "authorId": "authorId",
        "kind": "3d",
        "mimeType": "mimeType",
        "ownerId": "ownerId",
        "privacy": "private",
        "properties": {
          "size": 0,
          "animationFrameCount": 0,
          "bitrate": 0,
          "boneCount": 0,
          "channels": 0,
          "classification": "effect",
          "codecName": "codecName",
          "description": "description",
          "dimensions": [0, 0, 0],
          "duration": 0,
          "faceCount": 0,
          "format": "format",
          "frameRate": 0,
          "hasAnimations": true,
          "hasNormals": true,
          "hasSkeleton": true,
          "hasUVs": true,
          "height": 0,
          "nbFrames": 0,
          "sampleRate": 0,
          "transcription": { "text": "text" },
          "vertexCount": 0,
          "width": 0
        },
        "source": "3d23d",
        "url": "url",
        "originalFileUrl": "originalFileUrl",
        "preview": { "assetId": "assetId", "url": "url" },
        "thumbnail": { "assetId": "assetId", "url": "url" }
      }
    ],
    "models": [
      {
        "id": "id",
        "privacy": "private",
        "type": "custom",
        "authorId": "authorId",
        "name": "name",
        "ownerId": "ownerId",
        "shortDescription": "shortDescription"
      }
    ],
    "value": "value"
  }
}
```

## Domain Types ### Description Retrieve Response - `class DescriptionRetrieveResponse: …` - `description: Description` - `assets: List[DescriptionAsset]` The list of assets referenced by the Markdown `{asset}` tag in the description.
- `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `kind: Literal["3d", "audio", "document", 4 more]` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: DescriptionAssetProperties` The properties of the asset, content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[DescriptionAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `url: str` Signed URL to get the asset content - `original_file_url: Optional[str]` The URL of the original file, without any conversion. Only available for some specific video, audio, and 3D assets. Only present when the asset data was replaced with a new file during asset creation. - `preview: Optional[DescriptionAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[DescriptionAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `models: List[DescriptionModel]` The list of models referenced by the Markdown `{model}` tag in the description.
- `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `name: Optional[str]` The model name (example: "Cinematic Realism") - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `value: str` The markdown description of the model (ex: `# My model`). We allow the `{asset:}` and `{model:}` tags. ### Description Update Response - `class DescriptionUpdateResponse: …` - `description: Description` - `assets: List[DescriptionAsset]` The list of assets referenced by the Markdown `{asset}` tag in the description. 
- `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `kind: Literal["3d", "audio", "document", 4 more]` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: DescriptionAssetProperties` The properties of the asset, content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[DescriptionAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `url: str` Signed URL to get the asset content - `original_file_url: Optional[str]` The URL of the original file, without any conversion. Only available for some specific video, audio, and 3D assets. Only present when the asset data was replaced with a new file during asset creation. - `preview: Optional[DescriptionAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[DescriptionAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `models: List[DescriptionModel]` The list of models referenced by the Markdown `{model}` tag in the description.
- `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `name: Optional[str]` The model name (example: "Cinematic Realism") - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `value: str` The markdown description of the model (ex: `# My model`). We allow the `{asset:}` and `{model:}` tags. 
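The `value` field embeds cross-references through the `{asset:}` and `{model:}` tags, which the API resolves into the `assets` and `models` lists above. As an illustrative sketch only — the exact tag grammar is an assumption here, and `referenced_ids` is a hypothetical helper rather than part of the SDK — the referenced IDs can be pulled out of a description client-side:

```python
import re

# Hypothetical helper: extract IDs referenced by {asset:ID} and {model:ID}
# tags in a markdown description. The tag syntax is assumed, not confirmed
# by the API reference.
TAG_RE = re.compile(r"\{(asset|model):([A-Za-z0-9_-]+)\}")

def referenced_ids(description: str) -> dict:
    """Group referenced IDs by tag kind ('asset' or 'model')."""
    refs = {"asset": [], "model": []}
    for kind, ref_id in TAG_RE.findall(description):
        refs[kind].append(ref_id)
    return refs

sample = "# My model\nSee {model:model_eyVcnFJcR92BxBkz7N6g5w} and {asset:asset_GTrL3mq4SXWyMxkOHRxlpw}."
print(referenced_ids(sample))
# {'asset': ['asset_GTrL3mq4SXWyMxkOHRxlpw'], 'model': ['model_eyVcnFJcR92BxBkz7N6g5w']}
```

The server performs this resolution for you; a sketch like this is only useful for pre-validating a description before sending it to the update endpoint.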
# Examples ## List `models.examples.list(strmodel_id, ExampleListParams**kwargs) -> ExampleListResponse` **get** `/models/{modelId}/examples` List all examples of the given `modelId` ### Parameters - `model_id: str` - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation ### Returns - `class ExampleListResponse: …` - `examples: List[Example]` - `asset: ExampleAsset` Asset generated by the inference - `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collection_ids: List[str]` A list of CollectionId this asset belongs to - `created_at: str` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `edit_capabilities: List[Literal["DETECTION", "GENERATIVE_FILL", "PIXELATE", 8 more]]` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: Literal["3d", "audio", "document", 4 more]` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: ExampleAssetMetadata` Metadata of the asset with some additional information - `kind: Literal["3d", "audio", "document", 4 more]` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: Literal["3d-texture", "3d-texture-albedo", "3d-texture-metallic", 72 more]` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: Optional[float]` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspect_ratio: Optional[str]` The optional aspect ratio given for the generation, only applicable for some models - `background_opacity: Optional[float]` Integer between 0 and 255 for the opacity of the background in the result images. - `base_model_id: Optional[str]` The baseModelId that may be changed at inference time - `bbox: Optional[List[float]]` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `better_quality: Optional[bool]` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `canny_structure_image: Optional[str]` The control image already processed by the Canny detector. Must reference an existing AssetId. - `clustering: Optional[bool]` Activate clustering. - `color_correction: Optional[bool]` Ensure upscaled tiles have the same color histogram as the original tile. - `color_mode: Optional[str]` - `color_precision: Optional[float]` - `concepts: Optional[List[ExampleAssetMetadataConcept]]` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001"). Only available for Flux Lora Trained models - `contours: Optional[List[List[List[List[float]]]]]` - `control_end: Optional[float]` End step for control. - `copied_at: Optional[str]` The date when the asset was copied to a project - `corner_threshold: Optional[float]` - `creativity: Optional[float]` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativity_decay: Optional[float]` Amount of decay in creativity over the upscale process.
The lower the value, the less creativity is preserved over the upscale process. - `default_parameters: Optional[bool]` If true, use the default parameters - `depth_fidelity: Optional[float]` The depth fidelity if a depth image is provided - `depth_image: Optional[str]` The control image processed by the depth estimator. Must reference an existing AssetId. - `details_level: Optional[float]` Amount of details to remove or add - `dilate: Optional[float]` The number of pixels to dilate the result masks. - `factor: Optional[float]` Contrast factor for Grayscale detector - `filter_speckle: Optional[float]` - `fractality: Optional[float]` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometry_enforcement: Optional[float]` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: Optional[float]` The guidance used to generate this asset - `half_mode: Optional[bool]` - `hdr: Optional[float]` - `height: Optional[float]` - `high_threshold: Optional[float]` High threshold for Canny detector - `horizontal_expansion_ratio: Optional[float]` (deprecated) Horizontal expansion ratio. - `image: Optional[str]` The input image to process. Must reference an existing AssetId or be a data URL. - `image_fidelity: Optional[float]` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style.
- `image_type: Optional[Literal["seamfull", "skybox", "texture"]]` Preserve the seamless properties of skybox or texture images. The input has to be of the same (seamless) type. - `"seamfull"` - `"skybox"` - `"texture"` - `inference_id: Optional[str]` The id of the Inference describing how this image was generated - `input_fidelity: Optional[Literal["high", "low"]]` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `input_location: Optional[Literal["bottom", "left", "middle", 2 more]]` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: Optional[bool]` To invert the relief - `keypoint_threshold: Optional[float]` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layer_difference: Optional[float]` - `length_threshold: Optional[float]` - `lock_expires_at: Optional[str]` The ISO timestamp when the lock on the canvas will expire - `low_threshold: Optional[float]` Low threshold for Canny detector - `mask: Optional[str]` The mask used for the asset generation or editing - `max_iterations: Optional[float]` - `max_threshold: Optional[float]` Maximum threshold for Grayscale conversion - `min_threshold: Optional[float]` Minimum threshold for Grayscale conversion - `modality: Optional[Literal["canny", "depth", "grayscale", 7 more]]` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: Optional[str]` - `model_id: Optional[str]` The modelId used to generate this asset - `model_type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: Optional[str]` - `nb_masks: Optional[float]` - `negative_prompt: Optional[str]` The negative prompt used to 
generate this asset - `negative_prompt_strength: Optional[float]` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation. - `num_outputs: Optional[float]` The number of outputs to generate. - `original_asset_id: Optional[str]` - `output_index: Optional[float]` - `overlap_percentage: Optional[float]` Overlap percentage for the output image. - `override_embeddings: Optional[bool]` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parent_id: Optional[str]` - `parent_job_id: Optional[str]` - `path_precision: Optional[float]` - `points: Optional[List[List[float]]]` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: Optional[float]` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: Optional[str]` - `progress_percent: Optional[float]` - `prompt: Optional[str]` The prompt that guided the asset generation or editing - `prompt_fidelity: Optional[float]` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: Optional[float]` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `reference_images: Optional[List[str]]` The reference images used for the asset generation or editing - `refinement_steps: Optional[float]` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `remove_background: Optional[bool]` Remove background for Grayscale detector - `resize_option: Optional[float]` Size proportion of the input image in the output. 
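The `refinementSteps` rule described above (applied `1 + refinementSteps` times when `scalingFactor == 1`, `refinementSteps` times otherwise) can be expressed as a small helper. This is an illustrative sketch, not part of the SDK:

```python
def refinement_passes(scaling_factor: float, refinement_steps: int) -> int:
    """Number of times the refinement process runs, per the documented rule:
    (1 + refinementSteps) when scalingFactor == 1, else refinementSteps."""
    if scaling_factor == 1:
        return 1 + refinement_steps
    return refinement_steps

print(refinement_passes(1.0, 2))  # scalingFactor == 1 -> 3 passes
print(refinement_passes(2.0, 2))  # scalingFactor > 1 -> 2 passes
```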
- `result_contours: Optional[bool]` Boolean to output the contours. - `result_image: Optional[bool]` Boolean to enable outputting the cut-out object. - `result_mask: Optional[bool]` Boolean to enable returning the masks (binary images) in the response. - `root_parent_id: Optional[str]` - `save_flipbook: Optional[bool]` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scaling_factor: Optional[float]` Scaling factor (when `targetWidth` not specified) - `scheduler: Optional[str]` The scheduler used to generate this asset - `seed: Optional[str]` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: Optional[bool]` Sharpen tiles. - `shiny: Optional[float]` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: Optional[float]` - `sketch: Optional[bool]` Activate sketch detection instead of canny. - `source_project_id: Optional[str]` - `splice_threshold: Optional[float]` - `strength: Optional[float]` The strength. Only available for the `flux-kontext` LoRA model. - `structure_fidelity: Optional[float]` Strength for the input image structure preservation - `structure_image: Optional[str]` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: Optional[Literal["3d-cartoon", "3d-rendered", "anime", 23 more]]` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `style_fidelity: Optional[float]` The higher the value, the more it will look like the style image(s) - `style_images: Optional[List[str]]` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `style_images_fidelity: Optional[float]` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `target_height: Optional[float]` The target height of the output image. - `target_width: Optional[float]` Target width for the upscaled image; takes priority over the scaling factor - `text: Optional[str]` A textual description / keywords describing the object of interest. - `texture: Optional[str]` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: Optional[ExampleAssetMetadataThumbnail]` The thumbnail of the canvas - `asset_id: str` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for the canvas - `tile_style: Optional[bool]` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `training_image: Optional[bool]` - `vertical_expansion_ratio: Optional[float]` (deprecated) Vertical expansion ratio. - `width: Optional[float]` The width of the rendered image. 
- `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: ExampleAssetProperties` The properties of the asset, content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[ExampleAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: Literal["error", "pending", "success"]` The actual status - `"error"` - `"pending"` - `"success"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `updated_at: str` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: str` Signed URL to get the asset content - `automatic_captioning: Optional[str]` Automatic captioning of the asset - `description: Optional[str]` The description; it will contain, in order of priority: - the manual description - the advanced captioning when the asset is used in a training flow - the automatic captioning - `embedding: Optional[List[float]]` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not Detection maps) - `first_frame: Optional[ExampleAssetFirstFrame]` The video asset's first frame. Contains the assetId and the url of the first frame. - `asset_id: str` - `url: str` - `is_hidden: Optional[bool]` Whether the asset is hidden. - `last_frame: Optional[ExampleAssetLastFrame]` The video asset's last frame. Contains the assetId and the url of the last frame. - `asset_id: str` - `url: str` - `nsfw: Optional[List[str]]` The NSFW labels - `original_file_url: Optional[str]` The original file url. Contains the url of the original file, without any conversion. Only available for some specific video, audio and threeD assets. Only specified if the given asset data has been replaced with a new file during the creation of the asset. 
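The priority order for the `description` field (manual description, then advanced captioning, then automatic captioning) can be mirrored client-side. A minimal illustrative sketch; the function name and argument shape are assumptions, not SDK API:

```python
from typing import Optional

def resolve_description(manual: Optional[str],
                        advanced_captioning: Optional[str],
                        automatic_captioning: Optional[str]) -> Optional[str]:
    """Return the description with the documented priority: manual first,
    then advanced captioning (training flow), then automatic captioning."""
    for candidate in (manual, advanced_captioning, automatic_captioning):
        if candidate:
            return candidate
    return None

print(resolve_description(None, "a knight in armor", "a person"))
```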
- `output_index: Optional[float]` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: Optional[ExampleAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[ExampleAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `model_id: str` Model id of the model used to generate the asset - `inference_id: Optional[str]` Inference id of the inference used to generate the asset - `inference_parameters: Optional[ExampleInferenceParameters]` The inference parameters used to generate the asset - `prompt: str` Full text prompt including the model placeholder. (example: "an illustration of phoenix in a fantasy world, flying over a mountain, 8k, bokeh effect") - `type: Literal["controlnet", "controlnet_img2img", "controlnet_inpaint", 15 more]` The type of inference to use. Example: txt2img, img2img, etc. Selecting the right type will condition the expected parameters. Note: if model.type is `sd-xl*` or `sd-1_5*`, when using the `"inpaint"` inference type, Scenario determines the best available `baseModel` for a given `modelId`: one of `["stable-diffusion-inpainting", "stable-diffusion-xl-1.0-inpainting-0.1"]` will be used. - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `aspect_ratio: Optional[Literal["16:9", "1:1", "21:9", 8 more]]` The aspect ratio of the generated images. 
Only used for the model flux.1.1-pro-ultra. The aspect ratio is a string formatted as "width:height" (example: "16:9"). - `"16:9"` - `"1:1"` - `"21:9"` - `"2:3"` - `"3:2"` - `"3:4"` - `"4:3"` - `"4:5"` - `"5:4"` - `"9:16"` - `"9:21"` - `base_model_id: Optional[str]` The base model to use for the inference. Only Flux LoRA models can use this parameter. Allowed values are available in the model's attribute: `compliantModelIds` - `concepts: Optional[List[ExampleInferenceParametersConcept]]` - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `control_end: Optional[float]` Specifies how long the ControlNet guidance should be applied during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance is active. For example: - 1.0: ControlNet guidance is applied during all inference steps - 0.5: ControlNet guidance is only applied during the first half of inference steps Default values: - 0.5 for Canny modality - 0.6 for all other modalities - `control_image: Optional[str]` Signed URL to display the controlnet input image - `control_image_id: Optional[str]` Asset id of the controlnet input image - `control_start: Optional[float]` Specifies the starting point of the ControlNet guidance during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance starts. For example: - 0.0: ControlNet guidance starts at the beginning of the inference steps - 0.5: ControlNet guidance starts at the middle of the inference steps - `disable_merging: Optional[bool]` If set to true, the entire input image will likely change during inpainting. 
This results in faster inferences, but the output image will be harder to integrate if the input is just a small part of a larger image. - `disable_modality_detection: Optional[bool]` If false, the process uses the given image to detect the modality. If true (default), the process will not try to detect the modality of the given image. For example: with the `pose` modality and `false`, the process will detect the pose of people in the given image; with the `depth` modality and `false`, the process will detect the depth of the given image; with the `scribble` modality and `true`, the process will use the given image as a scribble. ⚠️ For models of the FLUX schnell or dev families, this parameter is ignored. The modality detection is always disabled. ⚠️ - `guidance: Optional[float]` Controls how closely the generated image follows the prompt. Higher values result in stronger adherence to the prompt. Default and allowed values depend on the model type: - For Flux dev models, the default is 3.5 and allowed values are within [0, 10] - For Flux pro models, the default is 3 and allowed values are within [2, 5] - For SDXL models, the default is 6 and allowed values are within [0, 20] - For SD1.5 models, the default is 7.5 and allowed values are within [0, 20] - `height: Optional[float]` The height of the generated images; must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, or `sd-xl-composition`, the height must be within [512, 2048] If model.type is `sd-1_5`, the height must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `hide_results: Optional[bool]` If set, generated assets will be hidden and not returned in the list of images of the inference or when listing assets (default: false) - `image: Optional[str]` Signed URL to display the input image - `image_id: Optional[str]` Asset id of the input image - `intermediate_images: Optional[bool]` Enable or disable the 
intermediate images generation (default: false) - `ip_adapter_image: Optional[str]` Signed URL to display the IpAdapter image - `ip_adapter_image_id: Optional[str]` Asset id of the input IpAdapter image - `ip_adapter_image_ids: Optional[List[str]]` Asset ids of the input IpAdapter images - `ip_adapter_images: Optional[List[str]]` Signed URLs to display the IpAdapter images - `ip_adapter_scale: Optional[float]` IpAdapter scale factor (within [0.0, 1.0], default: 0.9). - `ip_adapter_scales: Optional[List[float]]` IpAdapter scale factors (within [0.0, 1.0], default: 0.9). - `ip_adapter_type: Optional[Literal["character", "style"]]` The type of IP Adapter model to use. Must be one of [`style`, `character`]; defaults to `style` - `"character"` - `"style"` - `mask: Optional[str]` Signed URL to display the mask image - `mask_id: Optional[str]` Asset id of the mask image - `modality: Optional[str]` The modality associated with the control image used for the generation: either a combination of modalities or a preset. For models of the SD1.5 family: - up to 3 modalities from `canny`, `pose`, `depth`, `lines`, `seg`, `scribble`, `lineart`, `normal-map`, `illusion` - or one of the following presets: `character`, `landscape`, `city`, `interior`. For models of the SDXL family: - up to 3 modalities from `canny`, `pose`, `depth`, `seg`, `illusion`, `scribble` - or one of the following presets: `character`, `landscape`. For models of the FLUX schnell or dev families: - one modality from: `canny`, `tile`, `depth`, `blur`, `pose`, `gray`, `low-quality` Optionally, you can associate a value with these modalities or presets. The value must be within `]0.0, 1.0]`. Examples: - `canny` - `depth:0.5,pose:1.0` - `canny:0.5,depth:0.5,lines:0.3` - `landscape` - `character:0.5` - `illusion:1` Note: if you use a value that is not supported by the model family, this will result in an error. - `model_epoch: Optional[str]` The epoch of the model to use for the inference. 
Only available for Flux Lora Trained models. - `negative_prompt: Optional[str]` The prompt not to guide the image generation, ignored when guidance < 1 (example: "((ugly face))") For Flux-based models (not Fast-Flux): requires negativePromptStrength > 0 and active only for inference types txt2img / img2img / controlnet. - `negative_prompt_strength: Optional[float]` Only applicable for flux-dev based models for `txt2img`, `img2img`, and `controlnet` inference types. Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation (within [1, 150], default: 30) - `num_samples: Optional[float]` The number of images to generate (within [1, 128], default: 4) - `reference_adain: Optional[bool]` Whether to use reference AdaIN. Only for the "reference" inference type - `reference_attn: Optional[bool]` Whether to use the reference query for self-attention's context. Only for the "reference" inference type - `scheduler: Optional[Literal["DDIMScheduler", "DDPMScheduler", "DEISMultistepScheduler", 12 more]]` The scheduler to use to override the default configured for the model. See the detailed documentation for more details. - `"DDIMScheduler"` - `"DDPMScheduler"` - `"DEISMultistepScheduler"` - `"DPMSolverMultistepScheduler"` - `"DPMSolverSinglestepScheduler"` - `"EulerAncestralDiscreteScheduler"` - `"EulerDiscreteScheduler"` - `"HeunDiscreteScheduler"` - `"KDPM2AncestralDiscreteScheduler"` - `"KDPM2DiscreteScheduler"` - `"LCMScheduler"` - `"LMSDiscreteScheduler"` - `"PNDMScheduler"` - `"TCDScheduler"` - `"UniPCMultistepScheduler"` - `seed: Optional[str]` Used to reproduce previous results. Default: randomly generated number. 
- `strength: Optional[float]` Controls the noise intensity introduced to the input image, where a value of 1.0 completely erases the original image's details. Available for img2img and inpainting. (within [0.01, 1.0], default: 0.75) - `style_fidelity: Optional[float]` If styleFidelity is 1.0, the control image is more important; if 0.0, the prompt is more important; values in between balance the two. Only for the "reference" inference type - `width: Optional[float]` The width of the generated images; must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, or `sd-xl-composition`, the width must be within [512, 2048] If model.type is `sd-1_5`, the width must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `job: Optional[ExampleJob]` The job associated with the asset - `created_at: str` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `job_id: str` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `job_type: Literal["assets-download", "canvas-export", "caption", 36 more]` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: ExampleJobMetadata` Metadata of the job with some additional information - `asset_ids: Optional[List[str]]` List of produced assets for this job - 
`error: Optional[str]` The error for the job, if any - `flow: Optional[List[ExampleJobMetadataFlow]]` The flow of the job. Only available for workflow jobs. - `id: str` The id of the node. - `status: Literal["failure", "pending", "processing", 2 more]` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: Literal["custom-model", "for-each", "generate-prompt", 7 more]` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: Optional[List[ExampleJobMetadataFlowAsset]]` List of produced assets for this node. - `asset_id: str` - `url: str` - `count: Optional[float]` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of the array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `depends_on: Optional[List[str]]` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `include_outputs_in_workflow_job: Optional[Literal[true]]` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: Optional[List[ExampleJobMetadataFlowInput]]` The inputs of the node. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: Optional[List[List[ExampleJobMetadataFlowInputItem]]]` The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. 
If multiple kinds are provided, the API will not be able to create the asset on the fly from a data URL that lacks the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputItemRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is optional unless you want all outputs of the node; to get all outputs of a node, use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputItemRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. 
Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the URL includes the `data:<kind>` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is optional unless you want all outputs of the node; to get all outputs of a node, use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `items: Optional[List[str]]` Statically-configured items for a List node. The node outputs this array as-is when executed. 
Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iteration_index: Optional[float]` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `job_id: Optional[str]` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available for nodes that have started; a node that is "Pending" in a running workflow job has not started. - `logic: Optional[ExampleJobMetadataFlowLogic]` The logic of the node. Only available for logic nodes. - `cases: Optional[List[ExampleJobMetadataFlowLogicCase]]` The cases of the logic. Only available for if/else nodes. - `condition: str` - `value: str` - `default: Optional[str]` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: Optional[str]` The transform of the logic. Only available for transform nodes. - `logic_type: Optional[Literal["if-else"]]` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loop_body_node_ids: Optional[List[str]]` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loop_node_id: Optional[str]` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `model_id: Optional[str]` The model id for the node. Mainly used for custom model tasks. - `output: Optional[object]` The output of the node. Only available for logic nodes. - `workflow_id: Optional[str]` The workflow id for the node. Mainly used for workflow tasks. - `hint: Optional[str]` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: Optional[Dict[str, object]]` The inputs for the job - `output: Optional[Dict[str, object]]` May contain the output of the job for specific custom model jobs. 
Only available for custom models that generate non-asset outputs. Example: LLM text results. - `output_model_id: Optional[str]` For voice-clone jobs: the ID of the model being trained. - `workflow_id: Optional[str]` The workflow ID of the job if the job is part of a workflow. - `workflow_job_id: Optional[str]` The workflow job ID of the job if the job is part of a workflow job. - `progress: float` Progress of the job (between 0 and 1) - `status: Literal["canceled", "failure", "finalizing", 5 more]` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `status_history: List[ExampleJobStatusHistory]` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. - `date: str` - `status: Literal["canceled", "failure", "finalizing", 5 more]` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updated_at: str` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `author_id: Optional[str]` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: Optional[ExampleJobBilling]` The billing of the job - `cu_cost: float` - `cu_discount: float` - `owner_id: Optional[str]` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") ### Example ```python
import os

from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
examples = client.models.examples.list(
    model_id="modelId",
)
print(examples.examples)
```
#### Response ```json { "examples": [ { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", 
"type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, "lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", 
"sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 }, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } }, "modelId": "modelId", "inferenceId": "inferenceId", "inferenceParameters": { "prompt": "prompt", "type": "controlnet", "aspectRatio": "16:9", "baseModelId": "baseModelId", "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "controlEnd": 0.1, "controlImage": "controlImage", "controlImageId": "controlImageId", "controlStart": 0, "disableMerging": true, "disableModalityDetection": true, "guidance": 0, "height": 64, 
"hideResults": true, "image": "image", "imageId": "imageId", "intermediateImages": true, "ipAdapterImage": "ipAdapterImage", "ipAdapterImageId": "ipAdapterImageId", "ipAdapterImageIds": [ "string" ], "ipAdapterImages": [ "string" ], "ipAdapterScale": 0, "ipAdapterScales": [ 0 ], "ipAdapterType": "character", "mask": "mask", "maskId": "maskId", "modality": "modality", "modelEpoch": "modelEpoch", "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 1, "numSamples": 1, "referenceAdain": true, "referenceAttn": true, "scheduler": "DDIMScheduler", "seed": "seed", "strength": 0.01, "styleFidelity": 0, "width": 64 }, "job": { "createdAt": "createdAt", "jobId": "jobId", "jobType": "assets-download", "metadata": { "assetIds": [ "string" ], "error": "error", "flow": [ { "id": "id", "status": "failure", "type": "custom-model", "assets": [ { "assetId": "assetId", "url": "url" } ], "count": 0, "dependsOn": [ "string" ], "includeOutputsInWorkflowJob": true, "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "items": [ [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ] ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", 
"max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ], "items": [ "string" ], "iterationIndex": 0, "jobId": "jobId", "logic": { "cases": [ { "condition": "condition", "value": "value" } ], "default": "default", "transform": "transform" }, "logicType": "if-else", "loopBodyNodeIds": [ "string" ], "loopNodeId": "loopNodeId", "modelId": "modelId", "output": {}, "workflowId": "workflowId" } ], "hint": "hint", "input": { "foo": "bar" }, "output": { "foo": "bar" }, "outputModelId": "outputModelId", "workflowId": "workflowId", "workflowJobId": "workflowJobId" }, "progress": 0, "status": "canceled", "statusHistory": [ { "date": "date", "status": "canceled" } ], "updatedAt": "updatedAt", "authorId": "authorId", "billing": { "cuCost": 0, "cuDiscount": 0 }, "ownerId": "ownerId" } } ] } ``` ## Update `models.examples.update(strmodel_id, ExampleUpdateParams**kwargs) -> ExampleUpdateResponse` **put** `/models/{modelId}/examples` Add/delete/sort examples of the given `modelId` ### Parameters - `model_id: str` - `asset_ids: Sequence[str]` The list of asset ids to use as examples of the model - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation ### Returns - `class ExampleUpdateResponse: …` - `examples: List[Example]` - `asset: ExampleAsset` Asset generated by the inference - `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collection_ids: List[str]` A list of CollectionId this asset belongs to - `created_at: str` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - 
`edit_capabilities: List[Literal["DETECTION", "GENERATIVE_FILL", "PIXELATE", 8 more]]` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: Literal["3d", "audio", "document", 4 more]` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: ExampleAssetMetadata` Metadata of the asset with some additional information - `kind: Literal["3d", "audio", "document", 4 more]` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: Literal["3d-texture", "3d-texture-albedo", "3d-texture-metallic", 72 more]` The type of the asset. Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - 
`"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: Optional[float]` How angular is the surface? 0 is like a sphere, 1 is like a mechanical object - `aspect_ratio: Optional[str]` The optional aspect ratio given for the generation, only applicable for some models - `background_opacity: Optional[float]` An integer between 0 and 255 setting the opacity of the background in the result images. - `base_model_id: Optional[str]` The baseModelId that may be changed at inference time - `bbox: Optional[List[float]]` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `better_quality: Optional[bool]` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `canny_structure_image: Optional[str]` The control image already processed by the canny detector. Must reference an existing AssetId. - `clustering: Optional[bool]` Activate clustering. - `color_correction: Optional[bool]` Ensure upscaled tiles have the same color histogram as the original tile. - `color_mode: Optional[str]` - `color_precision: Optional[float]` - `concepts: Optional[List[ExampleAssetMetadataConcept]]` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. 
- `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `contours: Optional[List[List[List[List[float]]]]]` - `control_end: Optional[float]` End step for control. - `copied_at: Optional[str]` The date when the asset was copied to a project - `corner_threshold: Optional[float]` - `creativity: Optional[float]` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativity_decay: Optional[float]` Amount of decay in creativity over the upscale process. The lower the value, the less the creativity will be preserved over the upscale process. - `default_parameters: Optional[bool]` If true, use the default parameters - `depth_fidelity: Optional[float]` The depth fidelity if a depth image is provided - `depth_image: Optional[str]` The control image processed by the depth estimator. Must reference an existing AssetId. - `details_level: Optional[float]` Amount of details to remove or add - `dilate: Optional[float]` The number of pixels to dilate the result masks. - `factor: Optional[float]` Contrast factor for Grayscale detector - `filter_speckle: Optional[float]` - `fractality: Optional[float]` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometry_enforcement: Optional[float]` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. 
- `guidance: Optional[float]` The guidance used to generate this asset - `half_mode: Optional[bool]` - `hdr: Optional[float]` - `height: Optional[float]` - `high_threshold: Optional[float]` High threshold for Canny detector - `horizontal_expansion_ratio: Optional[float]` (deprecated) Horizontal expansion ratio. - `image: Optional[str]` The input image to process. Must reference an existing AssetId or be a data URL. - `image_fidelity: Optional[float]` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `image_type: Optional[Literal["seamfull", "skybox", "texture"]]` Preserve the seamless properties of skybox or texture images. The input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inference_id: Optional[str]` The id of the Inference describing how this image was generated - `input_fidelity: Optional[Literal["high", "low"]]` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `input_location: Optional[Literal["bottom", "left", "middle", 2 more]]` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: Optional[bool]` To invert the relief - `keypoint_threshold: Optional[float]` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layer_difference: Optional[float]` - `length_threshold: Optional[float]` - `lock_expires_at: Optional[str]` The ISO timestamp when the lock on the canvas will expire - `low_threshold: Optional[float]` Low threshold for Canny detector - `mask: Optional[str]` The mask used for the asset generation or editing - `max_iterations: Optional[float]` - `max_threshold: Optional[float]` Maximum threshold for Grayscale conversion - `min_threshold: Optional[float]` Minimum threshold for Grayscale conversion - `modality: Optional[Literal["canny", "depth", "grayscale", 7 more]]` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: Optional[str]` - `model_id: Optional[str]` The modelId used to generate this asset - `model_type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: Optional[str]` - `nb_masks: Optional[float]` - `negative_prompt: Optional[str]` The negative prompt used to 
generate this asset - `negative_prompt_strength: Optional[float]` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation. - `num_outputs: Optional[float]` The number of outputs to generate. - `original_asset_id: Optional[str]` - `output_index: Optional[float]` - `overlap_percentage: Optional[float]` Overlap percentage for the output image. - `override_embeddings: Optional[bool]` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parent_id: Optional[str]` - `parent_job_id: Optional[str]` - `path_precision: Optional[float]` - `points: Optional[List[List[float]]]` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: Optional[float]` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: Optional[str]` - `progress_percent: Optional[float]` - `prompt: Optional[str]` The prompt that guided the asset generation or editing - `prompt_fidelity: Optional[float]` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: Optional[float]` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `reference_images: Optional[List[str]]` The reference images used for the asset generation or editing - `refinement_steps: Optional[float]` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `remove_background: Optional[bool]` Remove background for Grayscale detector - `resize_option: Optional[float]` Size proportion of the input image in the output. 
- `result_contours: Optional[bool]` Boolean to output the contours. - `result_image: Optional[bool]` Boolean to enable outputting the cut-out object. - `result_mask: Optional[bool]` Boolean to enable returning the masks (binary images) in the response. - `root_parent_id: Optional[str]` - `save_flipbook: Optional[bool]` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scaling_factor: Optional[float]` Scaling factor (when `targetWidth` not specified) - `scheduler: Optional[str]` The scheduler used to generate this asset - `seed: Optional[str]` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: Optional[bool]` Sharpen tiles. - `shiny: Optional[float]` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: Optional[float]` - `sketch: Optional[bool]` Activate sketch detection instead of canny. - `source_project_id: Optional[str]` - `splice_threshold: Optional[float]` - `strength: Optional[float]` The strength. Only available for the `flux-kontext` LoRA model. - `structure_fidelity: Optional[float]` Strength for the input image structure preservation - `structure_image: Optional[str]` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: Optional[Literal["3d-cartoon", "3d-rendered", "anime", 23 more]]` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `style_fidelity: Optional[float]` The higher the value, the more it will look like the style image(s) - `style_images: Optional[List[str]]` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `style_images_fidelity: Optional[float]` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `target_height: Optional[float]` The target height of the output image. - `target_width: Optional[float]` Target width for the upscaled image, takes priority over the scaling factor - `text: Optional[str]` A textual description / keywords describing the object of interest. - `texture: Optional[str]` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: Optional[ExampleAssetMetadataThumbnail]` The thumbnail of the canvas - `asset_id: str` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for the canvas - `tile_style: Optional[bool]` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `training_image: Optional[bool]` - `vertical_expansion_ratio: Optional[float]` (deprecated) Vertical expansion ratio. - `width: Optional[float]` The width of the rendered image. 
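Because `target_width` takes priority over `scaling_factor` when both are provided, the resolution order can be sketched as follows (a hypothetical client-side helper, not part of the SDK):

```python
from typing import Optional

def resolve_output_width(source_width: int,
                         scaling_factor: Optional[float] = None,
                         target_width: Optional[int] = None) -> int:
    """Pick the upscale output width: targetWidth wins over scalingFactor."""
    if target_width is not None:
        return target_width
    if scaling_factor is not None:
        return round(source_width * scaling_factor)
    return source_width
```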
- `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: ExampleAssetProperties` The properties of the asset, content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
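Since almost every field of `properties` is optional and depends on the asset kind (audio assets carry `channels`/`sample_rate`, meshes carry `face_count`/`vertex_count`, and so on), it is safest to read them defensively. A sketch, assuming the properties arrive as a plain dict:

```python
def mesh_summary(properties: dict) -> str:
    """Summarize the mesh-specific fields, skipping any that are absent."""
    parts = []
    for key, label in [("vertex_count", "vertices"),
                       ("face_count", "faces"),
                       ("bone_count", "bones")]:
        value = properties.get(key)
        if value is not None:
            parts.append(f"{int(value)} {label}")
    return ", ".join(parts) or "no mesh data"
```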
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[ExampleAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: Literal["error", "pending", "success"]` The actual status - `"error"` - `"pending"` - `"success"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `updated_at: str` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: str` Signed URL to get the asset content - `automatic_captioning: Optional[str]` Automatic captioning of the asset - `description: Optional[str]` The description; it will contain, in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: Optional[List[float]]` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not Detection maps) - `first_frame: Optional[ExampleAssetFirstFrame]` The video asset's first frame. Contains the assetId and the url of the first frame. - `asset_id: str` - `url: str` - `is_hidden: Optional[bool]` Whether the asset is hidden. - `last_frame: Optional[ExampleAssetLastFrame]` The video asset's last frame. Contains the assetId and the url of the last frame. - `asset_id: str` - `url: str` - `nsfw: Optional[List[str]]` The NSFW labels - `original_file_url: Optional[str]` The original file url. Contains the url of the original file, without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
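The `description` priority order above (manual description, then advanced captioning, then automatic captioning) can be sketched as a first-match fallback (hypothetical helper, not part of the SDK):

```python
from typing import Optional

def effective_description(manual: Optional[str],
                          advanced_caption: Optional[str],
                          automatic_caption: Optional[str]) -> Optional[str]:
    """Return the first non-empty description, in priority order."""
    for candidate in (manual, advanced_caption, automatic_caption):
        if candidate:
            return candidate
    return None
```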
- `output_index: Optional[float]` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0 - `preview: Optional[ExampleAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[ExampleAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `model_id: str` Model id of the model used to generate the asset - `inference_id: Optional[str]` Inference id of the inference used to generate the asset - `inference_parameters: Optional[ExampleInferenceParameters]` The inference parameters used to generate the asset - `prompt: str` Full text prompt including the model placeholder. (example: "an illustration of phoenix in a fantasy world, flying over a mountain, 8k, bokeh effect") - `type: Literal["controlnet", "controlnet_img2img", "controlnet_inpaint", 15 more]` The type of inference to use. Example: txt2img, img2img, etc. Selecting the right type will condition the expected parameters. Note: if model.type is `sd-xl*` or `sd-1_5*`, when using the `"inpaint"` inference type, Scenario determines the best available `baseModel` for a given `modelId`: one of `["stable-diffusion-inpainting", "stable-diffusion-xl-1.0-inpainting-0.1"]` will be used. - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `aspect_ratio: Optional[Literal["16:9", "1:1", "21:9", 8 more]]` The aspect ratio of the generated images. 
Only used for the model flux.1.1-pro-ultra. The aspect ratio is a string formatted as "width:height" (example: "16:9"). - `"16:9"` - `"1:1"` - `"21:9"` - `"2:3"` - `"3:2"` - `"3:4"` - `"4:3"` - `"4:5"` - `"5:4"` - `"9:16"` - `"9:21"` - `base_model_id: Optional[str]` The base model to use for the inference. Only Flux LoRA models can use this parameter. Allowed values are available in the model's attribute: `compliantModelIds` - `concepts: Optional[List[ExampleInferenceParametersConcept]]` - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001"). Only available for Flux Lora Trained models - `control_end: Optional[float]` Specifies how long the ControlNet guidance should be applied during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance is active. For example: - 1.0: ControlNet guidance is applied during all inference steps - 0.5: ControlNet guidance is only applied during the first half of inference steps Default values: - 0.5 for Canny modality - 0.6 for all other modalities - `control_image: Optional[str]` Signed URL to display the controlnet input image - `control_image_id: Optional[str]` Asset id of the controlnet input image - `control_start: Optional[float]` Specifies the starting point of the ControlNet guidance during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance starts. For example: - 0.0: ControlNet guidance starts at the beginning of the inference steps - 0.5: ControlNet guidance starts at the middle of the inference steps - `disable_merging: Optional[bool]` If set to true, the entire input image will likely change during inpainting. 
This results in faster inferences, but the output image will be harder to integrate if the input is just a small part of a larger image. - `disable_modality_detection: Optional[bool]` If false, the process uses the given image to detect the modality. If true (default), the process will not try to detect the modality of the given image. For example: - with `pose` modality and `false` value, the process will detect the pose of people in the given image - with `depth` modality and `false` value, the process will detect the depth of the given image - with `scribble` modality and `true` value, the process will use the given image as a scribble ⚠️ For models of the FLUX schnell or dev families, this parameter is ignored. The modality detection is always disabled. ⚠️ - `guidance: Optional[float]` Controls how closely the generated image follows the prompt. Higher values result in stronger adherence to the prompt. Default and allowed values depend on the model type: - For Flux dev models, the default is 3.5 and allowed values are within [0, 10] - For Flux pro models, the default is 3 and allowed values are within [2, 5] - For SDXL models, the default is 6 and allowed values are within [0, 20] - For SD1.5 models, the default is 7.5 and allowed values are within [0, 20] - `height: Optional[float]` The height of the generated images, must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, `sd-xl-composition` the height must be within [512, 2048] If model.type is `sd-1_5`, the height must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `hide_results: Optional[bool]` If set, generated assets will be hidden and not returned in the list of images of the inference or when listing assets (default: false) - `image: Optional[str]` Signed URL to display the input image - `image_id: Optional[str]` Asset id of the input image - `intermediate_images: Optional[bool]` Enable or disable the 
intermediate images generation (default: false) - `ip_adapter_image: Optional[str]` Signed URL to display the IpAdapter image - `ip_adapter_image_id: Optional[str]` Asset id of the input IpAdapter image - `ip_adapter_image_ids: Optional[List[str]]` Asset id of the input IpAdapter images - `ip_adapter_images: Optional[List[str]]` Signed URL to display the IpAdapter images - `ip_adapter_scale: Optional[float]` IpAdapter scale factor (within [0.0, 1.0], default: 0.9). - `ip_adapter_scales: Optional[List[float]]` IpAdapter scale factors (within [0.0, 1.0], default: 0.9). - `ip_adapter_type: Optional[Literal["character", "style"]]` The type of IP Adapter model to use. Must be one of [`style`, `character`], defaults to `style` - `"character"` - `"style"` - `mask: Optional[str]` Signed URL to display the mask image - `mask_id: Optional[str]` Asset id of the mask image - `modality: Optional[str]` The modality associated with the control image used for the generation: it can be either a combination of modalities (up to a maximum number per model family) or a preset. For models of the SD1.5 family: - up to 3 modalities from `canny`, `pose`, `depth`, `lines`, `seg`, `scribble`, `lineart`, `normal-map`, `illusion` - or one of the following presets: `character`, `landscape`, `city`, `interior`. For models of the SDXL family: - up to 3 modalities from `canny`, `pose`, `depth`, `seg`, `illusion`, `scribble` - or one of the following presets: `character`, `landscape`. For models of the FLUX schnell or dev families: - one modality from: `canny`, `tile`, `depth`, `blur`, `pose`, `gray`, `low-quality` Optionally, you can associate a value with these modalities or presets. The value must be within `]0.0, 1.0]`. Examples: - `canny` - `depth:0.5,pose:1.0` - `canny:0.5,depth:0.5,lines:0.3` - `landscape` - `character:0.5` - `illusion:1` Note: if you use a value that is not supported by the model family, this will result in an error. - `model_epoch: Optional[str]` The epoch of the model to use for the inference. 
Only available for Flux Lora Trained models. - `negative_prompt: Optional[str]` The prompt not to guide the image generation, ignored when guidance < 1 (example: "((ugly face))") For Flux-based models (not Fast-Flux): requires negativePromptStrength > 0 and active only for inference types txt2img / img2img / controlnet. - `negative_prompt_strength: Optional[float]` Only applicable to flux-dev based models for `txt2img`, `img2img`, and `controlnet` inference types. Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation (within [1, 150], default: 30) - `num_samples: Optional[float]` The number of images to generate (within [1, 128], default: 4) - `reference_adain: Optional[bool]` Whether to use reference adain. Only for "reference" inference type - `reference_attn: Optional[bool]` Whether to use the reference query for self-attention's context. Only for "reference" inference type - `scheduler: Optional[Literal["DDIMScheduler", "DDPMScheduler", "DEISMultistepScheduler", 12 more]]` The scheduler to use to override the default configured for the model. See detailed documentation for more details. - `"DDIMScheduler"` - `"DDPMScheduler"` - `"DEISMultistepScheduler"` - `"DPMSolverMultistepScheduler"` - `"DPMSolverSinglestepScheduler"` - `"EulerAncestralDiscreteScheduler"` - `"EulerDiscreteScheduler"` - `"HeunDiscreteScheduler"` - `"KDPM2AncestralDiscreteScheduler"` - `"KDPM2DiscreteScheduler"` - `"LCMScheduler"` - `"LMSDiscreteScheduler"` - `"PNDMScheduler"` - `"TCDScheduler"` - `"UniPCMultistepScheduler"` - `seed: Optional[str]` Used to reproduce previous results. Default: randomly generated number. 
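The `modality` string format described earlier ('name' or 'name:value' entries, comma separated, with values in ]0.0, 1.0]) can be assembled client-side. A sketch for the SD1.5 family — the modality set and the 3-modality limit come from the description above, but the helper itself is hypothetical, not part of the SDK:

```python
SD15_MODALITIES = {"canny", "pose", "depth", "lines", "seg",
                   "scribble", "lineart", "normal-map", "illusion"}

def build_modality_string(modalities: dict) -> str:
    """Build a comma-separated 'name' / 'name:value' modality string.

    `modalities` maps modality name -> optional weight in ]0.0, 1.0].
    """
    if not 0 < len(modalities) <= 3:
        raise ValueError("SD1.5 models accept between 1 and 3 modalities")
    parts = []
    for name, weight in modalities.items():
        if name not in SD15_MODALITIES:
            raise ValueError(f"unsupported SD1.5 modality: {name}")
        if weight is None:
            parts.append(name)
        elif 0.0 < weight <= 1.0:
            parts.append(f"{name}:{weight}")
        else:
            raise ValueError("modality value must be within ]0.0, 1.0]")
    return ",".join(parts)
```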
- `strength: Optional[float]` Controls the noise intensity introduced to the input image, where a value of 1.0 completely erases the original image's details. Available for img2img and inpainting. (within [0.01, 1.0], default: 0.75) - `style_fidelity: Optional[float]` If style_fidelity=1.0, the control image is more important; if style_fidelity=0.0, the prompt is more important; in between, the two are balanced. Only for "reference" inference type - `width: Optional[float]` The width of the generated images, must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, `sd-xl-composition` the width must be within [512, 2048] If model.type is `sd-1_5`, the width must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `job: Optional[ExampleJob]` The job associated with the asset - `created_at: str` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `job_id: str` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `job_type: Literal["assets-download", "canvas-export", "caption", 36 more]` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: ExampleJobMetadata` Metadata of the job with some additional information - `asset_ids: Optional[List[str]]` List of produced assets for this job - 
`error: Optional[str]` The error for the job, if any - `flow: Optional[List[ExampleJobMetadataFlow]]` The flow of the job. Only available for workflow jobs. - `id: str` The id of the node. - `status: Literal["failure", "pending", "processing", 2 more]` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: Literal["custom-model", "for-each", "generate-prompt", 7 more]` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: Optional[List[ExampleJobMetadataFlowAsset]]` List of produced assets for this node. - `asset_id: str` - `url: str` - `count: Optional[float]` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `depends_on: Optional[List[str]]` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `include_outputs_in_workflow_job: Optional[Literal[true]]` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: Optional[List[ExampleJobMetadataFlowInput]]` The inputs of the node. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: Optional[List[List[ExampleJobMetadataFlowInputItem]]]` The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. 
If the model provides multiple kinds, the API will not be able to create the asset on the fly from a data URL that lacks the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputItemRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, except if you want all outputs of the node; to get all outputs of a node, you can use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputItemRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. 
Only taken into account for `file` and `file_array` input types. If the model provides multiple kinds, the API will not be able to create the asset on the fly from a data URL that lacks the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, except if you want all outputs of the node; to get all outputs of a node, you can use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `items: Optional[List[str]]` Statically-configured items for a List node. The node outputs this array as-is when executed. 
Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iteration_index: Optional[float]` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `job_id: Optional[str]` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available for nodes that have started; a node in "Pending" status within a running workflow job has not started. - `logic: Optional[ExampleJobMetadataFlowLogic]` The logic of the node. Only available for logic nodes. - `cases: Optional[List[ExampleJobMetadataFlowLogicCase]]` The cases of the logic. Only available for if/else nodes. - `condition: str` - `value: str` - `default: Optional[str]` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: Optional[str]` The transform of the logic. Only available for transform nodes. - `logic_type: Optional[Literal["if-else"]]` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loop_body_node_ids: Optional[List[str]]` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loop_node_id: Optional[str]` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `model_id: Optional[str]` The model id for the node. Mainly used for custom model tasks. - `output: Optional[object]` The output of the node. Only available for logic nodes. - `workflow_id: Optional[str]` The workflow id for the node. Mainly used for workflow tasks. - `hint: Optional[str]` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: Optional[Dict[str, object]]` The inputs for the job - `output: Optional[Dict[str, object]]` May contain the output of the job for specific custom model jobs. 
Only available for custom models which generate non-asset outputs. Example: LLM text results. - `output_model_id: Optional[str]` For voice-clone jobs: the ID of the model being trained. - `workflow_id: Optional[str]` The workflow ID of the job if the job is part of a workflow. - `workflow_job_id: Optional[str]` The workflow job ID of the job if the job is part of a workflow job. - `progress: float` Progress of the job (between 0 and 1) - `status: Literal["canceled", "failure", "finalizing", 5 more]` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `status_history: List[ExampleJobStatusHistory]` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. - `date: str` - `status: Literal["canceled", "failure", "finalizing", 5 more]` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updated_at: str` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `author_id: Optional[str]` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: Optional[ExampleJobBilling]` The billing of the job - `cu_cost: float` - `cu_discount: float` - `owner_id: Optional[str]` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) example = client.models.examples.update( model_id="modelId", asset_ids=["string"], ) print(example.examples) ``` #### Response ```json { "examples": [ { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": 
{ "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, "lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", 
"seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 }, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } }, "modelId": "modelId", "inferenceId": "inferenceId", "inferenceParameters": { "prompt": "prompt", "type": "controlnet", "aspectRatio": "16:9", "baseModelId": "baseModelId", "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "controlEnd": 0.1, "controlImage": "controlImage", "controlImageId": "controlImageId", "controlStart": 0, "disableMerging": true, "disableModalityDetection": true, "guidance": 
0, "height": 64, "hideResults": true, "image": "image", "imageId": "imageId", "intermediateImages": true, "ipAdapterImage": "ipAdapterImage", "ipAdapterImageId": "ipAdapterImageId", "ipAdapterImageIds": [ "string" ], "ipAdapterImages": [ "string" ], "ipAdapterScale": 0, "ipAdapterScales": [ 0 ], "ipAdapterType": "character", "mask": "mask", "maskId": "maskId", "modality": "modality", "modelEpoch": "modelEpoch", "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 1, "numSamples": 1, "referenceAdain": true, "referenceAttn": true, "scheduler": "DDIMScheduler", "seed": "seed", "strength": 0.01, "styleFidelity": 0, "width": 64 }, "job": { "createdAt": "createdAt", "jobId": "jobId", "jobType": "assets-download", "metadata": { "assetIds": [ "string" ], "error": "error", "flow": [ { "id": "id", "status": "failure", "type": "custom-model", "assets": [ { "assetId": "assetId", "url": "url" } ], "count": 0, "dependsOn": [ "string" ], "includeOutputsInWorkflowJob": true, "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "items": [ [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ] ], "kind": "3d", "label": "label", 
"maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ], "items": [ "string" ], "iterationIndex": 0, "jobId": "jobId", "logic": { "cases": [ { "condition": "condition", "value": "value" } ], "default": "default", "transform": "transform" }, "logicType": "if-else", "loopBodyNodeIds": [ "string" ], "loopNodeId": "loopNodeId", "modelId": "modelId", "output": {}, "workflowId": "workflowId" } ], "hint": "hint", "input": { "foo": "bar" }, "output": { "foo": "bar" }, "outputModelId": "outputModelId", "workflowId": "workflowId", "workflowJobId": "workflowJobId" }, "progress": 0, "status": "canceled", "statusHistory": [ { "date": "date", "status": "canceled" } ], "updatedAt": "updatedAt", "authorId": "authorId", "billing": { "cuCost": 0, "cuDiscount": 0 }, "ownerId": "ownerId" } } ] } ``` ## Domain Types ### Example List Response - `class ExampleListResponse: …` - `examples: List[Example]` - `asset: ExampleAsset` Asset generated by the inference - `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collection_ids: List[str]` A list of CollectionId this asset belongs to - `created_at: str` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `edit_capabilities: List[Literal["DETECTION", "GENERATIVE_FILL", "PIXELATE", 8 more]]` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: Literal["3d", "audio", "document", 4 more]` The 
kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: ExampleAssetMetadata` Metadata of the asset with some additional information - `kind: Literal["3d", "audio", "document", 4 more]` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: Literal["3d-texture", "3d-texture-albedo", "3d-texture-metallic", 72 more]` The type of the asset. Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - 
`"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: Optional[float]` How angular is the surface? 0 is like a sphere, 1 is like a mechanical object - `aspect_ratio: Optional[str]` The optional aspect ratio given for the generation, only applicable for some models - `background_opacity: Optional[float]` Integer between 0 and 255 for the opacity of the background in the result images. - `base_model_id: Optional[str]` The baseModelId that may be changed at inference time - `bbox: Optional[List[float]]` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `better_quality: Optional[bool]` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `canny_structure_image: Optional[str]` The control image already processed by the canny detector. Must reference an existing AssetId. - `clustering: Optional[bool]` Activate clustering. - `color_correction: Optional[bool]` Ensure upscaled tiles have the same color histogram as the original tiles. - `color_mode: Optional[str]` - `color_precision: Optional[float]` - `concepts: Optional[List[ExampleAssetMetadataConcept]]` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `contours: Optional[List[List[List[List[float]]]]]` - `control_end: Optional[float]` End step for control. 
- `copied_at: Optional[str]` The date when the asset was copied to a project - `corner_threshold: Optional[float]` - `creativity: Optional[float]` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativity_decay: Optional[float]` Amount of decay in creativity over the upscale process. The lower the value, the less the creativity will be preserved over the upscale process. - `default_parameters: Optional[bool]` If true, use the default parameters - `depth_fidelity: Optional[float]` The depth fidelity if a depth image is provided - `depth_image: Optional[str]` The control image processed by the depth estimator. Must reference an existing AssetId. - `details_level: Optional[float]` Amount of details to remove or add - `dilate: Optional[float]` The number of pixels to dilate the result masks. - `factor: Optional[float]` Contrast factor for the Grayscale detector - `filter_speckle: Optional[float]` - `fractality: Optional[float]` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometry_enforcement: Optional[float]` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. 
- `guidance: Optional[float]` The guidance used to generate this asset - `half_mode: Optional[bool]` - `hdr: Optional[float]` - `height: Optional[float]` - `high_threshold: Optional[float]` High threshold for Canny detector - `horizontal_expansion_ratio: Optional[float]` (deprecated) Horizontal expansion ratio. - `image: Optional[str]` The input image to process. Must reference an existing AssetId or be a data URL. - `image_fidelity: Optional[float]` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `image_type: Optional[Literal["seamfull", "skybox", "texture"]]` Preserve the seamless properties of skybox or texture images. Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inference_id: Optional[str]` The id of the Inference describing how this image was generated - `input_fidelity: Optional[Literal["high", "low"]]` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `input_location: Optional[Literal["bottom", "left", "middle", 2 more]]` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: Optional[bool]` To invert the relief - `keypoint_threshold: Optional[float]` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layer_difference: Optional[float]` - `length_threshold: Optional[float]` - `lock_expires_at: Optional[str]` The ISO timestamp when the lock on the canvas will expire - `low_threshold: Optional[float]` Low threshold for Canny detector - `mask: Optional[str]` The mask used for the asset generation or editing - `max_iterations: Optional[float]` - `max_threshold: Optional[float]` Maximum threshold for Grayscale conversion - `min_threshold: Optional[float]` Minimum threshold for Grayscale conversion - `modality: Optional[Literal["canny", "depth", "grayscale", 7 more]]` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: Optional[str]` - `model_id: Optional[str]` The modelId used to generate this asset - `model_type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: Optional[str]` - `nb_masks: Optional[float]` - `negative_prompt: Optional[str]` The negative prompt used to 
generate this asset - `negative_prompt_strength: Optional[float]` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation. - `num_outputs: Optional[float]` The number of outputs to generate. - `original_asset_id: Optional[str]` - `output_index: Optional[float]` - `overlap_percentage: Optional[float]` Overlap percentage for the output image. - `override_embeddings: Optional[bool]` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parent_id: Optional[str]` - `parent_job_id: Optional[str]` - `path_precision: Optional[float]` - `points: Optional[List[List[float]]]` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: Optional[float]` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: Optional[str]` - `progress_percent: Optional[float]` - `prompt: Optional[str]` The prompt that guided the asset generation or editing - `prompt_fidelity: Optional[float]` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: Optional[float]` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `reference_images: Optional[List[str]]` The reference images used for the asset generation or editing - `refinement_steps: Optional[float]` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `remove_background: Optional[bool]` Remove background for Grayscale detector - `resize_option: Optional[float]` Size proportion of the input image in the output. 
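The `refinementSteps` rule above can be read as a small helper (illustrative only, not part of the SDK):

```python
def refinement_passes(scaling_factor: float, refinement_steps: int) -> int:
    """Number of refinement passes, per the refinementSteps description above.

    Illustrative helper, not part of the SDK: when scalingFactor == 1 the
    refinement process runs (1 + refinementSteps) times; when scalingFactor > 1
    it runs refinementSteps times.
    """
    return 1 + refinement_steps if scaling_factor == 1 else refinement_steps

print(refinement_passes(1, 2))  # 3 passes when no scaling is applied
```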
- `result_contours: Optional[bool]` Boolean to output the contours. - `result_image: Optional[bool]` Boolean to enable output of the cut-out object. - `result_mask: Optional[bool]` Boolean to enable returning the masks (binary images) in the response. - `root_parent_id: Optional[str]` - `save_flipbook: Optional[bool]` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scaling_factor: Optional[float]` Scaling factor (when `targetWidth` not specified) - `scheduler: Optional[str]` The scheduler used to generate this asset - `seed: Optional[str]` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: Optional[bool]` Sharpen tiles. - `shiny: Optional[float]` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: Optional[float]` - `sketch: Optional[bool]` Activate sketch detection instead of canny. - `source_project_id: Optional[str]` - `splice_threshold: Optional[float]` - `strength: Optional[float]` The strength. Only available for the `flux-kontext` LoRA model. - `structure_fidelity: Optional[float]` Strength for the input image structure preservation - `structure_image: Optional[str]` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: Optional[Literal["3d-cartoon", "3d-rendered", "anime", 23 more]]` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `style_fidelity: Optional[float]` The higher the value the more it will look like the style image(s) - `style_images: Optional[List[str]]` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `style_images_fidelity: Optional[float]` Controls the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `target_height: Optional[float]` The target height of the output image. - `target_width: Optional[float]` Target width for the upscaled image; takes priority over the scaling factor - `text: Optional[str]` A textual description / keywords describing the object of interest. - `texture: Optional[str]` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: Optional[ExampleAssetMetadataThumbnail]` The thumbnail of the canvas - `asset_id: str` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for the canvas - `tile_style: Optional[bool]` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `training_image: Optional[bool]` - `vertical_expansion_ratio: Optional[float]` (deprecated) Vertical expansion ratio. - `width: Optional[float]` The width of the rendered image. 
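Per the descriptions above, `targetWidth` takes priority over `scalingFactor` when both are present. A hedged sketch of that resolution logic (hypothetical helper, not SDK code):

```python
from typing import Optional

def resolve_output_width(input_width: float,
                         target_width: Optional[float] = None,
                         scaling_factor: Optional[float] = None) -> float:
    # Illustrative only: targetWidth takes priority over scalingFactor,
    # per the upscale parameter descriptions above.
    if target_width is not None:
        return target_width
    return input_width * (scaling_factor if scaling_factor is not None else 1)

print(resolve_output_width(512, target_width=1024))  # targetWidth wins
print(resolve_output_width(512, scaling_factor=2))   # falls back to scaling
```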
- `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: ExampleAssetProperties` The properties of the asset; content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if a skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[ExampleAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: Literal["error", "pending", "success"]` The actual status - `"error"` - `"pending"` - `"success"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `updated_at: str` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: str` Signed URL to get the asset content - `automatic_captioning: Optional[str]` Automatic captioning of the asset - `description: Optional[str]` The description; it will contain, in priority order: - the manual description - the advanced captioning when the asset is used in a training flow - the automatic captioning - `embedding: Optional[List[float]]` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not Detection maps) - `first_frame: Optional[ExampleAssetFirstFrame]` The video asset's first frame. Contains the assetId and the url of the first frame. - `asset_id: str` - `url: str` - `is_hidden: Optional[bool]` Whether the asset is hidden. - `last_frame: Optional[ExampleAssetLastFrame]` The video asset's last frame. Contains the assetId and the url of the last frame. - `asset_id: str` - `url: str` - `nsfw: Optional[List[str]]` The NSFW labels - `original_file_url: Optional[str]` The original file url. Contains the url of the original file, without any conversion. Only available for some specific video, audio and threeD assets. It is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
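The `description` priority order above can be sketched as a small helper (hypothetical, not part of the SDK; argument names mirror the documented sources):

```python
from typing import Optional

def effective_description(manual: Optional[str],
                          advanced_captioning: Optional[str],
                          automatic_captioning: Optional[str],
                          in_training_flow: bool = False) -> Optional[str]:
    # Illustrative reading of the documented priority: the manual description
    # first, then the advanced captioning (training flow only), then the
    # automatic captioning.
    if manual:
        return manual
    if in_training_flow and advanced_captioning:
        return advanced_captioning
    return automatic_captioning

print(effective_description(None, "a knight", "an image", in_training_flow=True))
```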
- `output_index: Optional[float]` The output index of the asset within a job. This index is a non-negative integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: Optional[ExampleAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[ExampleAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `model_id: str` Model id of the model used to generate the asset - `inference_id: Optional[str]` Inference id of the inference used to generate the asset - `inference_parameters: Optional[ExampleInferenceParameters]` The inference parameters used to generate the asset - `prompt: str` Full text prompt including the model placeholder. (example: "an illustration of phoenix in a fantasy world, flying over a mountain, 8k, bokeh effect") - `type: Literal["controlnet", "controlnet_img2img", "controlnet_inpaint", 15 more]` The type of inference to use. Example: txt2img, img2img, etc. Selecting the right type will condition the expected parameters. Note: if model.type is `sd-xl*` or `sd-1_5*`, when using the `"inpaint"` inference type, Scenario determines the best available `baseModel` for a given `modelId`: one of `["stable-diffusion-inpainting", "stable-diffusion-xl-1.0-inpainting-0.1"]` will be used. - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `aspect_ratio: Optional[Literal["16:9", "1:1", "21:9", 8 more]]` The aspect ratio of the generated images. 
Only used for the model flux.1.1-pro-ultra. The aspect ratio is a string formatted as "width:height" (example: "16:9"). - `"16:9"` - `"1:1"` - `"21:9"` - `"2:3"` - `"3:2"` - `"3:4"` - `"4:3"` - `"4:5"` - `"5:4"` - `"9:16"` - `"9:21"` - `base_model_id: Optional[str]` The base model to use for the inference. Only Flux LoRA models can use this parameter. Allowed values are available in the model's attribute: `compliantModelIds` - `concepts: Optional[List[ExampleInferenceParametersConcept]]` - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `control_end: Optional[float]` Specifies how long the ControlNet guidance should be applied during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance is active. For example: - 1.0: ControlNet guidance is applied during all inference steps - 0.5: ControlNet guidance is only applied during the first half of inference steps Default values: - 0.5 for Canny modality - 0.6 for all other modalities - `control_image: Optional[str]` Signed URL to display the controlnet input image - `control_image_id: Optional[str]` Asset id of the controlnet input image - `control_start: Optional[float]` Specifies the starting point of the ControlNet guidance during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance starts. For example: - 0.0: ControlNet guidance starts at the beginning of the inference steps - 0.5: ControlNet guidance starts at the middle of the inference steps - `disable_merging: Optional[bool]` If set to true, the entire input image will likely change during inpainting. 
This results in faster inferences, but the output image will be harder to integrate if the input is just a small part of a larger image. - `disable_modality_detection: Optional[bool]` If false, the process uses the given image to detect the modality. If true (default), the process will not try to detect the modality of the given image. For example: with `pose` modality and `false` value, the process will detect the pose of people in the given image; with `depth` modality and `false` value, the process will detect the depth of the given image; with `scribble` modality and `true` value, the process will use the given image as a scribble ⚠️ For models of the FLUX schnell or dev families, this parameter is ignored. The modality detection is always disabled. ⚠️ - `guidance: Optional[float]` Controls how closely the generated image follows the prompt. Higher values result in stronger adherence to the prompt. Default and allowed values depend on the model type: - For Flux dev models, the default is 3.5 and allowed values are within [0, 10] - For Flux pro models, the default is 3 and allowed values are within [2, 5] - For SDXL models, the default is 6 and allowed values are within [0, 20] - For SD1.5 models, the default is 7.5 and allowed values are within [0, 20] - `height: Optional[float]` The height of the generated images, must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, `sd-xl-composition` the height must be within [512, 2048] If model.type is `sd-1_5`, the height must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `hide_results: Optional[bool]` If set, generated assets will be hidden and not returned in the list of images of the inference or when listing assets (default: false) - `image: Optional[str]` Signed URL to display the input image - `image_id: Optional[str]` Asset id of the input image - `intermediate_images: Optional[bool]` Enable or disable the 
intermediate images generation (default: false) - `ip_adapter_image: Optional[str]` Signed URL to display the IpAdapter image - `ip_adapter_image_id: Optional[str]` Asset id of the input IpAdapter image - `ip_adapter_image_ids: Optional[List[str]]` Asset id of the input IpAdapter images - `ip_adapter_images: Optional[List[str]]` Signed URL to display the IpAdapter images - `ip_adapter_scale: Optional[float]` IpAdapter scale factor (within [0.0, 1.0], default: 0.9). - `ip_adapter_scales: Optional[List[float]]` IpAdapter scale factors (within [0.0, 1.0], default: 0.9). - `ip_adapter_type: Optional[Literal["character", "style"]]` The type of IP Adapter model to use. Must be one of [`style`, `character`], defaults to `style` - `"character"` - `"style"` - `mask: Optional[str]` Signed URL to display the mask image - `mask_id: Optional[str]` Asset id of the mask image - `modality: Optional[str]` The modality associated with the control image used for the generation: it can be either a combination of modalities or a single preset. For models of the SD1.5 family: - up to 3 modalities from `canny`, `pose`, `depth`, `lines`, `seg`, `scribble`, `lineart`, `normal-map`, `illusion` - or one of the following presets: `character`, `landscape`, `city`, `interior`. For models of the SDXL family: - up to 3 modalities from `canny`, `pose`, `depth`, `seg`, `illusion`, `scribble` - or one of the following presets: `character`, `landscape`. For models of the FLUX schnell or dev families: - one modality from: `canny`, `tile`, `depth`, `blur`, `pose`, `gray`, `low-quality` Optionally, you can associate a value to these modalities or presets. The value must be within `]0.0, 1.0]`. Examples: - `canny` - `depth:0.5,pose:1.0` - `canny:0.5,depth:0.5,lines:0.3` - `landscape` - `character:0.5` - `illusion:1` Note: if you use a value that is not supported by the model family, this will result in an error. - `model_epoch: Optional[str]` The epoch of the model to use for the inference. 
Only available for Flux Lora Trained models. - `negative_prompt: Optional[str]` The prompt not to guide the image generation, ignored when guidance < 1 (example: "((ugly face))") For Flux-based models (not Fast-Flux): requires negativePromptStrength > 0 and is active only for inference types txt2img / img2img / controlnet. - `negative_prompt_strength: Optional[float]` Only applicable for flux-dev based models for `txt2img`, `img2img`, and `controlnet` inference types. Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation (within [1, 150], default: 30) - `num_samples: Optional[float]` The number of images to generate (within [1, 128], default: 4) - `reference_adain: Optional[bool]` Whether to use reference adain. Only for "reference" inference type - `reference_attn: Optional[bool]` Whether to use reference query for self attention's context. Only for "reference" inference type - `scheduler: Optional[Literal["DDIMScheduler", "DDPMScheduler", "DEISMultistepScheduler", 12 more]]` The scheduler to use to override the default configured for the model. See detailed documentation for more details. - `"DDIMScheduler"` - `"DDPMScheduler"` - `"DEISMultistepScheduler"` - `"DPMSolverMultistepScheduler"` - `"DPMSolverSinglestepScheduler"` - `"EulerAncestralDiscreteScheduler"` - `"EulerDiscreteScheduler"` - `"HeunDiscreteScheduler"` - `"KDPM2AncestralDiscreteScheduler"` - `"KDPM2DiscreteScheduler"` - `"LCMScheduler"` - `"LMSDiscreteScheduler"` - `"PNDMScheduler"` - `"TCDScheduler"` - `"UniPCMultistepScheduler"` - `seed: Optional[str]` Used to reproduce previous results. Default: randomly generated number. 
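The `modality` string described above is a comma-separated list of `name` or `name:value` entries, with values in `]0.0, 1.0]`. A minimal, hypothetical parser sketch — the allowed set shown is the SD1.5 modality list from the docs; the function name and defaults are illustrative, not part of the API:

```python
# SD1.5 modality names as documented; SDXL and FLUX families use
# different (smaller) sets — pass them via the `allowed` argument.
SD15_MODALITIES = {"canny", "pose", "depth", "lines", "seg", "scribble",
                   "lineart", "normal-map", "illusion"}

def parse_modality(spec: str, allowed=SD15_MODALITIES, max_modalities=3):
    """Parse 'canny:0.5,depth:0.5' into {'canny': 0.5, 'depth': 0.5}.
    Values must lie in ]0.0, 1.0]; a bare name defaults to 1.0 (assumption)."""
    result = {}
    for part in spec.split(","):
        name, _, raw = part.strip().partition(":")
        if name not in allowed:
            raise ValueError(f"unsupported modality: {name}")
        value = float(raw) if raw else 1.0
        if not (0.0 < value <= 1.0):
            raise ValueError(f"value out of range ]0.0, 1.0]: {value}")
        result[name] = value
    if len(result) > max_modalities:
        raise ValueError(f"at most {max_modalities} modalities allowed")
    return result
```

Presets such as `character` or `landscape` would be validated against a separate allowed set, following the same `name` / `name:value` shape.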
- `strength: Optional[float]` Controls the noise intensity introduced to the input image, where a value of 1.0 completely erases the original image's details. Available for img2img and inpainting. (within [0.01, 1.0], default: 0.75) - `style_fidelity: Optional[float]` If style_fidelity=1.0, the control image is more important; if style_fidelity=0.0, the prompt is more important; intermediate values balance the two. Only for "reference" inference type - `width: Optional[float]` The width of the generated images, must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, `sd-xl-composition` the width must be within [512, 2048] If model.type is `sd-1_5`, the width must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `job: Optional[ExampleJob]` The job associated with the asset - `created_at: str` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `job_id: str` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `job_type: Literal["assets-download", "canvas-export", "caption", 36 more]` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: ExampleJobMetadata` Metadata of the job with some additional information - `asset_ids: Optional[List[str]]` List of produced assets for this job - 
`error: Optional[str]` Eventual error for the job - `flow: Optional[List[ExampleJobMetadataFlow]]` The flow of the job. Only available for workflow jobs. - `id: str` The id of the node. - `status: Literal["failure", "pending", "processing", 2 more]` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: Literal["custom-model", "for-each", "generate-prompt", 7 more]` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: Optional[List[ExampleJobMetadataFlowAsset]]` List of produced assets for this node. - `asset_id: str` - `url: str` - `count: Optional[float]` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `depends_on: Optional[List[str]]` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `include_outputs_in_workflow_job: Optional[Literal[true]]` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: Optional[List[ExampleJobMetadataFlowInput]]` The inputs of the node. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: Optional[List[List[ExampleJobMetadataFlowInputItem]]]` The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. 
If the input allows multiple kinds, the API will not be able to create the asset on the fly from a data URL that is missing the `data:<kind>` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputItemRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, except if you want all outputs of the node. To get all outputs of a node, you can use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputItemRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. 
Only taken into account for `file` and `file_array` input types. If the input allows multiple kinds, the API will not be able to create the asset on the fly from a data URL that is missing the `data:<kind>` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, except if you want all outputs of the node. To get all outputs of a node, you can use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `items: Optional[List[str]]` Statically-configured items for a List node. The node outputs this array as-is when executed. 
Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iteration_index: Optional[float]` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `job_id: Optional[str]` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available for nodes that have started; a node "Pending" in a running workflow job has not started. - `logic: Optional[ExampleJobMetadataFlowLogic]` The logic of the node. Only available for logic nodes. - `cases: Optional[List[ExampleJobMetadataFlowLogicCase]]` The cases of the logic. Only available for if/else nodes. - `condition: str` - `value: str` - `default: Optional[str]` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: Optional[str]` The transform of the logic. Only available for transform nodes. - `logic_type: Optional[Literal["if-else"]]` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loop_body_node_ids: Optional[List[str]]` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loop_node_id: Optional[str]` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `model_id: Optional[str]` The model id for the node. Mainly used for custom model tasks. - `output: Optional[object]` The output of the node. Only available for logic nodes. - `workflow_id: Optional[str]` The workflow id for the node. Mainly used for workflow tasks. - `hint: Optional[str]` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: Optional[Dict[str, object]]` The inputs for the job - `output: Optional[Dict[str, object]]` May contain the output of the job for specific custom model jobs. 
Only available for custom models which generate non-asset outputs. Example: LLM text results. - `output_model_id: Optional[str]` For voice-clone jobs: the ID of the model being trained. - `workflow_id: Optional[str]` The workflow ID of the job if the job is part of a workflow. - `workflow_job_id: Optional[str]` The workflow job ID of the job if the job is part of a workflow job. - `progress: float` Progress of the job (between 0 and 1) - `status: Literal["canceled", "failure", "finalizing", 5 more]` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `status_history: List[ExampleJobStatusHistory]` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. - `date: str` - `status: Literal["canceled", "failure", "finalizing", 5 more]` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updated_at: str` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `author_id: Optional[str]` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: Optional[ExampleJobBilling]` The billing of the job - `cu_cost: float` - `cu_discount: float` - `owner_id: Optional[str]` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") ### Example Update Response - `class ExampleUpdateResponse: …` - `examples: List[Example]` - `asset: ExampleAsset` Asset generated by the inference - `id: str` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `author_id: str` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collection_ids: List[str]` A list of CollectionId this asset belongs to - `created_at: str` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `edit_capabilities: List[Literal["DETECTION", "GENERATIVE_FILL", "PIXELATE", 8 more]]` List of edit capabilities - 
`"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: Literal["3d", "audio", "document", 4 more]` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: ExampleAssetMetadata` Metadata of the asset with some additional information - `kind: Literal["3d", "audio", "document", 4 more]` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: Literal["3d-texture", "3d-texture-albedo", "3d-texture-metallic", 72 more]` The type of the asset. Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - 
`"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: Optional[float]` How angular is the surface? 0 is like a sphere, 1 is like a mechanical object - `aspect_ratio: Optional[str]` The optional aspect ratio given for the generation, only applicable for some models - `background_opacity: Optional[float]` Int to set between 0 and 255 for the opacity of the background in the result images. - `base_model_id: Optional[str]` The baseModelId that maybe changed at inference time - `bbox: Optional[List[float]]` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `better_quality: Optional[bool]` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `canny_structure_image: Optional[str]` The control image already processed by canny detector. Must reference an existing AssetId. - `clustering: Optional[bool]` Activate clustering. - `color_correction: Optional[bool]` Ensure upscaled tile have the same color histogram as original tile. - `color_mode: Optional[str]` - `color_precision: Optional[float]` - `concepts: Optional[List[ExampleAssetMetadataConcept]]` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `contours: Optional[List[List[List[List[float]]]]]` - `control_end: Optional[float]` End step for control. 
- `copied_at: Optional[str]` The date when the asset was copied to a project - `corner_threshold: Optional[float]` - `creativity: Optional[float]` Allows the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativity_decay: Optional[float]` Amount of decay in creativity over the upscale process. The lower the value, the less creativity is preserved over the upscale process. - `default_parameters: Optional[bool]` If true, use the default parameters - `depth_fidelity: Optional[float]` The depth fidelity if a depth image is provided - `depth_image: Optional[str]` The control image processed by depth estimator. Must reference an existing AssetId. - `details_level: Optional[float]` Amount of details to remove or add - `dilate: Optional[float]` The number of pixels to dilate the result masks. - `factor: Optional[float]` Contrast factor for Grayscale detector - `filter_speckle: Optional[float]` - `fractality: Optional[float]` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometry_enforcement: Optional[float]` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. 
- `guidance: Optional[float]` The guidance used to generate this asset - `half_mode: Optional[bool]` - `hdr: Optional[float]` - `height: Optional[float]` - `high_threshold: Optional[float]` High threshold for Canny detector - `horizontal_expansion_ratio: Optional[float]` (deprecated) Horizontal expansion ratio. - `image: Optional[str]` The input image to process. Must reference an existing AssetId or be a data URL. - `image_fidelity: Optional[float]` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `image_type: Optional[Literal["seamfull", "skybox", "texture"]]` Preserve the seamless properties of skybox or texture images. Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inference_id: Optional[str]` The id of the Inference describing how this image was generated - `input_fidelity: Optional[Literal["high", "low"]]` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `input_location: Optional[Literal["bottom", "left", "middle", 2 more]]` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: Optional[bool]` To invert the relief - `keypoint_threshold: Optional[float]` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layer_difference: Optional[float]` - `length_threshold: Optional[float]` - `lock_expires_at: Optional[str]` The ISO timestamp when the lock on the canvas will expire - `low_threshold: Optional[float]` Low threshold for Canny detector - `mask: Optional[str]` The mask used for the asset generation or editing - `max_iterations: Optional[float]` - `max_threshold: Optional[float]` Maximum threshold for Grayscale conversion - `min_threshold: Optional[float]` Minimum threshold for Grayscale conversion - `modality: Optional[Literal["canny", "depth", "grayscale", 7 more]]` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: Optional[str]` - `model_id: Optional[str]` The modelId used to generate this asset - `model_type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: Optional[str]` - `nb_masks: Optional[float]` - `negative_prompt: Optional[str]` The negative prompt used to 
generate this asset - `negative_prompt_strength: Optional[float]` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation. - `num_outputs: Optional[float]` The number of outputs to generate. - `original_asset_id: Optional[str]` - `output_index: Optional[float]` - `overlap_percentage: Optional[float]` Overlap percentage for the output image. - `override_embeddings: Optional[bool]` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parent_id: Optional[str]` - `parent_job_id: Optional[str]` - `path_precision: Optional[float]` - `points: Optional[List[List[float]]]` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: Optional[float]` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: Optional[str]` - `progress_percent: Optional[float]` - `prompt: Optional[str]` The prompt that guided the asset generation or editing - `prompt_fidelity: Optional[float]` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: Optional[float]` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `reference_images: Optional[List[str]]` The reference images used for the asset generation or editing - `refinement_steps: Optional[float]` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `remove_background: Optional[bool]` Remove background for Grayscale detector - `resize_option: Optional[float]` Size proportion of the input image in the output. 
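As a reading aid for the `refinement_steps` rule above, here is a minimal sketch of the documented pass count (`refinement_passes` is a hypothetical helper, not part of the SDK):

```python
def refinement_passes(scaling_factor: float, refinement_steps: int) -> int:
    """Number of times the refinement process runs, per the rule above."""
    if scaling_factor == 1:
        # scalingFactor == 1: refinement is applied (1 + refinementSteps) times
        return 1 + refinement_steps
    # scalingFactor > 1: refinement is applied refinementSteps times
    return refinement_steps
```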
- `result_contours: Optional[bool]` Boolean to output the contours. - `result_image: Optional[bool]` Boolean to enable outputting the cut-out object. - `result_mask: Optional[bool]` Boolean to enable returning the masks (binary image) in the response. - `root_parent_id: Optional[str]` - `save_flipbook: Optional[bool]` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scaling_factor: Optional[float]` Scaling factor (when `targetWidth` not specified) - `scheduler: Optional[str]` The scheduler used to generate this asset - `seed: Optional[str]` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: Optional[bool]` Sharpen tiles. - `shiny: Optional[float]` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: Optional[float]` - `sketch: Optional[bool]` Activate sketch detection instead of canny. - `source_project_id: Optional[str]` - `splice_threshold: Optional[float]` - `strength: Optional[float]` The strength. Only available for the `flux-kontext` LoRA model. - `structure_fidelity: Optional[float]` Strength for the input image structure preservation - `structure_image: Optional[str]` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: Optional[Literal["3d-cartoon", "3d-rendered", "anime", 23 more]]` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `style_fidelity: Optional[float]` The higher the value, the more it will look like the style image(s) - `style_images: Optional[List[str]]` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `style_images_fidelity: Optional[float]` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `target_height: Optional[float]` The target height of the output image. - `target_width: Optional[float]` Target width for the upscaled image; takes priority over the scaling factor - `text: Optional[str]` A textual description / keywords describing the object of interest. - `texture: Optional[str]` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: Optional[ExampleAssetMetadataThumbnail]` The thumbnail of the canvas - `asset_id: str` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for the canvas - `tile_style: Optional[bool]` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `training_image: Optional[bool]` - `vertical_expansion_ratio: Optional[float]` (deprecated) Vertical expansion ratio. - `width: Optional[float]` The width of the rendered image. 
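The interaction between `target_width` and `scaling_factor` described above can be sketched as follows (`resolve_upscale_width` is an illustrative helper, not an SDK function; the priority rule is the documented one):

```python
from typing import Optional

def resolve_upscale_width(source_width: int,
                          scaling_factor: float = 1.0,
                          target_width: Optional[int] = None) -> int:
    """Resolve the output width of an upscale request."""
    # targetWidth takes priority over scalingFactor when both are provided.
    if target_width is not None:
        return target_width
    return round(source_width * scaling_factor)
```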
- `mime_type: str` The mime type of the asset (example: "image/png") - `owner_id: str` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: Literal["private", "public", "unlisted"]` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: ExampleAssetProperties` The properties of the asset, content may depend on the kind of asset returned - `size: float` - `animation_frame_count: Optional[float]` Number of animation frames if animations exist - `bitrate: Optional[float]` Bitrate of the media in bits per second - `bone_count: Optional[float]` Number of bones if skeleton exists - `channels: Optional[float]` Number of channels of the audio - `classification: Optional[Literal["effect", "interview", "music", 5 more]]` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codec_name: Optional[str]` Codec name of the media - `description: Optional[str]` Description of the audio - `dimensions: Optional[List[float]]` Bounding box dimensions [width, height, depth] - `duration: Optional[float]` Duration of the media in seconds - `face_count: Optional[float]` Number of faces/triangles in the mesh - `format: Optional[str]` Format of the mesh file (e.g. 'glb', etc.) 
- `frame_rate: Optional[float]` Frame rate of the video in frames per second - `has_animations: Optional[bool]` Whether the mesh has animations - `has_normals: Optional[bool]` Whether the mesh has normal vectors - `has_skeleton: Optional[bool]` Whether the mesh has bones/skeleton - `has_u_vs: Optional[bool]` Whether the mesh has UV coordinates - `height: Optional[float]` - `nb_frames: Optional[float]` Number of frames in the video - `sample_rate: Optional[float]` Sample rate of the media in Hz - `transcription: Optional[ExampleAssetPropertiesTranscription]` Transcription of the audio - `text: str` - `vertex_count: Optional[float]` Number of vertices in the mesh - `width: Optional[float]` - `source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - 
`"texture:metallic"` - `"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: Literal["error", "pending", "success"]` The actual status - `"error"` - `"pending"` - `"success"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `updated_at: str` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: str` Signed URL to get the asset content - `automatic_captioning: Optional[str]` Automatic captioning of the asset - `description: Optional[str]` The description. It will contain, in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: Optional[List[float]]` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not Detection maps) - `first_frame: Optional[ExampleAssetFirstFrame]` The video asset's first frame. Contains the assetId and the url of the first frame. - `asset_id: str` - `url: str` - `is_hidden: Optional[bool]` Whether the asset is hidden. - `last_frame: Optional[ExampleAssetLastFrame]` The video asset's last frame. Contains the assetId and the url of the last frame. - `asset_id: str` - `url: str` - `nsfw: Optional[List[str]]` The NSFW labels - `original_file_url: Optional[str]` The original file url. Contains the url of the original file, without any conversion. Only available for some specific video, audio and threeD assets. Only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `output_index: Optional[float]` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0 - `preview: Optional[ExampleAssetPreview]` The asset's preview. Contains the assetId and the url of the preview. - `asset_id: str` - `url: str` - `thumbnail: Optional[ExampleAssetThumbnail]` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `asset_id: str` - `url: str` - `model_id: str` Model id of the model used to generate the asset - `inference_id: Optional[str]` Inference id of the inference used to generate the asset - `inference_parameters: Optional[ExampleInferenceParameters]` The inference parameters used to generate the asset - `prompt: str` Full text prompt including the model placeholder. (example: "an illustration of phoenix in a fantasy world, flying over a mountain, 8k, bokeh effect") - `type: Literal["controlnet", "controlnet_img2img", "controlnet_inpaint", 15 more]` The type of inference to use. Example: txt2img, img2img, etc. Selecting the right type will condition the expected parameters. Note: if model.type is `sd-xl*` or `sd-1_5*`, when using the `"inpaint"` inference type, Scenario determines the best available `baseModel` for a given `modelId`: one of `["stable-diffusion-inpainting", "stable-diffusion-xl-1.0-inpainting-0.1"]` will be used. - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `aspect_ratio: Optional[Literal["16:9", "1:1", "21:9", 8 more]]` The aspect ratio of the generated images. 
Only used for the model flux.1.1-pro-ultra. The aspect ratio is a string formatted as "width:height" (example: "16:9"). - `"16:9"` - `"1:1"` - `"21:9"` - `"2:3"` - `"3:2"` - `"3:4"` - `"4:3"` - `"4:5"` - `"5:4"` - `"9:16"` - `"9:21"` - `base_model_id: Optional[str]` The base model to use for the inference. Only Flux LoRA models can use this parameter. Allowed values are available in the model's attribute: `compliantModelIds` - `concepts: Optional[List[ExampleInferenceParametersConcept]]` - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `control_end: Optional[float]` Specifies how long the ControlNet guidance should be applied during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance is active. For example: - 1.0: ControlNet guidance is applied during all inference steps - 0.5: ControlNet guidance is only applied during the first half of inference steps Default values: - 0.5 for Canny modality - 0.6 for all other modalities - `control_image: Optional[str]` Signed URL to display the controlnet input image - `control_image_id: Optional[str]` Asset id of the controlnet input image - `control_start: Optional[float]` Specifies the starting point of the ControlNet guidance during the inference process. Only available for Flux.1-dev based models. The value represents the percentage of total inference steps where the ControlNet guidance starts. For example: - 0.0: ControlNet guidance starts at the beginning of the inference steps - 0.5: ControlNet guidance starts at the middle of the inference steps - `disable_merging: Optional[bool]` If set to true, the entire input image will likely change during inpainting. 
This results in faster inferences, but the output image will be harder to integrate if the input is just a small part of a larger image. - `disable_modality_detection: Optional[bool]` If false, the process uses the given image to detect the modality. If true (default), the process will not try to detect the modality of the given image. For example: with `pose` modality and `false` value, the process will detect the pose of people in the given image; with `depth` modality and `false` value, the process will detect the depth of the given image; with `scribble` modality and `true` value, the process will use the given image as a scribble ⚠️ For models of the FLUX schnell or dev families, this parameter is ignored. The modality detection is always disabled. ⚠️ - `guidance: Optional[float]` Controls how closely the generated image follows the prompt. Higher values result in stronger adherence to the prompt. Default and allowed values depend on the model type: - For Flux dev models, the default is 3.5 and allowed values are within [0, 10] - For Flux pro models, the default is 3 and allowed values are within [2, 5] - For SDXL models, the default is 6 and allowed values are within [0, 20] - For SD1.5 models, the default is 7.5 and allowed values are within [0, 20] - `height: Optional[float]` The height of the generated images, must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, `sd-xl-composition` the height must be within [512, 2048] If model.type is `sd-1_5`, the height must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `hide_results: Optional[bool]` If set, generated assets will be hidden and not returned in the list of images of the inference or when listing assets (default: false) - `image: Optional[str]` Signed URL to display the input image - `image_id: Optional[str]` Asset id of the input image - `intermediate_images: Optional[bool]` Enable or disable the 
intermediate images generation (default: false) - `ip_adapter_image: Optional[str]` Signed URL to display the IpAdapter image - `ip_adapter_image_id: Optional[str]` Asset id of the input IpAdapter image - `ip_adapter_image_ids: Optional[List[str]]` Asset ids of the input IpAdapter images - `ip_adapter_images: Optional[List[str]]` Signed URLs to display the IpAdapter images - `ip_adapter_scale: Optional[float]` IpAdapter scale factor (within [0.0, 1.0], default: 0.9). - `ip_adapter_scales: Optional[List[float]]` IpAdapter scale factors (within [0.0, 1.0], default: 0.9). - `ip_adapter_type: Optional[Literal["character", "style"]]` The type of IP Adapter model to use. Must be one of [`style`, `character`], defaults to `style` - `"character"` - `"style"` - `mask: Optional[str]` Signed URL to display the mask image - `mask_id: Optional[str]` Asset id of the mask image - `modality: Optional[str]` The modality associated with the control image used for the generation. For models of the SD1.5 family: - up to 3 modalities from `canny`, `pose`, `depth`, `lines`, `seg`, `scribble`, `lineart`, `normal-map`, `illusion` - or one of the following presets: `character`, `landscape`, `city`, `interior`. For models of the SDXL family: - up to 3 modalities from `canny`, `pose`, `depth`, `seg`, `illusion`, `scribble` - or one of the following presets: `character`, `landscape`. For models of the FLUX schnell or dev families: - one modality from: `canny`, `tile`, `depth`, `blur`, `pose`, `gray`, `low-quality` Optionally, you can associate a value to these modalities or presets. The value must be within `]0.0, 1.0]`. Examples: - `canny` - `depth:0.5,pose:1.0` - `canny:0.5,depth:0.5,lines:0.3` - `landscape` - `character:0.5` - `illusion:1` Note: if you use a value that is not supported by the model family, this will result in an error. - `model_epoch: Optional[str]` The epoch of the model to use for the inference. 
Only available for Flux Lora Trained models. - `negative_prompt: Optional[str]` The prompt not to guide the image generation, ignored when guidance < 1 (example: "((ugly face))") For Flux based models (not Fast-Flux): requires negativePromptStrength > 0 and active only for inference types txt2img / img2img / controlnet. - `negative_prompt_strength: Optional[float]` Only applicable for flux-dev based models for `txt2img`, `img2img`, and `controlnet` inference types. Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `num_inference_steps: Optional[float]` The number of denoising steps for each image generation (within [1, 150], default: 30) - `num_samples: Optional[float]` The number of images to generate (within [1, 128], default: 4) - `reference_adain: Optional[bool]` Whether to use reference adain. Only for the "reference" inference type - `reference_attn: Optional[bool]` Whether to use the reference query for the self attention's context. Only for the "reference" inference type - `scheduler: Optional[Literal["DDIMScheduler", "DDPMScheduler", "DEISMultistepScheduler", 12 more]]` The scheduler to use to override the default configured for the model. See detailed documentation for more details. - `"DDIMScheduler"` - `"DDPMScheduler"` - `"DEISMultistepScheduler"` - `"DPMSolverMultistepScheduler"` - `"DPMSolverSinglestepScheduler"` - `"EulerAncestralDiscreteScheduler"` - `"EulerDiscreteScheduler"` - `"HeunDiscreteScheduler"` - `"KDPM2AncestralDiscreteScheduler"` - `"KDPM2DiscreteScheduler"` - `"LCMScheduler"` - `"LMSDiscreteScheduler"` - `"PNDMScheduler"` - `"TCDScheduler"` - `"UniPCMultistepScheduler"` - `seed: Optional[str]` Used to reproduce previous results. Default: randomly generated number. 
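The per-family `guidance` defaults and ranges documented above can be captured in a small client-side sanity check. This is a sketch only: the family keys and the `pick_guidance` helper are illustrative, not SDK identifiers; the numeric values restate the documented defaults and ranges.

```python
# Documented guidance defaults and allowed ranges per model family.
GUIDANCE_RULES = {
    "flux-dev": {"default": 3.5, "range": (0.0, 10.0)},
    "flux-pro": {"default": 3.0, "range": (2.0, 5.0)},
    "sd-xl": {"default": 6.0, "range": (0.0, 20.0)},
    "sd-1_5": {"default": 7.5, "range": (0.0, 20.0)},
}

def pick_guidance(family: str, guidance=None) -> float:
    """Fall back to the family default, otherwise validate the requested value."""
    rules = GUIDANCE_RULES[family]
    if guidance is None:
        return rules["default"]
    lo, hi = rules["range"]
    if not lo <= guidance <= hi:
        raise ValueError(f"guidance for {family} must be within [{lo}, {hi}]")
    return guidance
```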
- `strength: Optional[float]` Controls the noise intensity introduced to the input image, where a value of 1.0 completely erases the original image's details. Available for img2img and inpainting. (within [0.01, 1.0], default: 0.75) - `style_fidelity: Optional[float]` If style_fidelity=1.0, the control is more important; if style_fidelity=0.0, the prompt is more important; intermediate values balance the two. Only for the "reference" inference type - `width: Optional[float]` The width of the generated images, must be a multiple of 8 (within [64, 2048], default: 512) If model.type is `sd-xl`, `sd-xl-lora`, `sd-xl-composition` the width must be within [512, 2048] If model.type is `sd-1_5`, the width must be within [64, 1024] If model.type is `flux.1.1-pro-ultra`, you can use the aspectRatio parameter instead - `job: Optional[ExampleJob]` The job associated with the asset - `created_at: str` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `job_id: str` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `job_type: Literal["assets-download", "canvas-export", "caption", 36 more]` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: ExampleJobMetadata` Metadata of the job with some additional information - `asset_ids: Optional[List[str]]` List of produced assets for this job - 
`error: Optional[str]` The error for the job, if any - `flow: Optional[List[ExampleJobMetadataFlow]]` The flow of the job. Only available for workflow jobs. - `id: str` The id of the node. - `status: Literal["failure", "pending", "processing", 2 more]` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: Literal["custom-model", "for-each", "generate-prompt", 7 more]` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: Optional[List[ExampleJobMetadataFlowAsset]]` List of produced assets for this node. - `asset_id: str` - `url: str` - `count: Optional[float]` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `depends_on: Optional[List[str]]` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `include_outputs_in_workflow_job: Optional[Literal[true]]` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: Optional[List[ExampleJobMetadataFlowInput]]` The inputs of the node. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: Optional[List[List[ExampleJobMetadataFlowInputItem]]]` The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. 
If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and array input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputItemRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the node is 'workflow', the name of the workflow input is required. If the node is a node id, the name is not mandatory; to get all outputs of the node, use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputItemRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. 
Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and array input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[ExampleJobMetadataFlowInputRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the node is 'workflow', the name of the workflow input is required. If the node is a node id, the name is not mandatory; to get all outputs of the node, use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[ExampleJobMetadataFlowInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `items: Optional[List[str]]` Statically-configured items for a List node. The node outputs this array as-is when executed. 
Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iteration_index: Optional[float]` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `job_id: Optional[str]` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available for nodes that have started; a node that is "Pending" in a running workflow job has not started. - `logic: Optional[ExampleJobMetadataFlowLogic]` The logic of the node. Only available for logic nodes. - `cases: Optional[List[ExampleJobMetadataFlowLogicCase]]` The cases of the logic. Only available for if/else nodes. - `condition: str` - `value: str` - `default: Optional[str]` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: Optional[str]` The transform of the logic. Only available for transform nodes. - `logic_type: Optional[Literal["if-else"]]` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loop_body_node_ids: Optional[List[str]]` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loop_node_id: Optional[str]` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `model_id: Optional[str]` The model id for the node. Mainly used for custom model tasks. - `output: Optional[object]` The output of the node. Only available for logic nodes. - `workflow_id: Optional[str]` The workflow id for the node. Mainly used for workflow tasks. - `hint: Optional[str]` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: Optional[Dict[str, object]]` The inputs for the job - `output: Optional[Dict[str, object]]` May contain the output of the job for specific custom models jobs. 
Only available for custom models which generate non-asset outputs. Example: LLM text results. - `output_model_id: Optional[str]` For voice-clone jobs: the ID of the model being trained. - `workflow_id: Optional[str]` The workflow ID of the job if job is part of a workflow. - `workflow_job_id: Optional[str]` The workflow job ID of the job if job is part of a workflow job. - `progress: float` Progress of the job (between 0 and 1) - `status: Literal["canceled", "failure", "finalizing", 5 more]` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `status_history: List[ExampleJobStatusHistory]` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. - `date: str` - `status: Literal["canceled", "failure", "finalizing", 5 more]` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updated_at: str` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `author_id: Optional[str]` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: Optional[ExampleJobBilling]` The billing of the job - `cu_cost: float` - `cu_discount: float` - `owner_id: Optional[str]` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") # Train ## Trigger `models.train.trigger(strmodel_id, TrainTriggerParams**kwargs) -> TrainTriggerResponse` **put** `/models/{modelId}/train` Trigger training for the given `modelId` ### Parameters - `model_id: str` - `dry_run: Optional[object]` - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation - `training_images_count: Optional[int]` Simulate the number of training images, used for dryRun purposes - `parameters: Optional[Parameters]` - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs 
voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and will increase the learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA 
training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[Sequence[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[Sequence[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The ratio of training steps used to train the text encoder Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters ### Returns - `class TrainTriggerResponse: …` - `job: Job` - `created_at: str` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `job_id: str` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `job_type: Literal["assets-download", "canvas-export", "caption", 36 more]` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: JobMetadata` Metadata of the job with some 
additional information - `asset_ids: Optional[List[str]]` List of produced assets for this job - `error: Optional[str]` Eventual error for the job - `flow: Optional[List[JobMetadataFlow]]` The flow of the job. Only available for workflow jobs. - `id: str` The id of the node. - `status: Literal["failure", "pending", "processing", 2 more]` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: Literal["custom-model", "for-each", "generate-prompt", 7 more]` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: Optional[List[JobMetadataFlowAsset]]` List of produced assets for this node. - `asset_id: str` - `url: str` - `count: Optional[float]` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `depends_on: Optional[List[str]]` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `include_outputs_in_workflow_job: Optional[Literal[true]]` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: Optional[List[JobMetadataFlowInput]]` The inputs of the node. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: Optional[List[List[JobMetadataFlowInputItem]]]` The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type. 
- `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. 
If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and array input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[JobMetadataFlowInputItemRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the node is 'workflow', the name of the workflow input is required. If the node is a node id, the name is not mandatory; to get all outputs of the node, use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[JobMetadataFlowInputItemRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. 
Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and array input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. 
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[JobMetadataFlowInputRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input to reference is required. If the type is 'node', the name is not mandatory, except if you want all outputs of the node. To get all outputs of a node, you can use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[JobMetadataFlowInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value. By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `items: Optional[List[str]]` Statically-configured items for a List node. The node outputs this array as-is when executed. 
Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iteration_index: Optional[float]` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `job_id: Optional[str]` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available for nodes started. A node "Pending" for a running workflow job is not started. - `logic: Optional[JobMetadataFlowLogic]` The logic of the node. Only available for logic nodes. - `cases: Optional[List[JobMetadataFlowLogicCase]]` The cases of the logic. Only available for if/else nodes. - `condition: str` - `value: str` - `default: Optional[str]` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: Optional[str]` The transform of the logic. Only available for transform nodes. - `logic_type: Optional[Literal["if-else"]]` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loop_body_node_ids: Optional[List[str]]` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loop_node_id: Optional[str]` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `model_id: Optional[str]` The model id for the node. Mainly used for custom model tasks. - `output: Optional[object]` The output of the node. Only available for logic nodes. - `workflow_id: Optional[str]` The workflow id for the node. Mainly used for workflow tasks. - `hint: Optional[str]` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: Optional[Dict[str, object]]` The inputs for the job - `output: Optional[Dict[str, object]]` May contain the output of the job for specific custom models jobs. 
Only available for custom models which generate non-asset outputs. Example: LLM text results. - `output_model_id: Optional[str]` For voice-clone jobs: the ID of the model being trained. - `workflow_id: Optional[str]` The workflow ID of the job if the job is part of a workflow. - `workflow_job_id: Optional[str]` The workflow job ID of the job if the job is part of a workflow job. - `progress: float` Progress of the job (between 0 and 1) - `status: Literal["canceled", "failure", "finalizing", 5 more]` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `status_history: List[JobStatusHistory]` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. - `date: str` - `status: Literal["canceled", "failure", "finalizing", 5 more]` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updated_at: str` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `author_id: Optional[str]` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: Optional[JobBilling]` The billing of the job - `cu_cost: float` - `cu_discount: float` - `owner_id: Optional[str]` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - 
`"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - 
`"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example images URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts is required for the type model: composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. 
- `model_epoch: Optional[str]` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux LoRA trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux LoRA trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The URL of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. 
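As an illustration of the `inputs` schema above, a `string` input with `allowedValues` renders as a single-select dropdown. A hypothetical input definition (all values are made up for illustration):

```python
# Hypothetical custom-model input: a single-select dropdown.
style_input = {
    "name": "style",          # the name used to call the model through the API
    "type": "string",
    "label": "Art style",     # label displayed in the UI
    "allowedValues": ["watercolor", "pixel-art", "line-art"],
    "default": "watercolor",
    "description": "Overall rendering style for the generation.",
}

# A reasonable client-side check: the default should be an allowed value.
assert style_input["default"] in style_input["allowedValues"]
```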
- `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that lacks the `data:<kind>` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and `array` input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. 
- `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
- `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
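The four `required` rules above can be read as a predicate over the inputs a user has supplied. A rough client-side pre-validation sketch (the helper is not part of the SDK, and the shape of the `conditionalValues` condition object is assumed to carry a `values` list):

```python
def is_required(rule: dict, inputs: dict) -> bool:
    """Evaluate a `required` rule against the inputs supplied so far.
    Sketch only: the conditionalValues comparison is simplified to
    membership in an assumed `values` list."""
    if rule.get("always"):
        return True
    # ifNotDefined: required when any referenced input is missing.
    if any(name not in inputs for name in rule.get("ifNotDefined", {})):
        return True
    # ifDefined: required when any referenced input is present.
    if any(name in inputs for name in rule.get("ifDefined", {})):
        return True
    # conditionalValues: required when another input holds a triggering value.
    for name, cond in rule.get("conditionalValues", {}).items():
        if inputs.get(name) in cond.get("values", []):
            return True
    return False

# Example: a mask input that is only required once a source image is set.
rule = {"ifDefined": {"image": "A mask is required when an image is provided"}}
assert is_required(rule, {"image": "asset_123"})
assert not is_required(rule, {})
```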
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning). Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size means fewer steps and increases the learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images. Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha). Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog"). Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning). Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code). Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period). Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder. Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: 
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights And Bias key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output 
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. 
To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (ie. the number of job in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in millisecond marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queued duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` 
The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as a metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position specified, the button will be displayed at the specified position. Do not specify both position and after. 
- `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `creative_units_cost: Optional[float]` The Creative Units cost for the request billed - `creative_units_discount: Optional[float]` The Creative Units discount for the request billed ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) response = client.models.train.trigger( model_id="modelId", ) print(response.job) ``` #### Response ```json { "job": { "createdAt": "createdAt", "jobId": "jobId", "jobType": "assets-download", "metadata": { "assetIds": [ "string" ], "error": "error", "flow": [ { "id": "id", "status": "failure", "type": "custom-model", "assets": [ { "assetId": "assetId", "url": "url" } ], "count": 0, "dependsOn": [ "string" ], "includeOutputsInWorkflowJob": true, "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "items": [ [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ] ], "kind": "3d", "label": 
"label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ], "items": [ "string" ], "iterationIndex": 0, "jobId": "jobId", "logic": { "cases": [ { "condition": "condition", "value": "value" } ], "default": "default", "transform": "transform" }, "logicType": "if-else", "loopBodyNodeIds": [ "string" ], "loopNodeId": "loopNodeId", "modelId": "modelId", "output": {}, "workflowId": "workflowId" } ], "hint": "hint", "input": { "foo": "bar" }, "output": { "foo": "bar" }, "outputModelId": "outputModelId", "workflowId": "workflowId", "workflowJobId": "workflowJobId" }, "progress": 0, "status": "canceled", "statusHistory": [ { "date": "date", "status": "canceled" } ], "updatedAt": "updatedAt", "authorId": "authorId", "billing": { "cuCost": 0, "cuDiscount": 0 }, "ownerId": "ownerId" }, "model": { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": 
[ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", "ownerId": "ownerId", "parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } 
], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, "trainingStats": { "endedAt": "endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" }, "creativeUnitsCost": 0, "creativeUnitsDiscount": 0 } ``` ## Action `models.train.action(strmodel_id, TrainActionParams**kwargs) -> TrainActionResponse` **post** `/models/{modelId}/train/action` Trigger an action on a model training: cancel ### Parameters - `model_id: str` - `action: Literal["cancel"]` The action to perform on the model training - `"cancel"` - `original_assets: Optional[bool]` If set to true, returns the original asset without transformation ### Returns - `class TrainActionResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", 
"img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` 
- `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example images URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: 
Optional[List[ModelConcept]]` Concepts are required for composition-type models - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001"). Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. 
- `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL without the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and array input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for `string` inputs. Also applies to each item in `string_array`. 
- `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
- `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. 
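The four `required` rule types above can be evaluated client-side before submitting a job. The sketch below is a minimal, hypothetical illustration: the helper `is_input_required` is not part of the SDK, and the dict shape assumed for `conditionalValues` entries (`{"values": [...]}`) is an assumption based on the description above.

```python
from typing import Any, Dict

def is_input_required(rules: Dict[str, Any], values: Dict[str, Any]) -> bool:
    """Evaluate a ModelInput 'required' rule set against supplied input values.

    `rules` mirrors the camelCase API shape: always, ifDefined,
    ifNotDefined, conditionalValues. By default an input is not required.
    """
    if rules.get("always"):
        return True
    # Required when any listed input IS defined
    for other in rules.get("ifDefined", {}):
        if values.get(other) is not None:
            return True
    # Required when any listed input is NOT defined
    for other in rules.get("ifNotDefined", {}):
        if values.get(other) is None:
            return True
    # Required when another input holds one of the triggering values
    # (assumed shape: {"values": [...]} per conditionalValues entry)
    for other, cond in rules.get("conditionalValues", {}).items():
        if values.get(other) in cond.get("values", []):
            return True
    return False

# A mask input that is only required when a source image is provided:
rules = {"ifDefined": {"image": "A mask is required when an image is set"}}
print(is_input_required(rules, {"image": "asset_123"}))  # True
print(is_input_required(rules, {}))                      # False
```

The per-rule messages ("Value: message to display...") are only used for UI display; the requirement check itself depends solely on which keys are defined.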
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning). Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size means fewer steps and will increase the learning rate. Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images. Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha). Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog"). Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning). Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code). Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period). Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder. Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: 
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` Whether to train the text encoder or not. Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps. Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps`. Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value. Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt. Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics. Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output 
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. 
To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queued duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str` 
The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId). If specified, the model id will be attached to the output asset as metadata. If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at the specified position. Do not specify both position and after. 
- `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") ### Example ```python import os from scenario_sdk import Scenario client = Scenario( api_key=os.environ.get("SCENARIO_SDK_API_KEY"), # This is the default and can be omitted api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"), # This is the default and can be omitted ) response = client.models.train.action( model_id="modelId", action="cancel", ) print(response.model) ``` #### Response ```json { "model": { "id": "id", "capabilities": [ "3d23d" ], "collectionIds": [ "string" ], "createdAt": "createdAt", "custom": true, "exampleAssetIds": [ "string" ], "privacy": "private", "source": "civitai", "status": "copying", "tags": [ "string" ], "trainingImagesNumber": 0, "type": "custom", "updatedAt": "updatedAt", "accessRestrictions": 0, "authorId": "authorId", "class": { "category": "category", "conceptPrompt": "conceptPrompt", "modelId": "modelId", "name": "name", "prompt": "prompt", "slug": "slug", "status": "published", "thumbnails": [ "string" ] }, "compliantModelIds": [ "string" ], "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "epoch": "epoch", "epochs": [ { "epoch": "epoch", "assets": [ { "assetId": "assetId", "url": "url" } ] } ], "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1 } ], "modelKeyword": "modelKeyword", "name": "name", "negativePromptEmbedding": "negativePromptEmbedding", 
"ownerId": "ownerId", "parameters": { "age": "age", "batchSize": 1, "classPrompt": "classPrompt", "cloneType": "cloneType", "conceptPrompt": "conceptPrompt", "gender": "gender", "language": "language", "learningRate": 1, "learningRateTextEncoder": 0.0005, "learningRateUnet": 1, "lrScheduler": "constant", "maxTrainSteps": 0, "nbEpochs": 1, "nbRepeats": 1, "numTextTrainSteps": 0, "numUNetTrainSteps": 0, "optimizeFor": "likeness", "priorLossWeight": 1, "randomCrop": true, "randomCropRatio": 0, "randomCropScale": 0, "rank": 2, "removeBackgroundNoise": true, "samplePrompts": [ "string" ], "sampleSourceImages": [ "string" ], "scaleLr": true, "seed": 0, "textEncoderTrainingRatio": 0, "validationFrequency": 0, "validationPrompt": "validationPrompt", "voiceDescription": "voiceDescription", "wandbKey": "wandbKey" }, "parentModelId": "parentModelId", "performanceStats": { "variants": [ { "capability": "capability", "computedAt": "computedAt", "variantKey": "variantKey", "arenaScore": { "arenaCategory": "arenaCategory", "arenaModelName": "arenaModelName", "fetchedAt": "fetchedAt", "rank": 0, "rating": 0, "ratingLower": 0, "ratingUpper": 0, "votes": 0 }, "costPerAssetMaxCU": 0, "costPerAssetMinCU": 0, "costPerAssetP50CU": 0, "inferenceLatencyP50Sec": 0, "inferenceLatencyP75Sec": 0, "resolution": "resolution", "totalLatencyP50Sec": 0, "totalLatencyP75Sec": 0 } ], "default": "default" }, "promptEmbedding": "promptEmbedding", "shortDescription": "shortDescription", "softDeletionOn": "softDeletionOn", "thumbnail": { "assetId": "assetId", "url": "url" }, "trainingImagePairs": [ { "instruction": "instruction", "sourceId": "sourceId", "targetId": "targetId" } ], "trainingImages": [ { "id": "id", "automaticCaptioning": "automaticCaptioning", "createdAt": "createdAt", "description": "description", "downloadUrl": "downloadUrl", "name": "name" } ], "trainingProgress": { "stage": "pending", "updatedAt": 0, "position": 0, "progress": 0, "remainingTimeMs": 0, "startedAt": 0 }, 
"trainingStats": { "endedAt": "endedAt", "queueDuration": 0, "startedAt": "startedAt", "trainDuration": 0 }, "uiConfig": { "inputProperties": { "foo": { "collapsed": true } }, "lorasComponent": { "label": "label", "modelInput": "modelInput", "scaleInput": "scaleInput", "modelIdInput": "modelIdInput" }, "presets": [ { "fields": [ "string" ], "presets": {} } ], "resolutionComponent": { "heightInput": "heightInput", "label": "label", "presets": [ { "height": 0, "label": "label", "width": 0 } ], "widthInput": "widthInput" }, "selects": { "foo": {} }, "triggerGenerate": { "label": "label", "after": "after", "position": "bottom" } }, "userId": "userId" } } ``` ## Domain Types ### Train Trigger Response - `class TrainTriggerResponse: …` - `job: Job` - `created_at: str` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `job_id: str` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `job_type: Literal["assets-download", "canvas-export", "caption", 36 more]` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: JobMetadata` Metadata of the job with some additional information - `asset_ids: Optional[List[str]]` List of produced assets for this job - `error: Optional[str]` Eventual error for the job - `flow: Optional[List[JobMetadataFlow]]` The 
flow of the job. Only available for workflow jobs. - `id: str` The id of the node. - `status: Literal["failure", "pending", "processing", 2 more]` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: Literal["custom-model", "for-each", "generate-prompt", 7 more]` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: Optional[List[JobMetadataFlowAsset]]` List of produced assets for this node. - `asset_id: str` - `url: str` - `count: Optional[float]` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `depends_on: Optional[List[str]]` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `include_outputs_in_workflow_job: Optional[Literal[true]]` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: Optional[List[JobMetadataFlowInput]]` The inputs of the node. - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input.
For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: Optional[List[List[JobMetadataFlowInputItem]]]` The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type. - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input.
Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless a `data:<kind>,` prefix is provided - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types.
- `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[JobMetadataFlowInputItemRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. 
- `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. - `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, unless you want all outputs of the node. To get all outputs of a node, use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[JobMetadataFlowInputItemRequired]` Set of rules that describe when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type.
- `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless a `data:<kind>,` prefix is provided - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `ref: Optional[JobMetadataFlowInputRef]` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: Optional[List[str]]` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. 
- `equal: Optional[str]` This is the desired node output value if ref is an if/else node. - `name: Optional[str]` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, unless you want all outputs of the node. To get all outputs of a node, use the name 'all'. - `node: Optional[str]` The node id or 'workflow' if the source is a workflow input. - `required: Optional[JobMetadataFlowInputRequired]` Set of rules that describe when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `value: Optional[object]` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `items: Optional[List[str]]` Statically-configured items for a List node. The node outputs this array as-is when executed.
Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iteration_index: Optional[float]` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `job_id: Optional[str]` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available for nodes that have started. A node with status "Pending" in a running workflow job has not started. - `logic: Optional[JobMetadataFlowLogic]` The logic of the node. Only available for logic nodes. - `cases: Optional[List[JobMetadataFlowLogicCase]]` The cases of the logic. Only available for if/else nodes. - `condition: str` - `value: str` - `default: Optional[str]` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: Optional[str]` The transform of the logic. Only available for transform nodes. - `logic_type: Optional[Literal["if-else"]]` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loop_body_node_ids: Optional[List[str]]` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loop_node_id: Optional[str]` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `model_id: Optional[str]` The model id for the node. Mainly used for custom model tasks. - `output: Optional[object]` The output of the node. Only available for logic nodes. - `workflow_id: Optional[str]` The workflow id for the node. Mainly used for workflow tasks. - `hint: Optional[str]` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: Optional[Dict[str, object]]` The inputs for the job - `output: Optional[Dict[str, object]]` May contain the output of the job for specific custom model jobs.
Only available for custom models which generate non-asset outputs. Example: LLM text results. - `output_model_id: Optional[str]` For voice-clone jobs: the ID of the model being trained. - `workflow_id: Optional[str]` The workflow ID of the job if the job is part of a workflow. - `workflow_job_id: Optional[str]` The workflow job ID of the job if the job is part of a workflow job. - `progress: float` Progress of the job (between 0 and 1) - `status: Literal["canceled", "failure", "finalizing", 5 more]` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `status_history: List[JobStatusHistory]` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. - `date: str` - `status: Literal["canceled", "failure", "finalizing", 5 more]` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updated_at: str` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `author_id: Optional[str]` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: Optional[JobBilling]` The billing of the job - `cu_cost: float` - `cu_discount: float` - `owner_id: Optional[str]` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` -
`"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` - `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - 
`"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt (example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example images URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts is required for the type model: composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. 
- `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux Lora Trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux Lora Trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models. To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type.
- `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless a `data:<kind>,` prefix is provided - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`.
- `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. 
- `required: Optional[ModelInputRequired]` Set of rules that describe when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. - `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type.
- `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size results in fewer steps and increases the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet:
Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. 
Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. 
Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The fraction of training steps used to train the text encoder Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging. The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output
asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - `short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. 
To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (i.e. the number of jobs in the queue before this one) - `progress: Optional[float]` The progress of the job - `remaining_time_ms: Optional[float]` The remaining time in milliseconds - `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process - `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training - `ended_at: Optional[str]` The training end time as an ISO date string - `queue_duration: Optional[float]` The training queue duration in seconds - `started_at: Optional[str]` The training start time as an ISO date string - `train_duration: Optional[float]` The training duration in seconds - `ui_config: Optional[ModelUiConfig]` The UI configuration for the model - `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties - `collapsed: Optional[bool]` - `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component - `label: str` The label of the component - `model_input: str`
The input name of the model (model_array) - `scale_input: str` The input name of the scale (number_array) - `model_id_input: Optional[str]` The input model id (example: a composition or a single LoRA modelId) If specified, the model id will be attached to the output asset as metadata If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated - `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets - `fields: List[str]` - `presets: object` - `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component - `height_input: str` The input name of the height - `label: str` The label of the component - `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets - `height: float` - `label: str` - `width: float` - `width_input: str` The input name of the width - `selects: Optional[Dict[str, object]]` Configuration for the selects - `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button - `label: str` - `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after. - `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at the specified position. Do not specify both position and after.
- `"bottom"` - `"top"` - `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `creative_units_cost: Optional[float]` The Creative Units cost for the request billed - `creative_units_discount: Optional[float]` The Creative Units discount for the request billed ### Train Action Response - `class TrainActionResponse: …` - `model: Model` - `id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `capabilities: List[Literal["3d23d", "audio2audio", "audio2video", 29 more]]` List of model capabilities (example: ["txt2img", "img2img", "txt2img_ip_adapter", ...]) - `"3d23d"` - `"audio2audio"` - `"audio2video"` - `"controlnet"` - `"controlnet_img2img"` - `"controlnet_inpaint"` - `"controlnet_inpaint_ip_adapter"` - `"controlnet_ip_adapter"` - `"controlnet_reference"` - `"controlnet_texture"` - `"img23d"` - `"img2img"` - `"img2img_ip_adapter"` - `"img2img_texture"` - `"img2txt"` - `"img2video"` - `"inpaint"` - `"inpaint_ip_adapter"` - `"outpaint"` - `"reference"` - `"reference_texture"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2img_ip_adapter"` - `"txt2img_texture"` - `"txt2txt"` - `"txt2video"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `collection_ids: List[str]` A list of CollectionId this model belongs to - `created_at: str` The model creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `custom: bool` Whether the model is a custom model and can be used only with POST /generate/custom/{modelId} endpoint - `example_asset_ids: List[str]` List of all example asset IDs setup by the model owner - `privacy: Literal["private", "public", "unlisted"]` The privacy of the model (default: private) - `"private"` - `"public"` - `"unlisted"` - `source: Literal["civitai", "huggingface", "other", "scenario"]` The source of the model - `"civitai"` - `"huggingface"` - `"other"` - `"scenario"` - `status: Literal["copying", "failed", "new", 3 more]` The model status - `"copying"` - `"failed"` 
- `"new"` - `"trained"` - `"training"` - `"training-canceled"` - `tags: List[str]` The associated tags (example: ["sci-fi", "landscape"]) - `training_images_number: float` The total number of training images - `type: Literal["custom", "elevenlabs-voice", "flux.1", 34 more]` The model type (example: "flux.1-lora") - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `updated_at: str` The model last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `access_restrictions: Optional[Literal[0, 100, 25, 2 more]]` The access restrictions of the model 0: Free plan 25: Creator plan 50: Pro plan 75: Team plan 100: Enterprise plan - `0` - `100` - `25` - `50` - `75` - `author_id: Optional[str]` The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q") - `class_: Optional[ModelClass]` The class of the model - `category: str` The category slug of the class (example: "art-style") - `concept_prompt: str` The concept prompt of the class (example: "a sks character design") - `model_id: str` The model ID of the class (example: "stable-diffusion-v1-5") - `name: str` The class name (example: "Character Design") - `prompt: str` The class prompt 
(example: "a character design") - `slug: str` The class slug (example: "art-style-character-design") - `status: Literal["published", "unpublished"]` The class status (only published classes are listed, but unpublished classes can still appear in existing models) - `"published"` - `"unpublished"` - `thumbnails: List[str]` Some example image URLs to showcase the class - `compliant_model_ids: Optional[List[str]]` List of base model IDs compliant with the model (example: ["flux.1-dev", "flux.1-schnell"]) This attribute is mainly used for Flux LoRA models - `concepts: Optional[List[ModelConcept]]` The concepts list is required for models of type composition - `model_id: str` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: float` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `model_epoch: Optional[str]` The epoch of the model (example: "000001") Only available for Flux LoRA trained models - `epoch: Optional[str]` The epoch of the model. Only available for Flux LoRA trained models. If not set, uses the final model epoch (latest) - `epochs: Optional[List[ModelEpoch]]` The epochs of the model. Only available for Flux LoRA trained models. - `epoch: str` The epoch hash to identify the epoch - `assets: Optional[List[ModelEpochAsset]]` The assets of the epoch if sample prompts have been supplied during training - `asset_id: str` The AssetId of the image during training (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the asset - `inputs: Optional[List[ModelInput]]` The inputs of the model. Only used for custom models.
To retrieve this list, get it by modelId with GET /models/{modelId} - `name: str` The name that must be used to call the model through the API - `type: Literal["boolean", "file", "file_array", 7 more]` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowed_values: Optional[List[object]]` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `background_behavior: Optional[Literal["opaque", "transparent"]]` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: Optional[bool]` Whether the input is a color or not. Only available for `string` input type. - `cost_impact: Optional[bool]` Whether this input affects the model's cost calculation - `default: Optional[object]` The default value for the input - `description: Optional[str]` Help text displayed in the UI to provide additional information about the input - `group: Optional[str]` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: Optional[str]` Hint text displayed in the UI as a tooltip to guide the user - `inputs: Optional[List[Dict[str, object]]]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: Optional[Literal["3d", "audio", "document", 4 more]]` The asset kind of the input. Only taken into account for `file` and `file_array` input types.
If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that lacks the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: Optional[str]` The label displayed in the UI for this input - `mask_from: Optional[str]` The name of the file input field to use as the mask source - `max: Optional[float]` The maximum allowed value. Only available for `number` and `array` input types. - `max_length: Optional[float]` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `max_size: Optional[float]` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: Optional[float]` The minimum allowed value. Only available for `number` and array input types. - `min_length: Optional[float]` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.
- `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: Optional[bool]` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: Optional[str]` Placeholder text for the input. Only available for 'string' input type. - `prompt: Optional[bool]` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `prompt_spark: Optional[bool]` Whether the input is used with prompt spark. Only available for `string` input type. - `required: Optional[ModelInputRequired]` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: Optional[bool]` Whether the input is always required - `conditional_values: Optional[object]` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `if_defined: Optional[object]` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `if_not_defined: Optional[object]` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: Optional[float]` The step increment for numeric inputs. Only available for `number` input type. - `model_keyword: Optional[str]` The model keyword. This is a legacy parameter; please use conceptPrompt in parameters instead - `name: Optional[str]` The model name (example: "Cinematic Realism") - `negative_prompt_embedding: Optional[str]` Fine-tune the model's inferences with negative prompt embedding - `owner_id: Optional[str]` The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q") - `parameters: Optional[ModelParameters]` The parameters of the model - `age: Optional[str]` Age group of the voice (for professional cloning) Only available for ElevenLabs voice training - `batch_size: Optional[float]` The batch size. A larger batch size requires fewer steps and will increase the learning rate Only available for Flux LoRA training - `class_prompt: Optional[str]` The prompt to specify images in the same class as provided instance images Only available for SD15 training - `clone_type: Optional[str]` Type of voice cloning: "instant" (fast) or "professional" (higher quality, requires captcha) Only available for ElevenLabs voice training - `concept_prompt: Optional[str]` The prompt with identifier specifying the instance (or subject) of the class (example: "a daiton dog") Default value varies depending on the model type: - For SD1.5: "daiton" if no
class is associated with the model - For SDXL: "daiton" - For Flux: "" - `gender: Optional[str]` Gender of the voice (for professional cloning) Only available for ElevenLabs voice training - `language: Optional[str]` Language of the audio samples (ISO 639-1 code) Only available for ElevenLabs voice training - `learning_rate: Optional[float]` Initial learning rate (after the potential warmup period) Default value varies depending on the model type: - For SD1.5 and SDXL: 0.000005 - For Flux: 0.0001 - `learning_rate_text_encoder: Optional[float]` Initial learning rate (after the potential warmup period) for the text encoder Maximum [Flux LoRA: 0.001] Default [SDXL: 0.00005 | Flux LoRA: 0.00001] Minimum [SDXL: 0 | Flux LoRA: 0.000001] - `learning_rate_unet: Optional[float]` Initial learning rate (after the potential warmup period) for the UNet Only available for SDXL LoRA training - `lr_scheduler: Optional[Literal["constant", "constant-with-warmup", "cosine", 3 more]]` The scheduler type to use (default: "constant") Only available for SD15 and SDXL LoRA training - `"constant"` - `"constant-with-warmup"` - `"cosine"` - `"cosine-with-restarts"` - `"linear"` - `"polynomial"` - `max_train_steps: Optional[float]` Maximum number of training steps to execute (default: varies depending on the model type) For SDXL LoRA training, please use `numTextTrainSteps` and `numUNetTrainSteps` instead Default value varies depending on the model type: - For SD1.5: round((number of training images * 225) / 3) - For SDXL: number of training images * 175 - For Flux: number of training images * 100 Maximum value varies depending on the model type: - For SD1.5 and SDXL: [0, 40000] - For Flux: [0, 10000] - `nb_epochs: Optional[float]` The number of epochs to train for Only available for Flux LoRA training - `nb_repeats: Optional[float]` The number of times to repeat the training Only available for Flux LoRA training - `num_text_train_steps: Optional[float]` The number of training steps for the 
text encoder Only available for SDXL LoRA training - `num_u_net_train_steps: Optional[float]` The number of training steps for the UNet Only available for SDXL LoRA training - `optimize_for: Optional[Literal["likeness"]]` Optimize the model training task for a specific type of input images. The available values are: - "likeness": optimize training for likeness or portrait (targets specific transformer blocks) - "all": train all transformer blocks - "none": train no specific transformer blocks This parameter controls which double and single transformer blocks are trained during the LoRA training process. Only available for Flux LoRA training - `"likeness"` - `prior_loss_weight: Optional[float]` The weight of prior preservation loss Only available for SD15 and SDXL LoRA training - `random_crop: Optional[bool]` Whether to random crop or center crop images before resizing to the working resolution Only available for SD15 and SDXL LoRA training - `random_crop_ratio: Optional[float]` Ratio of random crops Only available for SD15 and SDXL LoRA training - `random_crop_scale: Optional[float]` Scale of random crops Only available for SD15 and SDXL LoRA training - `rank: Optional[float]` The dimension of the LoRA update matrices Only available for SDXL (deprecated), Flux LoRA and Musubi training Default value varies depending on the model type: - For SDXL (deprecated): 64 - For Flux: 16 - For Musubi: 64 Each trainer enforces its own tighter limit (Flux LoRA: [2; 64], Musubi: [2; 128]) - `remove_background_noise: Optional[bool]` Whether to remove background noise from audio samples before cloning. When enabled, each sample must be at least 5 seconds long. 
Only available for ElevenLabs voice training - `sample_prompts: Optional[List[str]]` The prompts to use for each epoch Only available for Flux LoRA training - `sample_source_images: Optional[List[str]]` The sample prompt images (AssetIds) paired with samplePrompts Only available for Flux LoRA training Must be the same length as samplePrompts - `scale_lr: Optional[bool]` Whether to scale the learning rate Note: Legacy parameter, will be ignored Only available for SD15 and SDXL LoRA training - `seed: Optional[float]` Used to reproduce previous results. Default: randomly generated number. Only available for SD15 and SDXL LoRA training - `text_encoder_training_ratio: Optional[float]` The fraction of training steps used to train the text encoder Example: For 100 steps and a value of 0.2, it means that the text encoder will be trained for 20 steps and then the UNet for 80 steps Note: Legacy parameter, please use `numTextTrainSteps` and `numUNetTrainSteps` Only available for SD15 and SDXL LoRA training - `validation_frequency: Optional[float]` Validation frequency. Cannot be greater than maxTrainSteps value Only available for SD15 and SDXL LoRA training - `validation_prompt: Optional[str]` Validation prompt Only available for SD15 and SDXL LoRA training - `voice_description: Optional[str]` Description of the voice characteristics Only available for ElevenLabs voice training - `wandb_key: Optional[str]` The Weights & Biases key to use for logging.
The maximum length is 40 characters - `parent_model_id: Optional[str]` The id of the parent model - `performance_stats: Optional[ModelPerformanceStats]` Aggregated performance stats - `variants: List[ModelPerformanceStatsVariant]` Performance metrics per variant - `capability: str` The generation capability (example: "txt2img", "img2video", "txt2audio") - `computed_at: str` When these stats were last computed (ISO date) - `variant_key: str` Unique variant identifier (example: "txt2img:1K", "img2video:2K", "txt2audio") - `arena_score: Optional[ModelPerformanceStatsVariantArenaScore]` External quality score from arena.ai leaderboard - `arena_category: str` Arena category (example: "text_to_image", "image_to_video") - `arena_model_name: str` Model name on arena.ai - `fetched_at: str` When this score was last fetched (ISO date) - `rank: float` Rank in the arena category - `rating: float` ELO rating - `rating_lower: float` ELO rating confidence interval lower bound - `rating_upper: float` ELO rating confidence interval upper bound - `votes: float` Number of human votes - `cost_per_asset_max_cu: Optional[float]` Maximum cost per output asset (CU) - `cost_per_asset_min_cu: Optional[float]` Minimum cost per output asset (CU) - `cost_per_asset_p50_cu: Optional[float]` Median cost per output asset (CU) - `inference_latency_p50_sec: Optional[float]` Inference latency P50 per output asset (seconds) - `inference_latency_p75_sec: Optional[float]` Inference latency P75 per output asset (seconds) - `resolution: Optional[str]` The resolution bucket (example: "0.5K", "1K", "2K", "4K") - `total_latency_p50_sec: Optional[float]` Total latency P50 per output asset, including queue time (seconds) - `total_latency_p75_sec: Optional[float]` Total latency P75 per output asset, including queue time (seconds) - `default: Optional[str]` Default variant key for quick model comparison - `prompt_embedding: Optional[str]` Fine-tune the model's inferences with prompt embedding - 
`short_description: Optional[str]` The model short description (example: "This model generates highly detailed cinematic scenes.") - `soft_deletion_on: Optional[str]` The date when the model will be soft deleted (only for Free plan) - `thumbnail: Optional[ModelThumbnail]` A thumbnail for your model - `asset_id: str` The AssetId of the image used as a thumbnail for your model (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: str` The url of the image used as a thumbnail for your model - `training_image_pairs: Optional[List[ModelTrainingImagePair]]` Array of training image pairs - `instruction: Optional[str]` The instruction for the image pair, source to target - `source_id: Optional[str]` The source asset ID (must be a training asset) - `target_id: Optional[str]` The target asset ID (must be a training asset) - `training_images: Optional[List[ModelTrainingImage]]` The URLs of the first 3 training images of the model. To retrieve the full set of images, get it by modelId - `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `automatic_captioning: str` Automatic captioning of the image - `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `description: str` Description for the image - `download_url: str` The URL of the image - `name: str` The original file name of the image (example: "my-training-image.jpg") - `training_progress: Optional[ModelTrainingProgress]` Additional information about the training progress of the model - `stage: Literal["pending", "pending-captcha", "queued-for-train", 2 more]` The stage of the request - `"pending"` - `"pending-captcha"` - `"queued-for-train"` - `"running-train"` - `"starting-train"` - `updated_at: float` Timestamp in milliseconds of the last time the training progress was updated - `position: Optional[float]` Position of the job in the queue (ie. 
the number of jobs in the queue before this one)
- `progress: Optional[float]` The progress of the job
- `remaining_time_ms: Optional[float]` The remaining time in milliseconds
- `started_at: Optional[float]` The timestamp in milliseconds marking the start of the process
- `training_stats: Optional[ModelTrainingStats]` Additional information about the model's training
- `ended_at: Optional[str]` The training end time as an ISO date string
- `queue_duration: Optional[float]` The training queue duration in seconds
- `started_at: Optional[str]` The training start time as an ISO date string
- `train_duration: Optional[float]` The training duration in seconds
- `ui_config: Optional[ModelUiConfig]` The UI configuration for the model
- `input_properties: Optional[Dict[str, ModelUiConfigInputProperties]]` Configuration for the input properties
- `collapsed: Optional[bool]`
- `loras_component: Optional[ModelUiConfigLorasComponent]` Configuration for the loras component
- `label: str` The label of the component
- `model_input: str` The input name of the model (model_array)
- `scale_input: str` The input name of the scale (number_array)
- `model_id_input: Optional[str]` The input model ID (example: a composition or a single LoRA modelId). If specified, the model ID will be attached to the output asset as metadata. If the model-decomposer parser is specified on it, modelInput and scaleInput will be automatically populated
- `presets: Optional[List[ModelUiConfigPreset]]` Configuration for the presets
- `fields: List[str]`
- `presets: object`
- `resolution_component: Optional[ModelUiConfigResolutionComponent]` Configuration for the resolution component
- `height_input: str` The input name of the height
- `label: str` The label of the component
- `presets: List[ModelUiConfigResolutionComponentPreset]` The resolution presets
- `height: float`
- `label: str`
- `width: float`
- `width_input: str` The input name of the width
- `selects: Optional[Dict[str, object]]` Configuration for
the selects
- `trigger_generate: Optional[ModelUiConfigTriggerGenerate]` Configuration for the trigger generate button
- `label: str`
- `after: Optional[str]` The 'name' of the input where the trigger generate button will be displayed (after the input). Do not specify both position and after.
- `position: Optional[Literal["bottom", "top"]]` The position of the trigger generate button. If position is specified, the button will be displayed at the specified position. Do not specify both position and after.
- `"bottom"`
- `"top"`
- `user_id: Optional[str]` (Deprecated) The user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

# Training Images

## Add

`models.training_images.add(model_id: str, **kwargs: TrainingImageAddParams) -> TrainingImageAddResponse`

**post** `/models/{modelId}/training-images`

Add a new training image to the given `modelId`

### Parameters

- `model_id: str`
- `original_assets: Optional[bool]` If set to true, returns the original asset without transformation
- `asset_id: Optional[str]` The asset ID to use as a training image (example: "asset_GTrL3mq4SXWyMxkOHRxlpw"). If provided, "data" and "name" parameters will be ignored.
- `asset_ids: Optional[Sequence[str]]` The asset IDs to use as training images (example: ["asset_GTrL3mq4SXWyMxkOHRxlpw", "asset_GTrL3mq4SXWyMxkOHRxlpw"]). Used in batch mode, up to 10 asset IDs are allowed. Cannot be used with "assetId" or "data" and "name" parameters.
- `data: Optional[str]` The training image as a data URL (example: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVQYV2NgYAAAAAMAAWgmWQ0AAAAASUVORK5CYII=")
- `name: Optional[str]` The original file name of the image (example: "my-training-image.jpg")
- `preset: Optional[Literal["default", "style", "subject"]]` The preset to use for training images
- `"default"`
- `"style"`
- `"subject"`

### Returns

- `class TrainingImageAddResponse: …`
- `training_image: TrainingImage`
- `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
- `automatic_captioning: str` Automatic captioning of the image
- `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `description: str` Description for the image
- `download_url: str` The URL of the image
- `name: str` The original file name of the image (example: "my-training-image.jpg")

### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
response = client.models.training_images.add(
    model_id="modelId",
)
print(response.training_image)
```

#### Response

```json
{
  "trainingImage": {
    "id": "id",
    "automaticCaptioning": "automaticCaptioning",
    "createdAt": "createdAt",
    "description": "description",
    "downloadUrl": "downloadUrl",
    "name": "name"
  }
}
```

## Replace Pairs

`models.training_images.replace_pairs(model_id: str, **kwargs: TrainingImageReplacePairsParams) -> TrainingImageReplacePairsResponse`

**put** `/models/{modelId}/training-images/pairs`

Replace all training image pairs for the given `modelId`

### Parameters

- `model_id: str`
- `body: Iterable[Body]` Array of training image pairs
- `instruction: Optional[str]` The instruction for the image pair, source to target
- `source_id: Optional[str]` The source asset ID (must be a
training asset)
- `target_id: Optional[str]` The target asset ID (must be a training asset)

### Returns

- `class TrainingImageReplacePairsResponse: …`
- `count: float` Number of training image pairs
- `pairs: List[Pair]` Array of training image pairs
- `instruction: Optional[str]` The instruction for the image pair, source to target
- `source_id: Optional[str]` The source asset ID (must be a training asset)
- `target_id: Optional[str]` The target asset ID (must be a training asset)

### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
response = client.models.training_images.replace_pairs(
    model_id="modelId",
    body=[{}],
)
print(response.count)
```

#### Response

```json
{
  "count": 0,
  "pairs": [
    {
      "instruction": "instruction",
      "sourceId": "sourceId",
      "targetId": "targetId"
    }
  ]
}
```

## Replace

`models.training_images.replace(training_image_id: str, **kwargs: TrainingImageReplaceParams) -> TrainingImageReplaceResponse`

**put** `/models/{modelId}/training-images/{trainingImageId}`

Replace the given `trainingImageId` for the given `modelId`

### Parameters

- `model_id: str`
- `training_image_id: str`
- `original_assets: Optional[bool]` If set to true, returns the original asset without transformation
- `asset_id: Optional[str]` The asset ID to use as a training image (example: "asset_GTrL3mq4SXWyMxkOHRxlpw"). If provided, "data" and "name" parameters will be ignored.
- `asset_ids: Optional[Sequence[str]]` The asset IDs to use as training images (example: ["asset_GTrL3mq4SXWyMxkOHRxlpw", "asset_GTrL3mq4SXWyMxkOHRxlpw"]). Used in batch mode, up to 10 asset IDs are allowed. Cannot be used with "assetId" or "data" and "name" parameters.
- `data: Optional[str]` The training image as a data URL (example: "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVQYV2NgYAAAAAMAAWgmWQ0AAAAASUVORK5CYII=")
- `name: Optional[str]` The original file name of the image (example: "my-training-image.jpg")
- `preset: Optional[Literal["default", "style", "subject"]]` The preset to use for training images
- `"default"`
- `"style"`
- `"subject"`

### Returns

- `class TrainingImageReplaceResponse: …`
- `training_image: TrainingImage`
- `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
- `automatic_captioning: str` Automatic captioning of the image
- `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `description: str` Description for the image
- `download_url: str` The URL of the image
- `name: str` The original file name of the image (example: "my-training-image.jpg")

### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
response = client.models.training_images.replace(
    training_image_id="trainingImageId",
    model_id="modelId",
)
print(response.training_image)
```

#### Response

```json
{
  "trainingImage": {
    "id": "id",
    "automaticCaptioning": "automaticCaptioning",
    "createdAt": "createdAt",
    "description": "description",
    "downloadUrl": "downloadUrl",
    "name": "name"
  }
}
```

## Delete

`models.training_images.delete(training_image_id: str, **kwargs: TrainingImageDeleteParams) -> object`

**delete** `/models/{modelId}/training-images/{trainingImageId}`

Delete the given `trainingImageId` from the given `modelId`

### Parameters

- `model_id: str`
- `training_image_id: str`

### Returns

- `object`

### Example

```python
import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
training_image = client.models.training_images.delete(
    training_image_id="trainingImageId",
    model_id="modelId",
)
print(training_image)
```

#### Response

```json
{}
```

## Domain Types

### Training Image Add Response

- `class TrainingImageAddResponse: …`
- `training_image: TrainingImage`
- `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
- `automatic_captioning: str` Automatic captioning of the image
- `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `description: str` Description for the image
- `download_url: str` The URL of the image
- `name: str` The original file name of the image (example: "my-training-image.jpg")

### Training Image Replace Pairs Response

- `class TrainingImageReplacePairsResponse: …`
- `count: float` Number of training image pairs
- `pairs: List[Pair]` Array of training image pairs
- `instruction: Optional[str]` The instruction for the image pair, source to target
- `source_id: Optional[str]` The source asset ID (must be a training asset)
- `target_id: Optional[str]` The target asset ID (must be a training asset)

### Training Image Replace Response

- `class TrainingImageReplaceResponse: …`
- `training_image: TrainingImage`
- `id: str` The training image ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
- `automatic_captioning: str` Automatic captioning of the image
- `created_at: str` The training image upload date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `description: str` Description for the image
- `download_url: str` The URL of the image
- `name: str` The original file name of the image (example: "my-training-image.jpg")
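
The Add and Replace endpoints accept a training image either as an existing `asset_id` or as an inline `data` URL together with `name`. Below is a minimal sketch of building that data URL from a local file; the `to_data_url` helper is hypothetical (not part of the SDK), and the commented client call only illustrates how it would plug into the Add endpoint documented above:

```python
import base64
import mimetypes


def to_data_url(path: str) -> str:
    """Hypothetical helper: encode a local image file as a data URL,
    the format expected by the "data" parameter of the
    training-images Add/Replace endpoints."""
    mime, _ = mimetypes.guess_type(path)
    with open(path, "rb") as f:
        payload = base64.b64encode(f.read()).decode("ascii")
    # Fall back to a generic MIME type if the extension is unknown
    return f"data:{mime or 'application/octet-stream'};base64,{payload}"


# Sketch of a call using the helper (assumes a configured client
# and a real model ID):
# response = client.models.training_images.add(
#     model_id="modelId",
#     data=to_data_url("my-training-image.png"),
#     name="my-training-image.png",
# )
```

Note that `asset_id` takes precedence: when it is provided, `data` and `name` are ignored, and `asset_ids` (batch mode) cannot be combined with either.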