List

GET /models/{modelId}/examples

List all examples of the given modelId

Path Parameters
modelId: string
Query Parameters
originalAssets: optional boolean

If set to true, returns the original asset without transformation
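
As a sketch, a request URL for this endpoint can be built like this; the base URL is a placeholder assumption and the helper name is hypothetical, not part of this reference:

```python
import urllib.parse

BASE_URL = "https://api.example.com/v1"  # placeholder base URL (assumption)

def build_examples_url(model_id: str, original_assets: bool = False) -> str:
    """Build the URL for GET /models/{modelId}/examples."""
    path = f"/models/{urllib.parse.quote(model_id)}/examples"
    if original_assets:
        # originalAssets=true returns the original assets without transformation
        return BASE_URL + path + "?" + urllib.parse.urlencode({"originalAssets": "true"})
    return BASE_URL + path
```

The resulting URL can then be fetched with any HTTP client, adding whatever authentication header your account requires.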

Returns
examples: array of object { asset, modelId, inferenceId, 2 more }
asset: object { id, authorId, collectionIds, 24 more }

Asset generated by the inference

id: string

The asset ID (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”)

authorId: string

The author user ID (example: “dcf121faaa1a0a0bbbd9ca1b73d62aea”)

collectionIds: array of string

A list of CollectionIds this asset belongs to

createdAt: string

The asset creation date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more

List of edit capabilities

One of the following:
"DETECTION"
"GENERATIVE_FILL"
"PIXELATE"
"PROMPT_EDITING"
"REFINE"
"REFRAME"
"REMOVE_BACKGROUND"
"SEGMENTATION"
"UPSCALE"
"UPSCALE_360"
"VECTORIZATION"
kind: "3d" or "audio" or "document" or 4 more

The kind of asset

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
metadata: object { kind, type, angular, 106 more }

Metadata of the asset with some additional information

kind: "3d" or "audio" or "document" or 4 more
One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more

The type of the asset. For example, ‘inference-txt2img’ represents an asset generated from a text-to-image model

One of the following:
"3d-texture"
"3d-texture-albedo"
"3d-texture-metallic"
"3d-texture-mtl"
"3d-texture-normal"
"3d-texture-roughness"
"3d23d"
"3d23d-texture"
"audio2audio"
"audio2video"
"background-removal"
"canvas"
"canvas-drawing"
"canvas-export"
"detection"
"generative-fill"
"image-prompt-editing"
"img23d"
"img2img"
"img2video"
"inference-controlnet"
"inference-controlnet-img2img"
"inference-controlnet-inpaint"
"inference-controlnet-inpaint-ip-adapter"
"inference-controlnet-ip-adapter"
"inference-controlnet-reference"
"inference-controlnet-texture"
"inference-img2img"
"inference-img2img-ip-adapter"
"inference-img2img-texture"
"inference-inpaint"
"inference-inpaint-ip-adapter"
"inference-reference"
"inference-reference-texture"
"inference-txt2img"
"inference-txt2img-ip-adapter"
"inference-txt2img-texture"
"patch"
"pixelization"
"reframe"
"restyle"
"segment"
"segmentation-image"
"segmentation-mask"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"texture"
"texture-albedo"
"texture-ao"
"texture-edge"
"texture-height"
"texture-metallic"
"texture-normal"
"texture-smoothness"
"txt23d"
"txt2audio"
"txt2img"
"txt2video"
"unknown"
"uploaded"
"uploaded-3d"
"uploaded-audio"
"uploaded-avatar"
"uploaded-video"
"upscale"
"upscale-skybox"
"upscale-texture"
"upscale-video"
"vectorization"
"video23d"
"video2audio"
"video2img"
"video2video"
"voice-clone"
angular: optional number

How angular is the surface? 0 is like a sphere, 1 is like a mechanical object

maximum: 1
minimum: 0
aspectRatio: optional string

The optional aspect ratio given for the generation, only applicable for some models

backgroundOpacity: optional number

Integer between 0 and 255 setting the opacity of the background in the result images.

maximum: 255
minimum: 0
baseModelId: optional string

The baseModelId, which may be changed at inference time

bbox: optional array of number

A bounding box around the object of interest, in the format [x1, y1, x2, y2].
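
For illustration, the [x1, y1, x2, y2] convention means width and height fall out as simple differences (the helper name is hypothetical):

```python
def bbox_size(bbox: list[float]) -> tuple[float, float]:
    """Return (width, height) of a [x1, y1, x2, y2] bounding box."""
    x1, y1, x2, y2 = bbox
    return x2 - x1, y2 - y1
```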

betterQuality: optional boolean

Remove small dark spots (i.e. “pepper”) and connect small bright cracks.

cannyStructureImage: optional string

The control image already processed by canny detector. Must reference an existing AssetId.

clustering: optional boolean

Activate clustering.

colorCorrection: optional boolean

Ensure upscaled tiles have the same color histogram as the original tiles.

colorMode: optional string
colorPrecision: optional number
concepts: optional array of object { modelId, scale, modelEpoch }

Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing.

modelId: string

The model ID (example: “model_eyVcnFJcR92BxBkz7N6g5w”)

scale: number

The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2.

maximum: 2
minimum: -2
modelEpoch: optional string

The epoch of the model (example: “000001”) Only available for Flux Lora Trained models

contours: optional array of array of array of array of number
controlEnd: optional number

End step for control.

copiedAt: optional string

The date when the asset was copied to a project

cornerThreshold: optional number
creativity: optional number

Allow the generation of “hallucinations” during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style.

maximum: 100
minimum: 0
creativityDecay: optional number

Amount of decay in creativity over the upscale process. The lower the value, the less creativity is preserved over the upscale process.

maximum: 100
minimum: 0
defaultParameters: optional boolean

If true, use the default parameters

depthFidelity: optional number

The depth fidelity if a depth image is provided

maximum: 100
minimum: 0
depthImage: optional string

The control image processed by depth estimator. Must reference an existing AssetId.

detailsLevel: optional number

Amount of details to remove or add

maximum: 50
minimum: -50
dilate: optional number

The number of pixels to dilate the result masks.

maximum: 30
minimum: 0
factor: optional number

Contrast factor for Grayscale detector

filterSpeckle: optional number
fractality: optional number

Determine the scale at which the upscale process works.

  • With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example.
  • With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example.

Note: a small value is slower and more expensive to run.

maximum: 100
minimum: 0
geometryEnforcement: optional number

Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image.

Use with caution. Default is adapted to the other parameters.

maximum: 100
minimum: 0
guidance: optional number

The guidance used to generate this asset

halfMode: optional boolean
hdr: optional number
height: optional number
highThreshold: optional number

High threshold for Canny detector

horizontalExpansionRatio: optional number

(deprecated) Horizontal expansion ratio.

maximum: 2
minimum: 1
image: optional string

The input image to process. Must reference an existing AssetId or be a data URL.

imageFidelity: optional number

Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style.

maximum: 100
minimum: 0
imageType: optional "seamfull" or "skybox" or "texture"

Preserve the seamless properties of skybox or texture images. The input must be of the same (seamless) type.

One of the following:
"seamfull"
"skybox"
"texture"
inferenceId: optional string

The id of the Inference describing how this image was generated

inputFidelity: optional "high" or "low"

When set to high, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image.

You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image.

Only available for the gpt-image-1 model.

One of the following:
"high"
"low"
inputLocation: optional "bottom" or "left" or "middle" or 2 more

Location of the input image in the output.

One of the following:
"bottom"
"left"
"middle"
"right"
"top"
invert: optional boolean

To invert the relief

keypointThreshold: optional number

How polished is the surface? 0 is like a rough surface, 1 is like a mirror

maximum: 1
minimum: 0
layerDifference: optional number
lengthThreshold: optional number
lockExpiresAt: optional string

The ISO timestamp when the lock on the canvas will expire

lowThreshold: optional number

Low threshold for Canny detector

mask: optional string

The mask used for the asset generation or editing

maxIterations: optional number
maxThreshold: optional number

Maximum threshold for Grayscale conversion

minThreshold: optional number

Minimum threshold for Grayscale conversion

modality: optional "canny" or "depth" or "grayscale" or 7 more

Modality to detect

One of the following:
"canny"
"depth"
"grayscale"
"lineart_anime"
"mlsd"
"normal"
"pose"
"scribble"
"segmentation"
"sketch"
mode: optional string
modelId: optional string

The modelId used to generate this asset

modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more

The type of the generator used

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
name: optional string
nbMasks: optional number
negativePrompt: optional string

The negative prompt used to generate this asset

negativePromptStrength: optional number

Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided.

maximum: 10
minimum: 0
numInferenceSteps: optional number

The number of denoising steps for each image generation.

maximum: 50
minimum: 5
numOutputs: optional number

The number of outputs to generate.

maximum: 8
minimum: 1
originalAssetId: optional string
outputIndex: optional number
overlapPercentage: optional number

Overlap percentage for the output image.

maximum: 0.5
minimum: 0
overrideEmbeddings: optional boolean

Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution.

parentId: optional string
parentJobId: optional string
pathPrecision: optional number
points: optional array of array of number

List of points (label, x, y) in the image where label = 0 for background and 1 for object.
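
A minimal sketch of this points format, with (label, x, y) triples as described above (the coordinate values are made up for illustration):

```python
# label = 0 marks background, label = 1 marks the object of interest
points = [
    [1, 240, 180],  # a point on the object
    [0, 20, 20],    # a point on the background
]
object_points = [p for p in points if p[0] == 1]
background_points = [p for p in points if p[0] == 0]
```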

polished: optional number

How polished is the surface? 0 is like a rough surface, 1 is like a mirror

maximum: 1
minimum: 0
preset: optional string
progressPercent: optional number
prompt: optional string

The prompt that guided the asset generation or editing

promptFidelity: optional number

Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style.

maximum: 100
minimum: 0
raised: optional number

How raised is the surface? 0 is flat like water, 1 is like a very rough rock

maximum: 1
minimum: 0
referenceImages: optional array of string

The reference images used for the asset generation or editing

refinementSteps: optional number

Additional refinement steps before scaling.

If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times.

maximum: 4
minimum: 0
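
The rule above can be sketched as a small helper (the function name is hypothetical):

```python
def refinement_passes(scaling_factor: float, refinement_steps: int) -> int:
    """Total number of refinement passes, per the refinementSteps rule."""
    if scaling_factor == 1:
        return 1 + refinement_steps  # applied (1 + refinementSteps) times
    return refinement_steps          # applied refinementSteps times when scaling
```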
removeBackground: optional boolean

Remove background for Grayscale detector

resizeOption: optional number

Size proportion of the input image in the output.

maximum: 1
minimum: 0.1
resultContours: optional boolean

Boolean to output the contours.

resultImage: optional boolean

Boolean to enable outputting the cut-out object.

resultMask: optional boolean

Boolean to enable returning the masks (binary images) in the response.

rootParentId: optional string
saveFlipbook: optional boolean

Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048 px.

scalingFactor: optional number

Scaling factor (used when targetWidth is not specified)

maximum: 16
minimum: 1
scheduler: optional string

The scheduler used to generate this asset

seed: optional string

The seed used to generate this asset. Note: can be a string or a number in some cases.

sharpen: optional boolean

Sharpen tiles.

shiny: optional number

How shiny is the surface? 0 is like a matte surface, 1 is like a diamond

maximum: 1
minimum: 0
size: optional number
sketch: optional boolean

Activate sketch detection instead of canny.

sourceProjectId: optional string
spliceThreshold: optional number
strength: optional number

The strength

Only available for the flux-kontext LoRA model.

structureFidelity: optional number

Strength for the input image structure preservation

maximum: 100
minimum: 0
structureImage: optional string

The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId.

style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more
One of the following:
"3d-cartoon"
"3d-rendered"
"anime"
"cartoon"
"cinematic"
"claymation"
"cloud-skydome"
"comic"
"cyberpunk"
"enchanted"
"fantasy"
"ink"
"manga"
"manga-color"
"minimalist"
"neon-tron"
"oil-painting"
"pastel"
"photo"
"photography"
"psychedelic"
"retro-fantasy"
"scifi-concept-art"
"space"
"standard"
"whimsical"
styleFidelity: optional number

The higher the value, the more the result will look like the style image(s)

maximum: 100
minimum: 0
styleImages: optional array of string

List of style images. Most of the time, only one image is enough. They must be existing AssetIds.

styleImagesFidelity: optional number

Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image.

maximum: 100
minimum: 0
targetHeight: optional number

The target height of the output image.

maximum: 2048
minimum: 0
targetWidth: optional number

Target width for the upscaled image; takes priority over scalingFactor

maximum: 16000
minimum: 1024
text: optional string

A textual description / keywords describing the object of interest.

maxLength: 100
texture: optional string

The asset to convert in texture maps. Must reference an existing AssetId.

thumbnail: optional object { assetId, url }

The thumbnail of the canvas

assetId: string

The AssetId of the image used as a thumbnail for the canvas (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”)

url: string

The url of the image used as a thumbnail for the canvas

tileStyle: optional boolean

If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition.

trainingImage: optional boolean
verticalExpansionRatio: optional number

(deprecated) Vertical expansion ratio.

maximum: 2
minimum: 1
width: optional number

The width of the rendered image.

maximum: 2048
minimum: 1024
mimeType: string

The mime type of the asset (example: “image/png”)

ownerId: string

The owner (project) ID (example: “proj_23tlk332lkht3kl2” or “team_dlkhgs23tlk3hlkth32lkht3kl2” for old teams)

privacy: "private" or "public" or "unlisted"

The privacy of the asset

One of the following:
"private"
"public"
"unlisted"
properties: object { size, animationFrameCount, bitrate, 20 more }

The properties of the asset, content may depend on the kind of asset returned

size: number
animationFrameCount: optional number

Number of animation frames if animations exist

bitrate: optional number

Bitrate of the media in bits per second

boneCount: optional number

Number of bones if skeleton exists

channels: optional number

Number of channels of the audio

classification: optional "effect" or "interview" or "music" or 5 more

Classification of the audio

One of the following:
"effect"
"interview"
"music"
"other"
"sound"
"speech"
"text"
"unknown"
codecName: optional string

Codec name of the media

description: optional string

Description of the audio

dimensions: optional array of number

Bounding box dimensions [width, height, depth]

duration: optional number

Duration of the media in seconds

faceCount: optional number

Number of faces/triangles in the mesh

format: optional string

Format of the mesh file (e.g. ‘glb’, etc.)

frameRate: optional number

Frame rate of the video in frames per second

hasAnimations: optional boolean

Whether the mesh has animations

hasNormals: optional boolean

Whether the mesh has normal vectors

hasSkeleton: optional boolean

Whether the mesh has bones/skeleton

hasUVs: optional boolean

Whether the mesh has UV coordinates

height: optional number
nbFrames: optional number

Number of frames in the video

sampleRate: optional number

Sample rate of the media in Hz

transcription: optional object { text }

Transcription of the audio

text: string
vertexCount: optional number

Number of vertices in the mesh

width: optional number
source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more

Source of the asset

One of the following:
"3d23d"
"3d23d:texture"
"3d:texture"
"3d:texture:albedo"
"3d:texture:metallic"
"3d:texture:mtl"
"3d:texture:normal"
"3d:texture:roughness"
"audio2audio"
"audio2video"
"background-removal"
"canvas"
"canvas-drawing"
"canvas-export"
"detection"
"generative-fill"
"image-prompt-editing"
"img23d"
"img2img"
"img2video"
"inference-control-net"
"inference-control-net-img"
"inference-control-net-inpainting"
"inference-control-net-inpainting-ip-adapter"
"inference-control-net-ip-adapter"
"inference-control-net-reference"
"inference-control-net-texture"
"inference-img"
"inference-img-ip-adapter"
"inference-img-texture"
"inference-in-paint"
"inference-in-paint-ip-adapter"
"inference-reference"
"inference-reference-texture"
"inference-txt"
"inference-txt-ip-adapter"
"inference-txt-texture"
"patch"
"pixelization"
"reframe"
"restyle"
"segment"
"segmentation-image"
"segmentation-mask"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"texture"
"texture:albedo"
"texture:ao"
"texture:edge"
"texture:height"
"texture:metallic"
"texture:normal"
"texture:smoothness"
"txt23d"
"txt2audio"
"txt2img"
"txt2video"
"unknown"
"uploaded"
"uploaded-3d"
"uploaded-audio"
"uploaded-avatar"
"uploaded-video"
"upscale"
"upscale-skybox"
"upscale-texture"
"upscale-video"
"vectorization"
"video23d"
"video2audio"
"video2img"
"video2video"
"voice-clone"
status: "error" or "pending" or "success"

The actual status

One of the following:
"error"
"pending"
"success"
tags: array of string

The associated tags (example: [“sci-fi”, “landscape”])

updatedAt: string

The asset last update date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

url: string

Signed URL to get the asset content

automaticCaptioning: optional string

Automatic captioning of the asset

description: optional string

The description. It contains, in order of priority:

  • the manual description
  • the advanced captioning, when the asset is used in a training flow
  • the automatic captioning
embedding: optional array of number

The embedding of the asset when requested.

Only available when an asset can be embedded (i.e. not detection maps)

firstFrame: optional object { assetId, url }

The video asset’s first frame.

Contains the assetId and the url of the first frame.

assetId: string
url: string
isHidden: optional boolean

Whether the asset is hidden.

lastFrame: optional object { assetId, url }

The video asset’s last frame.

Contains the assetId and the url of the last frame.

assetId: string
url: string
nsfw: optional array of string

The NSFW labels

originalFileUrl: optional string

The original file url.

Contains the URL of the original file, without any conversion. Only available for some specific video, audio, and 3D assets. Only specified if the given asset data was replaced with a new file during the creation of the asset.

outputIndex: optional number

The output index of the asset within a job. This index is a positive integer that starts at 0 and is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0.

preview: optional object { assetId, url }

The asset’s preview.

Contains the assetId and the url of the preview.

assetId: string
url: string
thumbnail: optional object { assetId, url }

The asset’s thumbnail.

Contains the assetId and the url of the thumbnail.

assetId: string
url: string
modelId: string

Model id of the model used to generate the asset

inferenceId: optional string

Inference id of the inference used to generate the asset

inferenceParameters: optional object { prompt, type, aspectRatio, 36 more }

The inference parameters used to generate the asset

prompt: string

Full text prompt including the model placeholder. (example: “an illustration of phoenix in a fantasy world, flying over a mountain, 8k, bokeh effect”)

type: "controlnet" or "controlnet_img2img" or "controlnet_inpaint" or 15 more

The type of inference to use. Example: txt2img, img2img, etc.

Selecting the right type will condition the expected parameters.

Note: if model.type is sd-xl* or sd-1_5*, when using the "inpaint" inference type, Scenario determines the best available baseModel for a given modelId: one of ["stable-diffusion-inpainting", "stable-diffusion-xl-1.0-inpainting-0.1"] will be used.

One of the following:
"controlnet"
"controlnet_img2img"
"controlnet_inpaint"
"controlnet_inpaint_ip_adapter"
"controlnet_ip_adapter"
"controlnet_reference"
"controlnet_texture"
"img2img"
"img2img_ip_adapter"
"img2img_texture"
"inpaint"
"inpaint_ip_adapter"
"outpaint"
"reference"
"reference_texture"
"txt2img"
"txt2img_ip_adapter"
"txt2img_texture"
aspectRatio: optional "16:9" or "1:1" or "21:9" or 8 more

The aspect ratio of the generated images. Only used for the model flux.1.1-pro-ultra. The aspect ratio is a string formatted as “width:height” (example: “16:9”).

One of the following:
"16:9"
"1:1"
"21:9"
"2:3"
"3:2"
"3:4"
"4:3"
"4:5"
"5:4"
"9:16"
"9:21"
baseModelId: optional string

The base model to use for the inference. Only Flux LoRA models can use this parameter. Allowed values are available in the model’s attribute: compliantModelIds

concepts: optional array of object { modelId, scale, modelEpoch }
modelId: string

The model ID (example: “model_eyVcnFJcR92BxBkz7N6g5w”)

scale: number

The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2.

maximum: 2
minimum: -2
modelEpoch: optional string

The epoch of the model (example: “000001”) Only available for Flux Lora Trained models

controlEnd: optional number

Specifies how long the ControlNet guidance should be applied during the inference process.

Only available for Flux.1-dev based models.

The value represents the percentage of total inference steps where the ControlNet guidance is active. For example:

  • 1.0: ControlNet guidance is applied during all inference steps
  • 0.5: ControlNet guidance is only applied during the first half of inference steps

Default values:

  • 0.5 for Canny modality
  • 0.6 for all other modalities
maximum: 1
minimum: 0.1
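
As an illustration, the controlStart/controlEnd fractions can be mapped to concrete step indices like this; the exact rounding the backend uses is an assumption:

```python
def controlnet_active_steps(num_steps: int, control_start: float = 0.0,
                            control_end: float = 0.5) -> range:
    """Step indices where ControlNet guidance is active (rounding is assumed)."""
    return range(int(num_steps * control_start), int(num_steps * control_end))
```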
controlImage: optional string

Signed URL to display the controlnet input image

controlImageId: optional string

Asset id of the controlnet input image

controlStart: optional number

Specifies the starting point of the ControlNet guidance during the inference process.

Only available for Flux.1-dev based models.

The value represents the percentage of total inference steps where the ControlNet guidance starts. For example:

  • 0.0: ControlNet guidance starts at the beginning of the inference steps
  • 0.5: ControlNet guidance starts at the middle of the inference steps
maximum: 0.9
minimum: 0
disableMerging: optional boolean

If set to true, the entire input image will likely change during inpainting. This results in faster inferences, but the output image will be harder to integrate if the input is just a small part of a larger image.

disableModalityDetection: optional boolean

If false, the process uses the given image to detect the modality. If true (default), the process will not try to detect the modality of the given image.

For example:

  • with pose modality and a false value, the process will detect the pose of people in the given image
  • with depth modality and a false value, the process will detect the depth of the given image
  • with scribble modality and a true value, the process will use the given image as a scribble

⚠️ For models of the FLUX schnell or dev families, this parameter is ignored. The modality detection is always disabled. ⚠️

guidance: optional number

Controls how closely the generated image follows the prompt. Higher values result in stronger adherence to the prompt. Default and allowed values depend on the model type:

  • For Flux dev models, the default is 3.5 and allowed values are within [0, 10]
  • For Flux pro models, the default is 3 and allowed values are within [2, 5]
  • For SDXL models, the default is 6 and allowed values are within [0, 20]
  • For SD1.5 models, the default is 7.5 and allowed values are within [0, 20]
maximum: 20
minimum: 0
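
The per-family defaults and ranges above can be captured in a small lookup (the family keys here are illustrative labels, not API values):

```python
GUIDANCE = {
    # family: (default, (min, max)) as listed above
    "flux-dev": (3.5, (0, 10)),
    "flux-pro": (3, (2, 5)),
    "sdxl": (6, (0, 20)),
    "sd1.5": (7.5, (0, 20)),
}

def clamp_guidance(family: str, value: float) -> float:
    """Clamp a requested guidance value into the family's allowed range."""
    _default, (lo, hi) = GUIDANCE[family]
    return min(max(value, lo), hi)
```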
height: optional number

The height of the generated images; must be a multiple of 8 (within [64, 2048], default: 512).

  • If model.type is sd-xl, sd-xl-lora, or sd-xl-composition, the height must be within [512, 2048]
  • If model.type is sd-1_5, the height must be within [64, 1024]
  • If model.type is flux.1.1-pro-ultra, you can use the aspectRatio parameter instead

maximum: 2048
minimum: 64
multipleOf: 8
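
A sketch of validating a dimension against these constraints; the per-type ranges mirror the text above, and the function name is hypothetical:

```python
DIM_RANGES = {
    # model.type: (min, max) per the constraints above
    "sd-xl": (512, 2048), "sd-xl-lora": (512, 2048),
    "sd-xl-composition": (512, 2048),
    "sd-1_5": (64, 1024),
}

def valid_dimension(value: int, model_type: str) -> bool:
    """Check a height/width: multiple of 8 and within the model's range."""
    lo, hi = DIM_RANGES.get(model_type, (64, 2048))  # general bounds otherwise
    return value % 8 == 0 and lo <= value <= hi
```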
hideResults: optional boolean

If set, generated assets will be hidden and not returned in the list of images of the inference or when listing assets (default: false)

image: optional string

Signed URL to display the input image

imageId: optional string

Asset id of the input image

intermediateImages: optional boolean

Enable or disable the intermediate images generation (default: false)

ipAdapterImage: optional string

Signed URL to display the IpAdapter image

ipAdapterImageId: optional string

Asset id of the input IpAdapter image

ipAdapterImageIds: optional array of string

Asset ids of the input IpAdapter images

ipAdapterImages: optional array of string

Signed URL to display the IpAdapter images

ipAdapterScale: optional number

IpAdapter scale factor (within [0.0, 1.0], default: 0.9).

maximum: 1
minimum: 0
ipAdapterScales: optional array of number

IpAdapter scale factors (within [0.0, 1.0], default: 0.9).

maximum: 1
minimum: 0
ipAdapterType: optional "character" or "style"

The type of IP Adapter model to use. Must be one of [style, character]; defaults to "style".

One of the following:
"character"
"style"
mask: optional string

Signed URL to display the mask image

maskId: optional string

Asset id of the mask image

modality: optional string

The modality associated with the control image used for the generation.

For models of SD1.5 family:

  • up to 3 modalities from canny, pose, depth, lines, seg, scribble, lineart, normal-map, illusion
  • or one of the following presets: character, landscape, city, interior.

For models of the SDXL family:

  • up to 3 modalities from canny, pose, depth, seg, illusion, scribble
  • or one of the following presets: character, landscape.

For models of the FLUX schnell or dev families:

  • one modality from: canny, tile, depth, blur, pose, gray, low-quality

Optionally, you can associate a value to these modalities or presets. The value must be within (0.0, 1.0].

Examples:

  • canny
  • depth:0.5,pose:1.0
  • canny:0.5,depth:0.5,lines:0.3
  • landscape
  • character:0.5
  • illusion:1

Note: if you use a value that is not supported by the model family, this will result in an error.
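
A minimal parser for this comma-separated modality:value syntax (the 1.0 default for bare names is an assumption):

```python
def parse_modalities(spec: str) -> dict[str, float]:
    """Parse 'depth:0.5,pose:1.0' or a bare 'canny' into {name: value}."""
    result = {}
    for part in spec.split(","):
        name, _, value = part.partition(":")
        result[name.strip()] = float(value) if value else 1.0
    return result
```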

modelEpoch: optional string

The epoch of the model to use for the inference. Only available for Flux Lora Trained models.

negativePrompt: optional string

The prompt not to guide the image generation; ignored when guidance < 1 (example: “((ugly face))”). For Flux-based models (not Fast-Flux), it requires negativePromptStrength > 0 and is active only for the txt2img, img2img, and controlnet inference types.

negativePromptStrength: optional number

Only applicable for flux-dev based models for txt2img, img2img, and controlnet inference types.

Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided.

maximum: 10
minimum: 0
numInferenceSteps: optional number

The number of denoising steps for each image generation (within [1, 150], default: 30)

maximum: 150
minimum: 1
numSamples: optional number

The number of images to generate (within [1, 128], default: 4)

maximum: 128
minimum: 1
referenceAdain: optional boolean

Whether to use reference AdaIN. Only for the “reference” inference type.

referenceAttn: optional boolean

Whether to use the reference query for self-attention’s context. Only for the “reference” inference type.

scheduler: optional "DDIMScheduler" or "DDPMScheduler" or "DEISMultistepScheduler" or 12 more

The scheduler to use to override the default configured for the model. See detailed documentation for more details.

One of the following:
"DDIMScheduler"
"DDPMScheduler"
"DEISMultistepScheduler"
"DPMSolverMultistepScheduler"
"DPMSolverSinglestepScheduler"
"EulerAncestralDiscreteScheduler"
"EulerDiscreteScheduler"
"HeunDiscreteScheduler"
"KDPM2AncestralDiscreteScheduler"
"KDPM2DiscreteScheduler"
"LCMScheduler"
"LMSDiscreteScheduler"
"PNDMScheduler"
"TCDScheduler"
"UniPCMultistepScheduler"
seed: optional string

Used to reproduce previous results. Default: randomly generated number.

maximum: 2147483647
minimum: 0
strength: optional number

Controls the noise intensity introduced to the input image, where a value of 1.0 completely erases the original image’s details. Available for img2img and inpainting. (within [0.01, 1.0], default: 0.75)

maximum: 1
minimum: 0.01
styleFidelity: optional number

If styleFidelity = 1.0, the control is more important; if styleFidelity = 0.0, the prompt is more important; otherwise balanced. Only for the “reference” inference type.

maximum: 1
minimum: 0
width: optional number

The width of the generated images; must be a multiple of 8 (within [64, 2048], default: 512).

  • If model.type is sd-xl, sd-xl-lora, or sd-xl-composition, the width must be within [512, 2048]
  • If model.type is sd-1_5, the width must be within [64, 1024]
  • If model.type is flux.1.1-pro-ultra, you can use the aspectRatio parameter instead

maximum: 2048
minimum: 64
multipleOf: 8
job: optional object { createdAt, jobId, jobType, 8 more }

The job associated with the asset

createdAt: string

The job creation date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

jobId: string

The job ID (example: “job_ocZCnG1Df35XRL1QyCZSRxAG8”)

jobType: "assets-download" or "canvas-export" or "caption" or 36 more

The type of job

One of the following:
"assets-download"
"canvas-export"
"caption"
"caption-llava"
"custom"
"describe-style"
"detection"
"embed"
"flux"
"flux-model-training"
"generate-prompt"
"image-generation"
"image-prompt-editing"
"inference"
"mesh-preview-rendering"
"model-download"
"model-import"
"model-training"
"musubi-model-training"
"openai-image-generation"
"patch-image"
"pixelate"
"reframe"
"remove-background"
"repaint"
"restyle"
"segment"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"skybox-upscale-360"
"texture"
"translate"
"upload"
"upscale"
"upscale-skybox"
"upscale-texture"
"vectorize"
"workflow"
metadata: object { assetIds, error, flow, 6 more }

Metadata of the job with some additional information

assetIds: optional array of string

List of produced assets for this job

error: optional string

Eventual error for the job

flow: optional array of object { id, status, type, 15 more }

The flow of the job. Only available for workflow jobs.

id: string

The id of the node.

status: "failure" or "pending" or "processing" or 2 more

The status of the node. Only available for WorkflowJob nodes.

One of the following:
"failure"
"pending"
"processing"
"skipped"
"success"
type: "custom-model" or "for-each" or "generate-prompt" or 7 more

The type of the job for the node.

One of the following:
"custom-model"
"for-each"
"generate-prompt"
"list"
"logic"
"model"
"remove-background"
"transform"
"user-approval"
"workflow"
assets: optional array of object { assetId, url }

List of produced assets for this node.

assetId: string
url: string
count: optional number

Fixed number of iterations for a ForEach node. When set, the loop runs exactly count times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes.

dependsOn: optional array of string

The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes.

includeOutputsInWorkflowJob: optional true

If true, the outputs of this node will be included in the workflow job’s final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to true or left undefined.

inputs: optional array of object { name, type, allowedValues, 26 more }

The inputs of the node.

name: string

The name that must be used to call the model through the API

type: "boolean" or "file" or "file_array" or 7 more

The data type of the input

One of the following:
"boolean"
"file"
"file_array"
"inputs_array"
"model"
"model_array"
"number"
"number_array"
"string"
"string_array"
allowedValues: optional array of unknown

The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown.

backgroundBehavior: optional "opaque" or "transparent"

Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`.

One of the following:
"opaque"
"transparent"
color: optional boolean

Whether the input is a color or not. Only available for `string` input type.

costImpact: optional boolean

Whether this input affects the model’s cost calculation

default: optional unknown

The default value for the input

description: optional string

Help text displayed in the UI to provide additional information about the input

group: optional string

Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI.

hint: optional string

Hint text displayed in the UI as a tooltip to guide the user

inputs: optional array of map[unknown]

The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs.

items: optional array of array of object { name, type, allowedValues, 25 more }

The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type.

name: string

The name that must be used to call the model through the API

type: "boolean" or "file" or "file_array" or 7 more

The data type of the input

One of the following:
"boolean"
"file"
"file_array"
"inputs_array"
"model"
"model_array"
"number"
"number_array"
"string"
"string_array"
allowedValues: optional array of unknown

The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown.

backgroundBehavior: optional "opaque" or "transparent"

Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`.

One of the following:
"opaque"
"transparent"
color: optional boolean

Whether the input is a color or not. Only available for `string` input type.

costImpact: optional boolean

Whether this input affects the model’s cost calculation

default: optional unknown

The default value for the input

description: optional string

Help text displayed in the UI to provide additional information about the input

group: optional string

Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI.

hint: optional string

Hint text displayed in the UI as a tooltip to guide the user

inputs: optional array of map[unknown]

The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs.

kind: optional "3d" or "audio" or "document" or 4 more

The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the URL includes the `data:<kind>,` prefix.

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
label: optional string

The label displayed in the UI for this input

maskFrom: optional string

The name of the file input field to use as the mask source

max: optional number

The maximum allowed value. Only available for `number` and `array` input types.

maxLength: optional number

The maximum allowed length for `string` inputs. Also applies to each item in `string_array`.

maxSize: optional number

The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time.

min: optional number

The minimum allowed value. Only available for `number` and array input types.

minLength: optional number

The minimum allowed length for string inputs. Also applies to each item in `string_array`.

modelTypes: optional array of "custom" or "elevenlabs-voice" or "flux.1" or 34 more

The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
parent: optional boolean

Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types.

For `file_array`, the parent asset is the first item in the array.

placeholder: optional string

Placeholder text for the input. Only available for `string` input type.

prompt: optional boolean

Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type.

promptSpark: optional boolean

Whether the input is used with prompt spark. Only available for `string` input type.

ref: optional object { conditional, equal, name, node }

The reference to another input or output of the same workflow. Must have at least one of node or conditional.

conditional: optional array of string

The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes.

equal: optional string

This is the desired node output value if ref is an if/else node.

name: optional string

The name of the input or output to reference. If the type is ‘workflow’, the name of the workflow input is required. If the type is ‘node’, the name is not mandatory, except if you want all outputs of the node; to get all outputs of a node, use the name ‘all’.

node: optional string

The node id or ‘workflow’ if the source is a workflow input.
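For example (hypothetical node ids), one input wired to a workflow-level input and another reading all outputs of a previous node could be declared as:

```json
[
  {
    "name": "prompt",
    "type": "string",
    "ref": { "node": "workflow", "name": "prompt" }
  },
  {
    "name": "image",
    "type": "file",
    "ref": { "node": "generate-1", "name": "all" }
  }
]
```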

required: optional object { always, conditionalValues, ifDefined, ifNotDefined }

Set of rules that describes when this input is required:

  • `always`: Input is always required
  • `ifNotDefined`: Input is required when another specified input is not defined
  • `ifDefined`: Input is required when another specified input is defined
  • `conditionalValues`: Input is required when another input has a specific value

By default, the input is not required.
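As an illustration (hypothetical input names), an input that is required only when another input `mask` is defined could be declared as:

```json
{
  "name": "maskPrompt",
  "type": "string",
  "required": {
    "ifDefined": {
      "mask": "maskPrompt is required when a mask is provided"
    }
  }
}
```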

always: optional boolean

Whether the input is always required

conditionalValues: optional unknown

Makes this input required when another input has a specific value:

  • Key: name of the input to check
  • Value: operation and allowed values that trigger the requirement
ifDefined: optional unknown

Makes this input required when another input is defined:

  • Key: name of the input that must be defined
  • Value: message to display when this input is required
ifNotDefined: optional unknown

Makes this input required when another input is not defined:

  • Key: name of the input that must be undefined
  • Value: message to display when this input is required
step: optional number

The step increment for numeric inputs. Only available for `number` input type.

minimum: 1
value: optional unknown

The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob.

kind: optional "3d" or "audio" or "document" or 4 more

The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the URL includes the `data:<kind>,` prefix.

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
label: optional string

The label displayed in the UI for this input

maskFrom: optional string

The name of the file input field to use as the mask source

max: optional number

The maximum allowed value. Only available for `number` and `array` input types.

maxLength: optional number

The maximum allowed length for `string` inputs. Also applies to each item in `string_array`.

maxSize: optional number

The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time.

min: optional number

The minimum allowed value. Only available for `number` and array input types.

minLength: optional number

The minimum allowed length for string inputs. Also applies to each item in `string_array`.

modelTypes: optional array of "custom" or "elevenlabs-voice" or "flux.1" or 34 more

The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
parent: optional boolean

Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types.

For `file_array`, the parent asset is the first item in the array.

placeholder: optional string

Placeholder text for the input. Only available for `string` input type.

prompt: optional boolean

Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type.

promptSpark: optional boolean

Whether the input is used with prompt spark. Only available for `string` input type.

ref: optional object { conditional, equal, name, node }

The reference to another input or output of the same workflow. Must have at least one of node or conditional.

conditional: optional array of string

The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes.

equal: optional string

This is the desired node output value if ref is an if/else node.

name: optional string

The name of the input or output to reference. If the type is ‘workflow’, the name of the workflow input is required. If the type is ‘node’, the name is not mandatory, except if you want all outputs of the node; to get all outputs of a node, use the name ‘all’.

node: optional string

The node id or ‘workflow’ if the source is a workflow input.

required: optional object { always, conditionalValues, ifDefined, ifNotDefined }

Set of rules that describes when this input is required:

  • `always`: Input is always required
  • `ifNotDefined`: Input is required when another specified input is not defined
  • `ifDefined`: Input is required when another specified input is defined
  • `conditionalValues`: Input is required when another input has a specific value

By default, the input is not required.

always: optional boolean

Whether the input is always required

conditionalValues: optional unknown

Makes this input required when another input has a specific value:

  • Key: name of the input to check
  • Value: operation and allowed values that trigger the requirement
ifDefined: optional unknown

Makes this input required when another input is defined:

  • Key: name of the input that must be defined
  • Value: message to display when this input is required
ifNotDefined: optional unknown

Makes this input required when another input is not defined:

  • Key: name of the input that must be undefined
  • Value: message to display when this input is required
step: optional number

The step increment for numeric inputs. Only available for `number` input type.

minimum: 1
value: optional unknown

The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob.

items: optional array of string

Statically-configured items for a List node. The node outputs this array as-is when executed. Only available for List nodes. The values can be strings, numbers, or asset IDs.

iterationIndex: optional number

Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes.

jobId: optional string

If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available for nodes that have started; a node that is “Pending” in a running workflow job has not started yet.

logic: optional object { cases, default, transform }

The logic of the node. Only available for logic nodes.

cases: optional array of object { condition, value }

The cases of the logic. Only available for if/else nodes.

condition: string
value: string
default: optional string

The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes.

transform: optional string

The transform of the logic. Only available for transform nodes.
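For example, an if/else logic node could route execution to one of two branches (the node ids are hypothetical, and the exact condition syntax is an assumption for illustration):

```json
{
  "id": "route-1",
  "type": "logic",
  "logicType": "if-else",
  "logic": {
    "cases": [
      { "condition": "kind == 'image'", "value": "upscale-node" }
    ],
    "default": "skip-node"
  }
}
```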

logicType: optional "if-else"

The type of the logic for the node. Only available for logic nodes.

loopBodyNodeIds: optional array of string

IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes.

loopNodeId: optional string

ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes.

modelId: optional string

The model id for the node. Mainly used for custom model tasks.

output: optional unknown

The output of the node. Only available for logic nodes.

workflowId: optional string

The workflow id for the node. Mainly used for workflow tasks.

hint: optional string

Actionable hint for the user explaining what went wrong and how to resolve it.

input: optional map[unknown]

The inputs for the job

output: optional map[unknown]

May contain the output of the job for specific custom model jobs. Only available for custom models that generate non-asset outputs. Example: LLM text results.

outputModelId: optional string

For voice-clone jobs: the ID of the model being trained.

workflowId: optional string

The workflow ID of the job if job is part of a workflow.

workflowJobId: optional string

The workflow job ID of the job if job is part of a workflow job.

progress: number

Progress of the job (between 0 and 1)

status: "canceled" or "failure" or "finalizing" or 5 more

The current status of the job

One of the following:
"canceled"
"failure"
"finalizing"
"in-progress"
"pending"
"queued"
"success"
"warming-up"
statusHistory: array of object { date, status }

The history of the different statuses the job went through, with the ISO string date of when the job reached each status.

date: string
status: "canceled" or "failure" or "finalizing" or 5 more
One of the following:
"canceled"
"failure"
"finalizing"
"in-progress"
"pending"
"queued"
"success"
"warming-up"
updatedAt: string

The job last update date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

authorId: optional string

The author user ID (example: “dcf121faaa1a0a0bbbd9ca1b73d62aea”)

billing: optional object { cuCost, cuDiscount }

The billing of the job

cuCost: number
cuDiscount: number
ownerId: optional string

The owner ID (example: “team_U3Qmc8PCdWXwAQJ4Dvw4tV6D”)

List

curl https://api.cloud.scenario.com/v1/models/$MODEL_ID/examples \
    -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET"
{
  "examples": [
    {
      "asset": {
        "id": "id",
        "authorId": "authorId",
        "collectionIds": [
          "string"
        ],
        "createdAt": "createdAt",
        "editCapabilities": [
          "DETECTION"
        ],
        "kind": "3d",
        "metadata": {
          "kind": "3d",
          "type": "3d-texture",
          "angular": 0,
          "aspectRatio": "aspectRatio",
          "backgroundOpacity": 0,
          "baseModelId": "baseModelId",
          "bbox": [
            0,
            0,
            0,
            0
          ],
          "betterQuality": true,
          "cannyStructureImage": "cannyStructureImage",
          "clustering": true,
          "colorCorrection": true,
          "colorMode": "colorMode",
          "colorPrecision": 0,
          "concepts": [
            {
              "modelId": "modelId",
              "scale": -2,
              "modelEpoch": "modelEpoch"
            }
          ],
          "contours": [
            [
              [
                [
                  0
                ]
              ]
            ]
          ],
          "controlEnd": 0,
          "copiedAt": "copiedAt",
          "cornerThreshold": 0,
          "creativity": 0,
          "creativityDecay": 0,
          "defaultParameters": true,
          "depthFidelity": 0,
          "depthImage": "depthImage",
          "detailsLevel": -50,
          "dilate": 0,
          "factor": 0,
          "filterSpeckle": 0,
          "fractality": 0,
          "geometryEnforcement": 0,
          "guidance": 0,
          "halfMode": true,
          "hdr": 0,
          "height": 0,
          "highThreshold": 0,
          "horizontalExpansionRatio": 1,
          "image": "image",
          "imageFidelity": 0,
          "imageType": "seamfull",
          "inferenceId": "inferenceId",
          "inputFidelity": "high",
          "inputLocation": "bottom",
          "invert": true,
          "keypointThreshold": 0,
          "layerDifference": 0,
          "lengthThreshold": 0,
          "lockExpiresAt": "lockExpiresAt",
          "lowThreshold": 0,
          "mask": "mask",
          "maxIterations": 0,
          "maxThreshold": 0,
          "minThreshold": 0,
          "modality": "canny",
          "mode": "mode",
          "modelId": "modelId",
          "modelType": "custom",
          "name": "name",
          "nbMasks": 0,
          "negativePrompt": "negativePrompt",
          "negativePromptStrength": 0,
          "numInferenceSteps": 5,
          "numOutputs": 1,
          "originalAssetId": "originalAssetId",
          "outputIndex": 0,
          "overlapPercentage": 0,
          "overrideEmbeddings": true,
          "parentId": "parentId",
          "parentJobId": "parentJobId",
          "pathPrecision": 0,
          "points": [
            [
              0
            ],
            [
              0
            ],
            [
              0
            ]
          ],
          "polished": 0,
          "preset": "preset",
          "progressPercent": 0,
          "prompt": "prompt",
          "promptFidelity": 0,
          "raised": 0,
          "referenceImages": [
            "string"
          ],
          "refinementSteps": 0,
          "removeBackground": true,
          "resizeOption": 0.1,
          "resultContours": true,
          "resultImage": true,
          "resultMask": true,
          "rootParentId": "rootParentId",
          "saveFlipbook": true,
          "scalingFactor": 1,
          "scheduler": "scheduler",
          "seed": "seed",
          "sharpen": true,
          "shiny": 0,
          "size": 0,
          "sketch": true,
          "sourceProjectId": "sourceProjectId",
          "spliceThreshold": 0,
          "strength": 0,
          "structureFidelity": 0,
          "structureImage": "structureImage",
          "style": "3d-cartoon",
          "styleFidelity": 0,
          "styleImages": [
            "string"
          ],
          "styleImagesFidelity": 0,
          "targetHeight": 0,
          "targetWidth": 1024,
          "text": "text",
          "texture": "texture",
          "thumbnail": {
            "assetId": "assetId",
            "url": "url"
          },
          "tileStyle": true,
          "trainingImage": true,
          "verticalExpansionRatio": 1,
          "width": 1024
        },
        "mimeType": "mimeType",
        "ownerId": "ownerId",
        "privacy": "private",
        "properties": {
          "size": 0,
          "animationFrameCount": 0,
          "bitrate": 0,
          "boneCount": 0,
          "channels": 0,
          "classification": "effect",
          "codecName": "codecName",
          "description": "description",
          "dimensions": [
            0,
            0,
            0
          ],
          "duration": 0,
          "faceCount": 0,
          "format": "format",
          "frameRate": 0,
          "hasAnimations": true,
          "hasNormals": true,
          "hasSkeleton": true,
          "hasUVs": true,
          "height": 0,
          "nbFrames": 0,
          "sampleRate": 0,
          "transcription": {
            "text": "text"
          },
          "vertexCount": 0,
          "width": 0
        },
        "source": "3d23d",
        "status": "error",
        "tags": [
          "string"
        ],
        "updatedAt": "updatedAt",
        "url": "url",
        "automaticCaptioning": "automaticCaptioning",
        "description": "description",
        "embedding": [
          0
        ],
        "firstFrame": {
          "assetId": "assetId",
          "url": "url"
        },
        "isHidden": true,
        "lastFrame": {
          "assetId": "assetId",
          "url": "url"
        },
        "nsfw": [
          "string"
        ],
        "originalFileUrl": "originalFileUrl",
        "outputIndex": 0,
        "preview": {
          "assetId": "assetId",
          "url": "url"
        },
        "thumbnail": {
          "assetId": "assetId",
          "url": "url"
        }
      },
      "modelId": "modelId",
      "inferenceId": "inferenceId",
      "inferenceParameters": {
        "prompt": "prompt",
        "type": "controlnet",
        "aspectRatio": "16:9",
        "baseModelId": "baseModelId",
        "concepts": [
          {
            "modelId": "modelId",
            "scale": -2,
            "modelEpoch": "modelEpoch"
          }
        ],
        "controlEnd": 0.1,
        "controlImage": "controlImage",
        "controlImageId": "controlImageId",
        "controlStart": 0,
        "disableMerging": true,
        "disableModalityDetection": true,
        "guidance": 0,
        "height": 64,
        "hideResults": true,
        "image": "image",
        "imageId": "imageId",
        "intermediateImages": true,
        "ipAdapterImage": "ipAdapterImage",
        "ipAdapterImageId": "ipAdapterImageId",
        "ipAdapterImageIds": [
          "string"
        ],
        "ipAdapterImages": [
          "string"
        ],
        "ipAdapterScale": 0,
        "ipAdapterScales": [
          0
        ],
        "ipAdapterType": "character",
        "mask": "mask",
        "maskId": "maskId",
        "modality": "modality",
        "modelEpoch": "modelEpoch",
        "negativePrompt": "negativePrompt",
        "negativePromptStrength": 0,
        "numInferenceSteps": 1,
        "numSamples": 1,
        "referenceAdain": true,
        "referenceAttn": true,
        "scheduler": "DDIMScheduler",
        "seed": "seed",
        "strength": 0.01,
        "styleFidelity": 0,
        "width": 64
      },
      "job": {
        "createdAt": "createdAt",
        "jobId": "jobId",
        "jobType": "assets-download",
        "metadata": {
          "assetIds": [
            "string"
          ],
          "error": "error",
          "flow": [
            {
              "id": "id",
              "status": "failure",
              "type": "custom-model",
              "assets": [
                {
                  "assetId": "assetId",
                  "url": "url"
                }
              ],
              "count": 0,
              "dependsOn": [
                "string"
              ],
              "includeOutputsInWorkflowJob": true,
              "inputs": [
                {
                  "name": "name",
                  "type": "boolean",
                  "allowedValues": [
                    {}
                  ],
                  "backgroundBehavior": "opaque",
                  "color": true,
                  "costImpact": true,
                  "default": {},
                  "description": "description",
                  "group": "group",
                  "hint": "hint",
                  "inputs": [
                    {
                      "foo": "bar"
                    }
                  ],
                  "items": [
                    [
                      {
                        "name": "name",
                        "type": "boolean",
                        "allowedValues": [
                          {}
                        ],
                        "backgroundBehavior": "opaque",
                        "color": true,
                        "costImpact": true,
                        "default": {},
                        "description": "description",
                        "group": "group",
                        "hint": "hint",
                        "inputs": [
                          {
                            "foo": "bar"
                          }
                        ],
                        "kind": "3d",
                        "label": "label",
                        "maskFrom": "maskFrom",
                        "max": 0,
                        "maxLength": 0,
                        "maxSize": 0,
                        "min": 0,
                        "minLength": 0,
                        "modelTypes": [
                          "custom"
                        ],
                        "parent": true,
                        "placeholder": "placeholder",
                        "prompt": true,
                        "promptSpark": true,
                        "ref": {
                          "conditional": [
                            "string"
                          ],
                          "equal": "equal",
                          "name": "name",
                          "node": "node"
                        },
                        "required": {
                          "always": true,
                          "conditionalValues": {},
                          "ifDefined": {},
                          "ifNotDefined": {}
                        },
                        "step": 1,
                        "value": {}
                      }
                    ]
                  ],
                  "kind": "3d",
                  "label": "label",
                  "maskFrom": "maskFrom",
                  "max": 0,
                  "maxLength": 0,
                  "maxSize": 0,
                  "min": 0,
                  "minLength": 0,
                  "modelTypes": [
                    "custom"
                  ],
                  "parent": true,
                  "placeholder": "placeholder",
                  "prompt": true,
                  "promptSpark": true,
                  "ref": {
                    "conditional": [
                      "string"
                    ],
                    "equal": "equal",
                    "name": "name",
                    "node": "node"
                  },
                  "required": {
                    "always": true,
                    "conditionalValues": {},
                    "ifDefined": {},
                    "ifNotDefined": {}
                  },
                  "step": 1,
                  "value": {}
                }
              ],
              "items": [
                "string"
              ],
              "iterationIndex": 0,
              "jobId": "jobId",
              "logic": {
                "cases": [
                  {
                    "condition": "condition",
                    "value": "value"
                  }
                ],
                "default": "default",
                "transform": "transform"
              },
              "logicType": "if-else",
              "loopBodyNodeIds": [
                "string"
              ],
              "loopNodeId": "loopNodeId",
              "modelId": "modelId",
              "output": {},
              "workflowId": "workflowId"
            }
          ],
          "hint": "hint",
          "input": {
            "foo": "bar"
          },
          "output": {
            "foo": "bar"
          },
          "outputModelId": "outputModelId",
          "workflowId": "workflowId",
          "workflowJobId": "workflowJobId"
        },
        "progress": 0,
        "status": "canceled",
        "statusHistory": [
          {
            "date": "date",
            "status": "canceled"
          }
        ],
        "updatedAt": "updatedAt",
        "authorId": "authorId",
        "billing": {
          "cuCost": 0,
          "cuDiscount": 0
        },
        "ownerId": "ownerId"
      }
    }
  ]
}
Returns Examples
{
  "examples": [
    {
      "asset": {
        "id": "id",
        "authorId": "authorId",
        "collectionIds": [
          "string"
        ],
        "createdAt": "createdAt",
        "editCapabilities": [
          "DETECTION"
        ],
        "kind": "3d",
        "metadata": {
          "kind": "3d",
          "type": "3d-texture",
          "angular": 0,
          "aspectRatio": "aspectRatio",
          "backgroundOpacity": 0,
          "baseModelId": "baseModelId",
          "bbox": [
            0,
            0,
            0,
            0
          ],
          "betterQuality": true,
          "cannyStructureImage": "cannyStructureImage",
          "clustering": true,
          "colorCorrection": true,
          "colorMode": "colorMode",
          "colorPrecision": 0,
          "concepts": [
            {
              "modelId": "modelId",
              "scale": -2,
              "modelEpoch": "modelEpoch"
            }
          ],
          "contours": [
            [
              [
                [
                  0
                ]
              ]
            ]
          ],
          "controlEnd": 0,
          "copiedAt": "copiedAt",
          "cornerThreshold": 0,
          "creativity": 0,
          "creativityDecay": 0,
          "defaultParameters": true,
          "depthFidelity": 0,
          "depthImage": "depthImage",
          "detailsLevel": -50,
          "dilate": 0,
          "factor": 0,
          "filterSpeckle": 0,
          "fractality": 0,
          "geometryEnforcement": 0,
          "guidance": 0,
          "halfMode": true,
          "hdr": 0,
          "height": 0,
          "highThreshold": 0,
          "horizontalExpansionRatio": 1,
          "image": "image",
          "imageFidelity": 0,
          "imageType": "seamfull",
          "inferenceId": "inferenceId",
          "inputFidelity": "high",
          "inputLocation": "bottom",
          "invert": true,
          "keypointThreshold": 0,
          "layerDifference": 0,
          "lengthThreshold": 0,
          "lockExpiresAt": "lockExpiresAt",
          "lowThreshold": 0,
          "mask": "mask",
          "maxIterations": 0,
          "maxThreshold": 0,
          "minThreshold": 0,
          "modality": "canny",
          "mode": "mode",
          "modelId": "modelId",
          "modelType": "custom",
          "name": "name",
          "nbMasks": 0,
          "negativePrompt": "negativePrompt",
          "negativePromptStrength": 0,
          "numInferenceSteps": 5,
          "numOutputs": 1,
          "originalAssetId": "originalAssetId",
          "outputIndex": 0,
          "overlapPercentage": 0,
          "overrideEmbeddings": true,
          "parentId": "parentId",
          "parentJobId": "parentJobId",
          "pathPrecision": 0,
          "points": [
            [
              0
            ],
            [
              0
            ],
            [
              0
            ]
          ],
          "polished": 0,
          "preset": "preset",
          "progressPercent": 0,
          "prompt": "prompt",
          "promptFidelity": 0,
          "raised": 0,
          "referenceImages": [
            "string"
          ],
          "refinementSteps": 0,
          "removeBackground": true,
          "resizeOption": 0.1,
          "resultContours": true,
          "resultImage": true,
          "resultMask": true,
          "rootParentId": "rootParentId",
          "saveFlipbook": true,
          "scalingFactor": 1,
          "scheduler": "scheduler",
          "seed": "seed",
          "sharpen": true,
          "shiny": 0,
          "size": 0,
          "sketch": true,
          "sourceProjectId": "sourceProjectId",
          "spliceThreshold": 0,
          "strength": 0,
          "structureFidelity": 0,
          "structureImage": "structureImage",
          "style": "3d-cartoon",
          "styleFidelity": 0,
          "styleImages": [
            "string"
          ],
          "styleImagesFidelity": 0,
          "targetHeight": 0,
          "targetWidth": 1024,
          "text": "text",
          "texture": "texture",
          "thumbnail": {
            "assetId": "assetId",
            "url": "url"
          },
          "tileStyle": true,
          "trainingImage": true,
          "verticalExpansionRatio": 1,
          "width": 1024
        },
        "mimeType": "mimeType",
        "ownerId": "ownerId",
        "privacy": "private",
        "properties": {
          "size": 0,
          "animationFrameCount": 0,
          "bitrate": 0,
          "boneCount": 0,
          "channels": 0,
          "classification": "effect",
          "codecName": "codecName",
          "description": "description",
          "dimensions": [
            0,
            0,
            0
          ],
          "duration": 0,
          "faceCount": 0,
          "format": "format",
          "frameRate": 0,
          "hasAnimations": true,
          "hasNormals": true,
          "hasSkeleton": true,
          "hasUVs": true,
          "height": 0,
          "nbFrames": 0,
          "sampleRate": 0,
          "transcription": {
            "text": "text"
          },
          "vertexCount": 0,
          "width": 0
        },
        "source": "3d23d",
        "status": "error",
        "tags": [
          "string"
        ],
        "updatedAt": "updatedAt",
        "url": "url",
        "automaticCaptioning": "automaticCaptioning",
        "description": "description",
        "embedding": [
          0
        ],
        "firstFrame": {
          "assetId": "assetId",
          "url": "url"
        },
        "isHidden": true,
        "lastFrame": {
          "assetId": "assetId",
          "url": "url"
        },
        "nsfw": [
          "string"
        ],
        "originalFileUrl": "originalFileUrl",
        "outputIndex": 0,
        "preview": {
          "assetId": "assetId",
          "url": "url"
        },
        "thumbnail": {
          "assetId": "assetId",
          "url": "url"
        }
      },
      "modelId": "modelId",
      "inferenceId": "inferenceId",
      "inferenceParameters": {
        "prompt": "prompt",
        "type": "controlnet",
        "aspectRatio": "16:9",
        "baseModelId": "baseModelId",
        "concepts": [
          {
            "modelId": "modelId",
            "scale": -2,
            "modelEpoch": "modelEpoch"
          }
        ],
        "controlEnd": 0.1,
        "controlImage": "controlImage",
        "controlImageId": "controlImageId",
        "controlStart": 0,
        "disableMerging": true,
        "disableModalityDetection": true,
        "guidance": 0,
        "height": 64,
        "hideResults": true,
        "image": "image",
        "imageId": "imageId",
        "intermediateImages": true,
        "ipAdapterImage": "ipAdapterImage",
        "ipAdapterImageId": "ipAdapterImageId",
        "ipAdapterImageIds": [
          "string"
        ],
        "ipAdapterImages": [
          "string"
        ],
        "ipAdapterScale": 0,
        "ipAdapterScales": [
          0
        ],
        "ipAdapterType": "character",
        "mask": "mask",
        "maskId": "maskId",
        "modality": "modality",
        "modelEpoch": "modelEpoch",
        "negativePrompt": "negativePrompt",
        "negativePromptStrength": 0,
        "numInferenceSteps": 1,
        "numSamples": 1,
        "referenceAdain": true,
        "referenceAttn": true,
        "scheduler": "DDIMScheduler",
        "seed": "seed",
        "strength": 0.01,
        "styleFidelity": 0,
        "width": 64
      },
      "job": {
        "createdAt": "createdAt",
        "jobId": "jobId",
        "jobType": "assets-download",
        "metadata": {
          "assetIds": [
            "string"
          ],
          "error": "error",
          "flow": [
            {
              "id": "id",
              "status": "failure",
              "type": "custom-model",
              "assets": [
                {
                  "assetId": "assetId",
                  "url": "url"
                }
              ],
              "count": 0,
              "dependsOn": [
                "string"
              ],
              "includeOutputsInWorkflowJob": true,
              "inputs": [
                {
                  "name": "name",
                  "type": "boolean",
                  "allowedValues": [
                    {}
                  ],
                  "backgroundBehavior": "opaque",
                  "color": true,
                  "costImpact": true,
                  "default": {},
                  "description": "description",
                  "group": "group",
                  "hint": "hint",
                  "inputs": [
                    {
                      "foo": "bar"
                    }
                  ],
                  "items": [
                    [
                      {
                        "name": "name",
                        "type": "boolean",
                        "allowedValues": [
                          {}
                        ],
                        "backgroundBehavior": "opaque",
                        "color": true,
                        "costImpact": true,
                        "default": {},
                        "description": "description",
                        "group": "group",
                        "hint": "hint",
                        "inputs": [
                          {
                            "foo": "bar"
                          }
                        ],
                        "kind": "3d",
                        "label": "label",
                        "maskFrom": "maskFrom",
                        "max": 0,
                        "maxLength": 0,
                        "maxSize": 0,
                        "min": 0,
                        "minLength": 0,
                        "modelTypes": [
                          "custom"
                        ],
                        "parent": true,
                        "placeholder": "placeholder",
                        "prompt": true,
                        "promptSpark": true,
                        "ref": {
                          "conditional": [
                            "string"
                          ],
                          "equal": "equal",
                          "name": "name",
                          "node": "node"
                        },
                        "required": {
                          "always": true,
                          "conditionalValues": {},
                          "ifDefined": {},
                          "ifNotDefined": {}
                        },
                        "step": 1,
                        "value": {}
                      }
                    ]
                  ],
                  "kind": "3d",
                  "label": "label",
                  "maskFrom": "maskFrom",
                  "max": 0,
                  "maxLength": 0,
                  "maxSize": 0,
                  "min": 0,
                  "minLength": 0,
                  "modelTypes": [
                    "custom"
                  ],
                  "parent": true,
                  "placeholder": "placeholder",
                  "prompt": true,
                  "promptSpark": true,
                  "ref": {
                    "conditional": [
                      "string"
                    ],
                    "equal": "equal",
                    "name": "name",
                    "node": "node"
                  },
                  "required": {
                    "always": true,
                    "conditionalValues": {},
                    "ifDefined": {},
                    "ifNotDefined": {}
                  },
                  "step": 1,
                  "value": {}
                }
              ],
              "items": [
                "string"
              ],
              "iterationIndex": 0,
              "jobId": "jobId",
              "logic": {
                "cases": [
                  {
                    "condition": "condition",
                    "value": "value"
                  }
                ],
                "default": "default",
                "transform": "transform"
              },
              "logicType": "if-else",
              "loopBodyNodeIds": [
                "string"
              ],
              "loopNodeId": "loopNodeId",
              "modelId": "modelId",
              "output": {},
              "workflowId": "workflowId"
            }
          ],
          "hint": "hint",
          "input": {
            "foo": "bar"
          },
          "output": {
            "foo": "bar"
          },
          "outputModelId": "outputModelId",
          "workflowId": "workflowId",
          "workflowJobId": "workflowJobId"
        },
        "progress": 0,
        "status": "canceled",
        "statusHistory": [
          {
            "date": "date",
            "status": "canceled"
          }
        ],
        "updatedAt": "updatedAt",
        "authorId": "authorId",
        "billing": {
          "cuCost": 0,
          "cuDiscount": 0
        },
        "ownerId": "ownerId"
      }
    }
  ]
}
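As a sketch of how a request to this endpoint might be built, the helper below assembles the `GET /models/{modelId}/examples` URL, mapping the optional `originalAssets` query parameter described above. The base URL shown is a placeholder, and sending the request with an `Authorization` header is an assumption about this API's auth scheme, not something this page confirms.

```python
from urllib.parse import quote, urlencode

def build_examples_url(base_url, model_id, original_assets=None):
    """Build the request URL for GET /models/{modelId}/examples.

    `original_assets` maps to the optional `originalAssets` query
    parameter; when true, the API returns the original asset without
    transformation.
    """
    url = f"{base_url}/models/{quote(model_id, safe='')}/examples"
    if original_assets is not None:
        # JSON booleans are lowercase in the query string: true / false.
        url += "?" + urlencode({"originalAssets": str(bool(original_assets)).lower()})
    return url

# Hypothetical base URL -- substitute the real API host.
url = build_examples_url("https://api.example.com/v1", "model_123", original_assets=True)
# url == "https://api.example.com/v1/models/model_123/examples?originalAssets=true"
```

The URL can then be fetched with any HTTP client (likely with a bearer token); per the schema above, the response body is an object whose `examples` array holds entries with `asset`, `modelId`, `inferenceId`, `inferenceParameters`, and `job` fields, matching the "Returns Examples" JSON shown here.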