Usages

List
usages.list(**kwargs: UsageListParams) -> UsageListResponse
GET/usages
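A minimal call sketch. The client class, import path, and the parameter names in `kwargs` are assumptions for illustration; only `usages.list(**kwargs) -> UsageListResponse` (GET /usages) is documented here.

```python
# Hypothetical request sketch: parameter names below are assumed,
# not taken from this reference.
kwargs = {
    "start_date": "2024-01-01",  # hypothetical UsageListParams field
    "end_date": "2024-01-31",    # hypothetical UsageListParams field
}

# from scenario import Scenario            # assumed import
# client = Scenario(api_key="sk-...")      # assumed constructor
# response = client.usages.list(**kwargs)  # -> UsageListResponse
# for activity in response.activity or []:
#     print(activity.action, activity.data.time)
```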
Models
class UsageListResponse:
activity: Optional[List[Activity]]
action: Literal["asset", "asset-privacy", "assistant-message", 94 more]

The action name

One of the following:
"asset"
"asset-privacy"
"assistant-message"
"background-removal"
"byok-remove-project-provider"
"byok-remove-provider"
"byok-set-project-provider"
"byok-set-provider"
"captioning"
"collection"
"collection-assets"
"collection-models"
"controlnet"
"controlnet-img2img"
"controlnet-inpaint"
"controlnet-ip-adapter"
"controlnet-texture"
"copy-asset"
"copy-model"
"creative-unit-cost"
"creative-unit-discount"
"custom"
"custom-asset-created"
"delete-asset"
"delete-collection"
"delete-collection-assets"
"delete-collection-models"
"delete-inference-image"
"delete-model"
"delete-model-preset"
"delete-oscu-auto-refill"
"delete-project-member"
"delete-subscription"
"delete-team-api-key"
"delete-team-invitations"
"delete-team-member"
"delete-training-images"
"describe-style"
"detection"
"disable-project-model"
"disable-team-model"
"download-assets"
"download-model"
"embed"
"enable-project-model"
"enable-team-model"
"generative-fill"
"image-prompt-editing"
"images-generation"
"img2img"
"img2img-ip-adapter"
"img2img-texture"
"inference"
"inpaint"
"inpaint-ip-adapter"
"model"
"model-preset"
"models-training"
"oscu"
"patch"
"pixelate"
"project"
"project-member"
"reframe"
"repaint"
"restyle"
"segmentation"
"skybox-base-360"
"skybox-upscale-360"
"start-train"
"subscription"
"subscription-seats"
"tag-asset"
"tag-model"
"team-api-key"
"team-member"
"texture"
"train-succeeded"
"training-images-to-model"
"transfer-model"
"txt2img"
"txt2img-ip-adapter"
"update-asset"
"update-collection"
"update-model"
"update-model-description"
"update-model-examples"
"update-model-prompt-guide"
"update-oscu-auto-refill"
"update-project"
"update-project-instructions"
"update-subscription"
"update-team"
"update-team-instructions"
"update-team-member"
"upscale"
"vectorization"
data: ActivityData

The additional data of the action

asset_id: Optional[str]

The asset for this action

byok_provider: Optional[ActivityDataByokProvider]

The BYOK provider information for this action. Only set if the action is a BYOK action (byok-set-provider or byok-remove-provider)

id: str
display_name: str
collection_id: Optional[str]

The collection for this action

is_api_key: Optional[bool]

Whether the action is an API key action

job_id: Optional[str]

The job for this action

model_id: Optional[str]

The model for this action

project_id: str

The ID of the project for this action

time: str

The UTC ISO date of the point

user_id: str

The unique identifier of the user for this action

creative_units_cost: Optional[float]

The Creative Units cost for this action

asset_usages: Optional[List[AssetUsage]]
kind: Literal["3d", "audio", "document", 4 more]
One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
points: List[AssetUsagePoint]

The data points

count: float

Number of assets created

count_api_key: float

Number of assets created via an API key

time: str

The UTC ISO date of the point
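To illustrate the shape of the asset-usage `points` above, a short sketch that totals assets created and the API-key share (the sample data is invented):

```python
# Each AssetUsagePoint carries a count, an API-key-only count, and a
# UTC ISO timestamp. Sample points are invented for illustration.
points = [
    {"count": 12.0, "count_api_key": 4.0, "time": "2024-01-01T00:00:00Z"},
    {"count": 8.0, "count_api_key": 8.0, "time": "2024-01-02T00:00:00Z"},
]

total = sum(p["count"] for p in points)                # all assets created
via_api_key = sum(p["count_api_key"] for p in points)  # API-key subset
print(total, via_api_key)  # 20.0 12.0
```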

consumption: Optional[List[Consumption]]
discount: float

The Creative Units discount for the user

total: float

The total consumption for the user (value + discount)

user_id: str

The unique identifier of the user

value: float

The Creative Units consumption for the user
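As the field descriptions state, `total` is `value + discount`; a quick sketch with invented numbers:

```python
# A Consumption entry (values invented): `total` should equal
# `value` (units consumed) plus `discount`.
consumption = {
    "user_id": "user-123",  # hypothetical ID
    "value": 150.0,
    "discount": 25.0,
    "total": 175.0,
}

assert consumption["total"] == consumption["value"] + consumption["discount"]
```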

entities: Optional[Entities]
assets: Optional[List[EntitiesAsset]]
id: str

The asset ID

kind: Literal["3d", "audio", "document", 4 more]

The kind of the asset

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
metadata: EntitiesAssetMetadata

Partial metadata of the asset

type: Literal["3d-texture", "3d-texture-albedo", "3d-texture-metallic", 72 more]

The type of the asset

One of the following:
"3d-texture"
"3d-texture-albedo"
"3d-texture-metallic"
"3d-texture-mtl"
"3d-texture-normal"
"3d-texture-roughness"
"3d23d"
"3d23d-texture"
"audio2audio"
"audio2video"
"background-removal"
"canvas"
"canvas-drawing"
"canvas-export"
"detection"
"generative-fill"
"image-prompt-editing"
"img23d"
"img2img"
"img2video"
"inference-controlnet"
"inference-controlnet-img2img"
"inference-controlnet-inpaint"
"inference-controlnet-inpaint-ip-adapter"
"inference-controlnet-ip-adapter"
"inference-controlnet-reference"
"inference-controlnet-texture"
"inference-img2img"
"inference-img2img-ip-adapter"
"inference-img2img-texture"
"inference-inpaint"
"inference-inpaint-ip-adapter"
"inference-reference"
"inference-reference-texture"
"inference-txt2img"
"inference-txt2img-ip-adapter"
"inference-txt2img-texture"
"patch"
"pixelization"
"reframe"
"restyle"
"segment"
"segmentation-image"
"segmentation-mask"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"texture"
"texture-albedo"
"texture-ao"
"texture-edge"
"texture-height"
"texture-metallic"
"texture-normal"
"texture-smoothness"
"txt23d"
"txt2audio"
"txt2img"
"txt2video"
"unknown"
"uploaded"
"uploaded-3d"
"uploaded-audio"
"uploaded-avatar"
"uploaded-video"
"upscale"
"upscale-skybox"
"upscale-texture"
"upscale-video"
"vectorization"
"video23d"
"video2audio"
"video2img"
"video2video"
"voice-clone"
properties: EntitiesAssetProperties

The properties of the asset

size: float
animation_frame_count: Optional[float]

Number of animation frames if animations exist

bitrate: Optional[float]

Bitrate of the media in bits per second

bone_count: Optional[float]

Number of bones if skeleton exists

channels: Optional[float]

Number of channels of the audio

classification: Optional[Literal["effect", "interview", "music", 5 more]]

Classification of the audio

One of the following:
"effect"
"interview"
"music"
"other"
"sound"
"speech"
"text"
"unknown"
codec_name: Optional[str]

Codec name of the media

description: Optional[str]

Description of the audio

dimensions: Optional[List[float]]

Bounding box dimensions [width, height, depth]

duration: Optional[float]

Duration of the media in seconds

face_count: Optional[float]

Number of faces/triangles in the mesh

format: Optional[str]

Format of the mesh file (e.g. `glb`)

frame_rate: Optional[float]

Frame rate of the video in frames per second

has_animations: Optional[bool]

Whether the mesh has animations

has_normals: Optional[bool]

Whether the mesh has normal vectors

has_skeleton: Optional[bool]

Whether the mesh has bones/skeleton

has_u_vs: Optional[bool]

Whether the mesh has UV coordinates

height: Optional[float]
nb_frames: Optional[float]

Number of frames in the video

sample_rate: Optional[float]

Sample rate of the media in Hz

transcription: Optional[EntitiesAssetPropertiesTranscription]

Transcription of the audio

text: str
vertex_count: Optional[float]

Number of vertices in the mesh

width: Optional[float]
source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]

The source of the asset

One of the following:
"3d23d"
"3d23d:texture"
"3d:texture"
"3d:texture:albedo"
"3d:texture:metallic"
"3d:texture:mtl"
"3d:texture:normal"
"3d:texture:roughness"
"audio2audio"
"audio2video"
"background-removal"
"canvas"
"canvas-drawing"
"canvas-export"
"detection"
"generative-fill"
"image-prompt-editing"
"img23d"
"img2img"
"img2video"
"inference-control-net"
"inference-control-net-img"
"inference-control-net-inpainting"
"inference-control-net-inpainting-ip-adapter"
"inference-control-net-ip-adapter"
"inference-control-net-reference"
"inference-control-net-texture"
"inference-img"
"inference-img-ip-adapter"
"inference-img-texture"
"inference-in-paint"
"inference-in-paint-ip-adapter"
"inference-reference"
"inference-reference-texture"
"inference-txt"
"inference-txt-ip-adapter"
"inference-txt-texture"
"patch"
"pixelization"
"reframe"
"restyle"
"segment"
"segmentation-image"
"segmentation-mask"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"texture"
"texture:albedo"
"texture:ao"
"texture:edge"
"texture:height"
"texture:metallic"
"texture:normal"
"texture:smoothness"
"txt23d"
"txt2audio"
"txt2img"
"txt2video"
"unknown"
"uploaded"
"uploaded-3d"
"uploaded-audio"
"uploaded-avatar"
"uploaded-video"
"upscale"
"upscale-skybox"
"upscale-texture"
"upscale-video"
"vectorization"
"video23d"
"video2audio"
"video2img"
"video2video"
"voice-clone"
collections: Optional[List[EntitiesCollection]]
id: str

The collection ID

name: str

The name of the collection

jobs: Optional[List[EntitiesJob]]
id: str

The job ID

job_type: Literal["assets-download", "canvas-export", "caption", 36 more]

The job type

One of the following:
"assets-download"
"canvas-export"
"caption"
"caption-llava"
"custom"
"describe-style"
"detection"
"embed"
"flux"
"flux-model-training"
"generate-prompt"
"image-generation"
"image-prompt-editing"
"inference"
"mesh-preview-rendering"
"model-download"
"model-import"
"model-training"
"musubi-model-training"
"openai-image-generation"
"patch-image"
"pixelate"
"reframe"
"remove-background"
"repaint"
"restyle"
"segment"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"skybox-upscale-360"
"texture"
"translate"
"upload"
"upscale"
"upscale-skybox"
"upscale-texture"
"vectorize"
"workflow"
metadata: EntitiesJobMetadata

The metadata of the job

asset_ids: Optional[List[str]]

List of produced assets for this job

error: Optional[str]

Eventual error for the job

flow: Optional[List[EntitiesJobMetadataFlow]]

The flow of the job. Only available for workflow jobs.

id: str

The id of the node.

status: Literal["failure", "pending", "processing", 2 more]

The status of the node. Only available for WorkflowJob nodes.

One of the following:
"failure"
"pending"
"processing"
"skipped"
"success"
type: Literal["custom-model", "for-each", "generate-prompt", 7 more]

The type of the job for the node.

One of the following:
"custom-model"
"for-each"
"generate-prompt"
"list"
"logic"
"model"
"remove-background"
"transform"
"user-approval"
"workflow"
assets: Optional[List[EntitiesJobMetadataFlowAsset]]

List of produced assets for this node.

asset_id: str
url: str
count: Optional[float]

Fixed number of iterations for a ForEach node. When set, the loop runs exactly count times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes.

depends_on: Optional[List[str]]

The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes.

include_outputs_in_workflow_job: Optional[Literal[True]]

If true, the outputs of this node will be included in the workflow job’s final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to true or left undefined.

inputs: Optional[List[EntitiesJobMetadataFlowInput]]

The inputs of the node.

name: str

The name that must be used to call the model through the API

type: Literal["boolean", "file", "file_array", 7 more]

The data type of the input

One of the following:
"boolean"
"file"
"file_array"
"inputs_array"
"model"
"model_array"
"number"
"number_array"
"string"
"string_array"
allowed_values: Optional[List[object]]

The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown.

background_behavior: Optional[Literal["opaque", "transparent"]]

Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`.

One of the following:
"opaque"
"transparent"
color: Optional[bool]

Whether the input is a color or not. Only available for `string` input type.

cost_impact: Optional[bool]

Whether this input affects the model’s cost calculation

default: Optional[object]

The default value for the input

description: Optional[str]

Help text displayed in the UI to provide additional information about the input

group: Optional[str]

Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI.

hint: Optional[str]

Hint text displayed in the UI as a tooltip to guide the user

inputs: Optional[List[Dict[str, object]]]

The list of inputs which form an object within a container array. All inputs are the same as the current object. Only available for `inputs_array` type inputs.

items: Optional[List[List[EntitiesJobMetadataFlowInputItem]]]

The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type.

name: str

The name that must be used to call the model through the API

type: Literal["boolean", "file", "file_array", 7 more]

The data type of the input

One of the following:
"boolean"
"file"
"file_array"
"inputs_array"
"model"
"model_array"
"number"
"number_array"
"string"
"string_array"
allowed_values: Optional[List[object]]

The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown.

background_behavior: Optional[Literal["opaque", "transparent"]]

Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`.

One of the following:
"opaque"
"transparent"
color: Optional[bool]

Whether the input is a color or not. Only available for `string` input type.

cost_impact: Optional[bool]

Whether this input affects the model’s cost calculation

default: Optional[object]

The default value for the input

description: Optional[str]

Help text displayed in the UI to provide additional information about the input

group: Optional[str]

Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI.

hint: Optional[str]

Hint text displayed in the UI as a tooltip to guide the user

inputs: Optional[List[Dict[str, object]]]

The list of inputs which form an object within a container array. All inputs are the same as the current object. Only available for `inputs_array` type inputs.

kind: Optional[Literal["3d", "audio", "document", 4 more]]

The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that omits the `data:<kind>,` prefix.

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
label: Optional[str]

The label displayed in the UI for this input

mask_from: Optional[str]

The name of the file input field to use as the mask source

max: Optional[float]

The maximum allowed value. Only available for `number` and `array` input types.

max_length: Optional[float]

The maximum allowed length for `string` inputs. Also applies to each item in `string_array`.

max_size: Optional[float]

The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time.

min: Optional[float]

The minimum allowed value. Only available for `number` and array input types.

min_length: Optional[float]

The minimum allowed length for string inputs. Also applies to each item in `string_array`.

model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]

The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
parent: Optional[bool]

Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types.

For `file_array`, the parent asset is the first item in the array.

placeholder: Optional[str]

Placeholder text for the input. Only available for `string` input type.

prompt: Optional[bool]

Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type.

prompt_spark: Optional[bool]

Whether the input is used with prompt spark. Only available for `string` input type.

ref: Optional[EntitiesJobMetadataFlowInputItemRef]

The reference to another input or output of the same workflow. Must have at least one of node or conditional.

conditional: Optional[List[str]]

The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes.

equal: Optional[str]

This is the desired node output value if ref is an if/else node.

name: Optional[str]

The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is optional unless you want all outputs of the node; to get all outputs of a node, use the name 'all'.

node: Optional[str]

The node id or ‘workflow’ if the source is a workflow input.

required: Optional[EntitiesJobMetadataFlowInputItemRequired]

Set of rules that describes when this input is required:

  • `always`: Input is always required
  • `ifNotDefined`: Input is required when another specified input is not defined
  • `ifDefined`: Input is required when another specified input is defined
  • `conditionalValues`: Input is required when another input has a specific value

By default, the input is not required.

always: Optional[bool]

Whether the input is always required

conditional_values: Optional[object]

Makes this input required when another input has a specific value:

  • Key: name of the input to check
  • Value: operation and allowed values that trigger the requirement
if_defined: Optional[object]

Makes this input required when another input is defined:

  • Key: name of the input that must be defined
  • Value: message to display when this input is required
if_not_defined: Optional[object]

Makes this input required when another input is not defined:

  • Key: name of the input that must be undefined
  • Value: message to display when this input is required
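The four `required` rule shapes described above can be sketched as plain dictionaries (all input names, messages, and the inner `conditional_values` shape are invented examples, not taken from this reference):

```python
# Examples of the four `required` rule shapes. Names and messages
# are hypothetical.
always_required = {"always": True}

# Required when the "mask" input is NOT defined:
if_not_defined = {
    "if_not_defined": {"mask": "Provide a prompt when no mask is set"}
}

# Required when the "mask" input IS defined:
if_defined = {"if_defined": {"mask": "A prompt is needed alongside a mask"}}

# Required when another input has a specific value
# (the operation/values structure is an assumption):
conditional_values = {
    "conditional_values": {
        "mode": {"operation": "eq", "values": ["inpaint"]}
    }
}
```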
step: Optional[float]

The step increment for numeric inputs. Only available for `number` input type.

Minimum: 1
value: Optional[object]

The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob.

kind: Optional[Literal["3d", "audio", "document", 4 more]]

The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that omits the `data:<kind>,` prefix.

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
label: Optional[str]

The label displayed in the UI for this input

mask_from: Optional[str]

The name of the file input field to use as the mask source

max: Optional[float]

The maximum allowed value. Only available for `number` and `array` input types.

max_length: Optional[float]

The maximum allowed length for `string` inputs. Also applies to each item in `string_array`.

max_size: Optional[float]

The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time.

min: Optional[float]

The minimum allowed value. Only available for `number` and array input types.

min_length: Optional[float]

The minimum allowed length for string inputs. Also applies to each item in `string_array`.

model_types: Optional[List[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]]

The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type.

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
parent: Optional[bool]

Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types.

For `file_array`, the parent asset is the first item in the array.

placeholder: Optional[str]

Placeholder text for the input. Only available for `string` input type.

prompt: Optional[bool]

Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type.

prompt_spark: Optional[bool]

Whether the input is used with prompt spark. Only available for `string` input type.

ref: Optional[EntitiesJobMetadataFlowInputRef]

The reference to another input or output of the same workflow. Must have at least one of node or conditional.

conditional: Optional[List[str]]

The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes.

equal: Optional[str]

This is the desired node output value if ref is an if/else node.

name: Optional[str]

The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is optional unless you want all outputs of the node; to get all outputs of a node, use the name 'all'.

node: Optional[str]

The node id or ‘workflow’ if the source is a workflow input.

required: Optional[EntitiesJobMetadataFlowInputRequired]

Set of rules that describes when this input is required:

  • `always`: Input is always required
  • `ifNotDefined`: Input is required when another specified input is not defined
  • `ifDefined`: Input is required when another specified input is defined
  • `conditionalValues`: Input is required when another input has a specific value

By default, the input is not required.

always: Optional[bool]

Whether the input is always required

conditional_values: Optional[object]

Makes this input required when another input has a specific value:

  • Key: name of the input to check
  • Value: operation and allowed values that trigger the requirement
if_defined: Optional[object]

Makes this input required when another input is defined:

  • Key: name of the input that must be defined
  • Value: message to display when this input is required
if_not_defined: Optional[object]

Makes this input required when another input is not defined:

  • Key: name of the input that must be undefined
  • Value: message to display when this input is required
step: Optional[float]

The step increment for numeric inputs. Only available for `number` input type.

Minimum: 1
value: Optional[object]

The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob.

items: Optional[List[str]]

Statically-configured items for a List node. The node outputs this array as-is when executed. Only available for List nodes. The values can be strings, numbers, or asset IDs.

iteration_index: Optional[float]

Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes.

job_id: Optional[str]

If the flow is part of a WorkflowJob, this is the jobId for the node. The jobId is only available once the node has started; a node that is "Pending" in a running workflow job has not started.

logic: Optional[EntitiesJobMetadataFlowLogic]

The logic of the node. Only available for logic nodes.

cases: Optional[List[EntitiesJobMetadataFlowLogicCase]]

The cases of the logic. Only available for if/else nodes.

condition: str
value: str
default: Optional[str]

The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes.

transform: Optional[str]

The transform of the logic. Only available for transform nodes.

logic_type: Optional[Literal["if-else"]]

The type of the logic for the node. Only available for logic nodes.

loop_body_node_ids: Optional[List[str]]

IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes.

loop_node_id: Optional[str]

ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes.

model_id: Optional[str]

The model id for the node. Mainly used for custom model tasks.

output: Optional[object]

The output of the node. Only available for logic nodes.

workflow_id: Optional[str]

The workflow id for the node. Mainly used for workflow tasks.

hint: Optional[str]

Actionable hint for the user explaining what went wrong and how to resolve it.

input: Optional[Dict[str, object]]

The inputs for the job

output: Optional[Dict[str, object]]

May contain the output of the job for specific custom models jobs. Only available for custom models which generate non-assets outputs. Example: LLM text results.

output_model_id: Optional[str]

For voice-clone jobs: the ID of the model being trained.

workflow_id: Optional[str]

The workflow ID of the job if job is part of a workflow.

workflow_job_id: Optional[str]

The workflow job ID of the job if job is part of a workflow job.

status: Literal["canceled", "failure", "finalizing", 5 more]

The status of the job

One of the following:
"canceled"
"failure"
"finalizing"
"in-progress"
"pending"
"queued"
"success"
"warming-up"
models: Optional[List[EntitiesModel]]
id: str

The model ID

name: str

The name of the model

short_description: Optional[str]

The short description of the model

users: Optional[List[EntitiesUser]]
id: str

The user ID

is_api_key: bool

Whether the user is an API key

api_key_id: Optional[str]

The API key ID

Will be available:

  • if the user is an API key
api_key_status: Optional[Literal["active", "deleted", "inactive"]]

The API key status

Will be available:

  • if the user is an API key
One of the following:
"active"
"deleted"
"inactive"
avatar: Optional[EntitiesUserAvatar]

The user avatar

Will be available:

  • if the user hasn’t left the Scenario platform
  • if the user isn’t an API key
asset_id: Optional[str]

ID of the asset used as thumbnail if provided, otherwise undefined

url: Optional[str]

Signed URL of the assetId, or a free-form URL if assetId is undefined

email: Optional[str]

The email of the user

Will be available:

  • if the user hasn’t left the Scenario platform
  • if the user isn’t an API key
full_name: Optional[str]

The full name of the user

Will be available:

  • if the user hasn’t left the Scenario platform
  • if the user isn’t an API key
model_usages: Optional[List[ModelUsage]]
model_id: str
points: List[ModelUsagePoint]

The data points

api_key_cost: float

Cost for model usage for API key only

api_key_discount: float

The discount for model usage for API key only

cost: float

Cost for model usage

discount: float

The discount for model usage

jobs: float

Number of jobs for the model usage

time: str

The UTC ISO date of the point

nsfw_usages: Optional[List[NsfwUsage]]
label: str
points: List[NsfwUsagePoint]

The data points

count: float

Number of NSFW assets created

count_api_key: float

Number of NSFW assets created via an API key

time: str

The UTC ISO date of the point

usages: Optional[List[Usage]]
granularity: Literal["15m", "1d", "1h", 4 more]

Granularity for points (example: "1d", "1h", "1m", "15m")

One of the following:
"15m"
"1d"
"1h"
"1m"
"30m"
"5m"
"7d"
points: List[UsagePoint]

The usage data points

api_key: str

Value of the point for API key only

time: str

The UTC ISO date of the point

value: str

Value of the point

usage_name: Literal["background-removal", "captioning", "creative-unit-cost", 17 more]

Name of the usage points (example: "images-generation", "models-training", "background-removal", "upscale", …)

One of the following:
"background-removal"
"captioning"
"creative-unit-cost"
"creative-unit-discount"
"custom"
"custom-asset-created"
"detection"
"image-prompt-editing"
"images-generation"
"models-training"
"patch"
"pixelate"
"repaint"
"restyle"
"segmentation"
"skybox-base-360"
"skybox-upscale-360"
"texture"
"upscale"
"vectorization"