Description

Retrieve
client.models.description.retrieve(modelID: string, query?: DescriptionRetrieveParams { originalAssets }, options?: RequestOptions): DescriptionRetrieveResponse { description }
GET/models/{modelId}/description
Update
client.models.description.update(modelID: string, params: DescriptionUpdateParams { description, originalAssets }, options?: RequestOptions): DescriptionUpdateResponse { description }
PUT/models/{modelId}/description
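Since the reference documents the verbs and the `/models/{modelId}/description` path, the two calls can be sketched over raw HTTP. This is a minimal sketch, not the official SDK: the base URL, bearer-token auth scheme, and function names here are assumptions; only the paths, verbs, and request/response field names come from the reference above.

```typescript
// Assumed base URL -- replace with the real API host.
const BASE_URL = "https://api.example.com";

// Build the documented path: /models/{modelId}/description
function descriptionPath(modelId: string): string {
  return `/models/${encodeURIComponent(modelId)}/description`;
}

// GET the description (Retrieve).
async function retrieveDescription(
  modelId: string,
  apiKey: string,
): Promise<unknown> {
  const res = await fetch(`${BASE_URL}${descriptionPath(modelId)}`, {
    headers: { Authorization: `Bearer ${apiKey}` }, // assumed auth scheme
  });
  if (!res.ok) throw new Error(`retrieve failed: ${res.status}`);
  return res.json(); // DescriptionRetrieveResponse { description }
}

// PUT a new description (Update).
async function updateDescription(
  modelId: string,
  apiKey: string,
  description: string,
): Promise<unknown> {
  const res = await fetch(`${BASE_URL}${descriptionPath(modelId)}`, {
    method: "PUT",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ description }),
  });
  if (!res.ok) throw new Error(`update failed: ${res.status}`);
  return res.json(); // DescriptionUpdateResponse { description }
}
```

In practice the generated SDK client (`client.models.description.retrieve` / `.update`) handles the path construction, auth, and typing shown above.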
Models
DescriptionRetrieveResponse { description }
description: Description { assets, models, value }
assets: Array<Asset>

The list of assets referenced by the Markdown {asset} tag in the description.

id: string

The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")

authorId: string

The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea")

kind: "3d" | "audio" | "document" | 4 more

The kind of asset

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
mimeType: string

The MIME type of the asset (example: "image/png")

ownerId: string

The owner (project) ID (example: "proj_23tlk332lkht3kl2", or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams)

privacy: "private" | "public" | "unlisted"

The privacy of the asset

One of the following:
"private"
"public"
"unlisted"
properties: Properties { size, animationFrameCount, bitrate, 20 more }

The properties of the asset; the fields present depend on the kind of asset returned

size: number
animationFrameCount?: number

Number of animation frames if animations exist

bitrate?: number

Bitrate of the media in bits per second

boneCount?: number

Number of bones if skeleton exists

channels?: number

Number of channels of the audio

classification?: "effect" | "interview" | "music" | 5 more

Classification of the audio

One of the following:
"effect"
"interview"
"music"
"other"
"sound"
"speech"
"text"
"unknown"
codecName?: string

Codec name of the media

description?: string

Description of the audio

dimensions?: Array<number>

Bounding box dimensions [width, height, depth]

duration?: number

Duration of the media in seconds

faceCount?: number

Number of faces/triangles in the mesh

format?: string

Format of the mesh file (e.g. "glb")

frameRate?: number

Frame rate of the video in frames per second

hasAnimations?: boolean

Whether the mesh has animations

hasNormals?: boolean

Whether the mesh has normal vectors

hasSkeleton?: boolean

Whether the mesh has bones/skeleton

hasUVs?: boolean

Whether the mesh has UV coordinates

height?: number
nbFrames?: number

Number of frames in the video

sampleRate?: number

Sample rate of the media in Hz

transcription?: Transcription { text }

Transcription of the audio

text: string
vertexCount?: number

Number of vertices in the mesh

width?: number
source: "3d23d" | "3d23d:texture" | "3d:texture" | 72 more

The source of the asset

One of the following:
"3d23d"
"3d23d:texture"
"3d:texture"
"3d:texture:albedo"
"3d:texture:metallic"
"3d:texture:mtl"
"3d:texture:normal"
"3d:texture:roughness"
"audio2audio"
"audio2video"
"background-removal"
"canvas"
"canvas-drawing"
"canvas-export"
"detection"
"generative-fill"
"image-prompt-editing"
"img23d"
"img2img"
"img2video"
"inference-control-net"
"inference-control-net-img"
"inference-control-net-inpainting"
"inference-control-net-inpainting-ip-adapter"
"inference-control-net-ip-adapter"
"inference-control-net-reference"
"inference-control-net-texture"
"inference-img"
"inference-img-ip-adapter"
"inference-img-texture"
"inference-in-paint"
"inference-in-paint-ip-adapter"
"inference-reference"
"inference-reference-texture"
"inference-txt"
"inference-txt-ip-adapter"
"inference-txt-texture"
"patch"
"pixelization"
"reframe"
"restyle"
"segment"
"segmentation-image"
"segmentation-mask"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"texture"
"texture:albedo"
"texture:ao"
"texture:edge"
"texture:height"
"texture:metallic"
"texture:normal"
"texture:smoothness"
"txt23d"
"txt2audio"
"txt2img"
"txt2video"
"unknown"
"uploaded"
"uploaded-3d"
"uploaded-audio"
"uploaded-avatar"
"uploaded-video"
"upscale"
"upscale-skybox"
"upscale-texture"
"upscale-video"
"vectorization"
"video23d"
"video2audio"
"video2img"
"video2video"
"voice-clone"
url: string

Signed URL to get the asset content

originalFileUrl?: string

The original file URL.

Contains the URL of the original file, without any conversion. Only available for some video, audio, and 3D assets, and only set when the asset data was replaced with a new file during asset creation.

preview?: Preview { assetId, url }

The asset’s preview.

Contains the assetId and the url of the preview.

assetId: string
url: string
thumbnail?: Thumbnail { assetId, url }

The asset’s thumbnail.

Contains the assetId and the url of the thumbnail.

assetId: string
url: string
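Because the `properties` fields above depend on the asset's `kind`, client code usually branches on `kind` before reading them. The sketch below is an assumption-laden illustration: the `AssetLite` and `AssetProperties` types are trimmed-down stand-ins, not the full SDK types; the field names come from the reference above.

```typescript
// Trimmed-down stand-in for the documented Properties object.
interface AssetProperties {
  size: number;          // always present
  duration?: number;     // media (audio/video)
  channels?: number;     // audio
  frameRate?: number;    // video
  faceCount?: number;    // mesh
  vertexCount?: number;  // mesh
}

// Trimmed-down stand-in for the documented Asset object.
interface AssetLite {
  kind: "3d" | "audio" | "document" | "image" | "image-hdr" | "json" | "video";
  properties: AssetProperties;
}

// Produce a short human-readable summary, touching only the
// properties relevant to the asset's kind.
function summarize(asset: AssetLite): string {
  const p = asset.properties;
  switch (asset.kind) {
    case "audio":
      return `audio: ${p.duration ?? "?"}s, ${p.channels ?? "?"} channel(s)`;
    case "video":
      return `video: ${p.duration ?? "?"}s @ ${p.frameRate ?? "?"} fps`;
    case "3d":
      return `mesh: ${p.faceCount ?? "?"} faces, ${p.vertexCount ?? "?"} vertices`;
    default:
      return `${asset.kind}: ${p.size} bytes`;
  }
}
```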
models: Array<Model>

The list of models referenced by the Markdown {model} tag in the description.

id: string

The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w")

privacy: "private" | "public" | "unlisted"

The privacy of the model (default: private)

One of the following:
"private"
"public"
"unlisted"
type: "custom" | "elevenlabs-voice" | "flux.1" | 34 more

The model type (example: "flux.1-lora")

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
authorId?: string

The author user ID (example: "user_VFhihHKMRZyDDnZAJwLb2Q")

name?: string

The model name (example: "Cinematic Realism")

ownerId?: string

The owner ID (example: "team_VFhihHKMRZyDDnZAJwLb2Q")

shortDescription?: string

The model short description (example: "This model generates highly detailed cinematic scenes.")

value: string

The Markdown description of the model (example: "# My model"). The {asset:<assetId>} and {model:<modelId>} tags are supported.
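A client that renders the description needs to find the tags in `value` before it can resolve them against the `assets` and `models` arrays. The tag syntax `{asset:<assetId>}` / `{model:<modelId>}` comes from the reference; the helper name and the ID character class in the regex are assumptions for illustration.

```typescript
// Hypothetical helper: collect the asset and model IDs referenced
// by {asset:<assetId>} and {model:<modelId>} tags in a description value.
function extractTagIds(value: string): { assets: string[]; models: string[] } {
  const assets: string[] = [];
  const models: string[] = [];
  // IDs are assumed to be word characters plus hyphens.
  for (const m of value.matchAll(/\{(asset|model):([\w-]+)\}/g)) {
    (m[1] === "asset" ? assets : models).push(m[2]);
  }
  return { assets, models };
}
```

The extracted IDs can then be looked up in `description.assets` and `description.models` to fetch signed URLs, thumbnails, or model metadata.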

DescriptionUpdateResponse { description }
description: Description { assets, models, value }

The Description object, identical in shape to the description field documented under DescriptionRetrieveResponse above.