
Unlock

assets.unlock(asset_id: str, **kwargs: AssetUnlockParams) -> AssetUnlockResponse
PUT/assets/{assetId}/unlock

Unlock a canvas

Parameters
asset_id: str
original_assets: Optional[bool]

If set to true, returns the original asset without transformation

force_unlock: Optional[bool]

If true, a lockId does not need to be passed.

lock_id: Optional[str]

The value of the lock on this canvas.

Returns
class AssetUnlockResponse:
asset: Asset
id: str

The asset ID (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”)

author_id: str

The author user ID (example: “dcf121faaa1a0a0bbbd9ca1b73d62aea”)

collection_ids: List[str]

A list of CollectionId this asset belongs to

created_at: str

The asset creation date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

edit_capabilities: List[Literal["DETECTION", "GENERATIVE_FILL", "PIXELATE", 8 more]]

List of edit capabilities

One of the following:
"DETECTION"
"GENERATIVE_FILL"
"PIXELATE"
"PROMPT_EDITING"
"REFINE"
"REFRAME"
"REMOVE_BACKGROUND"
"SEGMENTATION"
"UPSCALE"
"UPSCALE_360"
"VECTORIZATION"
kind: Literal["3d", "audio", "document", 4 more]

The kind of asset

One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
metadata: AssetMetadata

Metadata of the asset with some additional information

kind: Literal["3d", "audio", "document", 4 more]
One of the following:
"3d"
"audio"
"document"
"image"
"image-hdr"
"json"
"video"
type: Literal["3d-texture", "3d-texture-albedo", "3d-texture-metallic", 72 more]

The type of the asset. Ex: ‘inference-txt2img’ will represent an asset generated from a text to image model

One of the following:
"3d-texture"
"3d-texture-albedo"
"3d-texture-metallic"
"3d-texture-mtl"
"3d-texture-normal"
"3d-texture-roughness"
"3d23d"
"3d23d-texture"
"audio2audio"
"audio2video"
"background-removal"
"canvas"
"canvas-drawing"
"canvas-export"
"detection"
"generative-fill"
"image-prompt-editing"
"img23d"
"img2img"
"img2video"
"inference-controlnet"
"inference-controlnet-img2img"
"inference-controlnet-inpaint"
"inference-controlnet-inpaint-ip-adapter"
"inference-controlnet-ip-adapter"
"inference-controlnet-reference"
"inference-controlnet-texture"
"inference-img2img"
"inference-img2img-ip-adapter"
"inference-img2img-texture"
"inference-inpaint"
"inference-inpaint-ip-adapter"
"inference-reference"
"inference-reference-texture"
"inference-txt2img"
"inference-txt2img-ip-adapter"
"inference-txt2img-texture"
"patch"
"pixelization"
"reframe"
"restyle"
"segment"
"segmentation-image"
"segmentation-mask"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"texture"
"texture-albedo"
"texture-ao"
"texture-edge"
"texture-height"
"texture-metallic"
"texture-normal"
"texture-smoothness"
"txt23d"
"txt2audio"
"txt2img"
"txt2video"
"unknown"
"uploaded"
"uploaded-3d"
"uploaded-audio"
"uploaded-avatar"
"uploaded-video"
"upscale"
"upscale-skybox"
"upscale-texture"
"upscale-video"
"vectorization"
"video23d"
"video2audio"
"video2img"
"video2video"
"voice-clone"
angular: Optional[float]

How angular is the surface? 0 is like a sphere, 1 is like a mechanical object

maximum: 1
minimum: 0
aspect_ratio: Optional[str]

The optional aspect ratio given for the generation, only applicable for some models

background_opacity: Optional[float]

Integer between 0 and 255 setting the opacity of the background in the result images.

maximum: 255
minimum: 0
base_model_id: Optional[str]

The baseModelId that may be changed at inference time

bbox: Optional[List[float]]

A bounding box around the object of interest, in the format [x1, y1, x2, y2].
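As an illustration of the `[x1, y1, x2, y2]` convention, a minimal hypothetical helper (not part of the SDK) converting a bounding box to a width/height pair:

```python
def bbox_size(bbox):
    """Return (width, height) of a [x1, y1, x2, y2] bounding box."""
    x1, y1, x2, y2 = bbox
    return x2 - x1, y2 - y1

print(bbox_size([10, 20, 110, 70]))  # → (100, 50)
```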

better_quality: Optional[bool]

Remove small dark spots (i.e. “pepper”) and connect small bright cracks.

canny_structure_image: Optional[str]

The control image already processed by canny detector. Must reference an existing AssetId.

clustering: Optional[bool]

Activate clustering.

color_correction: Optional[bool]

Ensure upscaled tiles have the same color histogram as the original tiles.

color_mode: Optional[str]
color_precision: Optional[float]
concepts: Optional[List[AssetMetadataConcept]]

Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing.

model_id: str

The model ID (example: “model_eyVcnFJcR92BxBkz7N6g5w”)

scale: float

The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2.

maximum: 2
minimum: -2
model_epoch: Optional[str]

The epoch of the model (example: “000001”). Only available for Flux LoRA trained models.

contours: Optional[List[List[List[List[float]]]]]
control_end: Optional[float]

End step for control.

copied_at: Optional[str]

The date when the asset was copied to a project

corner_threshold: Optional[float]
creativity: Optional[float]

Allow the generation of “hallucinations” during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style.

maximum: 100
minimum: 0
creativity_decay: Optional[float]

Amount of decay in creativity over the upscale process. The lower the value, the less the creativity will be preserved over the upscale process.

maximum: 100
minimum: 0
default_parameters: Optional[bool]

If true, use the default parameters

depth_fidelity: Optional[float]

The depth fidelity if a depth image is provided

maximum: 100
minimum: 0
depth_image: Optional[str]

The control image processed by depth estimator. Must reference an existing AssetId.

details_level: Optional[float]

Amount of details to remove or add

maximum: 50
minimum: -50
dilate: Optional[float]

The number of pixels to dilate the result masks.

maximum: 30
minimum: 0
factor: Optional[float]

Contrast factor for Grayscale detector

filter_speckle: Optional[float]
fractality: Optional[float]

Determine the scale at which the upscale process works.

  • With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example.
  • With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example.

Note: a small value is slower and more expensive to run.

maximum: 100
minimum: 0
geometry_enforcement: Optional[float]

Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image.

Use with caution. Default is adapted to the other parameters.

maximum: 100
minimum: 0
guidance: Optional[float]

The guidance used to generate this asset

half_mode: Optional[bool]
hdr: Optional[float]
height: Optional[float]
high_threshold: Optional[float]

High threshold for Canny detector

horizontal_expansion_ratio: Optional[float]

(deprecated) Horizontal expansion ratio.

maximum: 2
minimum: 1
image: Optional[str]

The input image to process. Must reference an existing AssetId or be a data URL.

image_fidelity: Optional[float]

Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style.

maximum: 100
minimum: 0
image_type: Optional[Literal["seamfull", "skybox", "texture"]]

Preserve the seamless properties of skybox or texture images. The input has to be of the same (seamless) type.

One of the following:
"seamfull"
"skybox"
"texture"
inference_id: Optional[str]

The id of the Inference describing how this image was generated

input_fidelity: Optional[Literal["high", "low"]]

When set to high, details from the input images are better preserved in the output. This is especially useful for images that contain elements like faces or logos that require accurate preservation in the generated image.

You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image.

Only available for the gpt-image-1 model.

One of the following:
"high"
"low"
input_location: Optional[Literal["bottom", "left", "middle", 2 more]]

Location of the input image in the output.

One of the following:
"bottom"
"left"
"middle"
"right"
"top"
invert: Optional[bool]

To invert the relief

keypoint_threshold: Optional[float]

How polished is the surface? 0 is like a rough surface, 1 is like a mirror

maximum: 1
minimum: 0
layer_difference: Optional[float]
length_threshold: Optional[float]
lock_expires_at: Optional[str]

The ISO timestamp when the lock on the canvas will expire

low_threshold: Optional[float]

Low threshold for Canny detector

mask: Optional[str]

The mask used for the asset generation or editing

max_iterations: Optional[float]
max_threshold: Optional[float]

Maximum threshold for Grayscale conversion

min_threshold: Optional[float]

Minimum threshold for Grayscale conversion

modality: Optional[Literal["canny", "depth", "grayscale", 7 more]]

Modality to detect

One of the following:
"canny"
"depth"
"grayscale"
"lineart_anime"
"mlsd"
"normal"
"pose"
"scribble"
"segmentation"
"sketch"
mode: Optional[str]
model_id: Optional[str]

The modelId used to generate this asset

model_type: Optional[Literal["custom", "elevenlabs-voice", "flux.1", 34 more]]

The type of the generator used

One of the following:
"custom"
"elevenlabs-voice"
"flux.1"
"flux.1-composition"
"flux.1-kontext-dev"
"flux.1-kontext-lora"
"flux.1-krea-dev"
"flux.1-krea-lora"
"flux.1-lora"
"flux.1-pro"
"flux.1.1-pro-ultra"
"flux.2-dev-edit-lora"
"flux.2-dev-lora"
"flux.2-klein-4b-edit-lora"
"flux.2-klein-4b-lora"
"flux.2-klein-9b-edit-lora"
"flux.2-klein-9b-lora"
"flux.2-klein-base-4b-edit-lora"
"flux.2-klein-base-4b-lora"
"flux.2-klein-base-9b-edit-lora"
"flux.2-klein-base-9b-lora"
"flux1.1-pro"
"gpt-image-1"
"qwen-image-2512-lora"
"qwen-image-edit-2509-lora"
"qwen-image-edit-2511-lora"
"qwen-image-edit-lora"
"qwen-image-lora"
"sd-1_5"
"sd-1_5-composition"
"sd-1_5-lora"
"sd-xl"
"sd-xl-composition"
"sd-xl-lora"
"zimage-de-turbo-lora"
"zimage-lora"
"zimage-turbo-lora"
name: Optional[str]
nb_masks: Optional[float]
negative_prompt: Optional[str]

The negative prompt used to generate this asset

negative_prompt_strength: Optional[float]

Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided.

maximum: 10
minimum: 0
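The dependency between negativePrompt and negativePromptStrength can be sketched as a client-side check. This is a hypothetical helper, not part of the SDK:

```python
def validate_negative_prompt(negative_prompt=None, negative_prompt_strength=0.0):
    """Per the rule above: strength must be > 0 (and <= 10) when a negative prompt is set."""
    if negative_prompt is not None and not (0 < negative_prompt_strength <= 10):
        raise ValueError("negativePromptStrength must be > 0 when negativePrompt is provided")
    return True

print(validate_negative_prompt("blurry, low quality", 3.0))  # → True
```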
num_inference_steps: Optional[float]

The number of denoising steps for each image generation.

maximum: 50
minimum: 5
num_outputs: Optional[float]

The number of outputs to generate.

maximum: 8
minimum: 1
original_asset_id: Optional[str]
output_index: Optional[float]
overlap_percentage: Optional[float]

Overlap percentage for the output image.

maximum: 0.5
minimum: 0
override_embeddings: Optional[bool]

Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution.

parent_id: Optional[str]
parent_job_id: Optional[str]
path_precision: Optional[float]
points: Optional[List[List[float]]]

List of points (label, x, y) in the image where label = 0 for background and 1 for object.
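The (label, x, y) convention can be illustrated with a small hypothetical helper (not part of the SDK) that separates background points from object points:

```python
def split_points(points):
    """Split (label, x, y) points: label 0 marks background, label 1 marks the object."""
    background = [(x, y) for label, x, y in points if label == 0]
    objects = [(x, y) for label, x, y in points if label == 1]
    return background, objects

bg, obj = split_points([[1, 120, 80], [0, 10, 10], [1, 200, 150]])
print(obj)  # → [(120, 80), (200, 150)]
```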

polished: Optional[float]

How polished is the surface? 0 is like a rough surface, 1 is like a mirror

maximum: 1
minimum: 0
preset: Optional[str]
progress_percent: Optional[float]
prompt: Optional[str]

The prompt that guided the asset generation or editing

prompt_fidelity: Optional[float]

Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style.

maximum: 100
minimum: 0
raised: Optional[float]

How raised is the surface? 0 is flat like water, 1 is like a very rough rock

maximum: 1
minimum: 0
reference_images: Optional[List[str]]

The reference images used for the asset generation or editing

refinement_steps: Optional[float]

Additional refinement steps before scaling.

If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times.

maximum: 4
minimum: 0
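The refinement rule above can be expressed directly. A minimal sketch (hypothetical helper, not part of the SDK):

```python
def refinement_applications(scaling_factor, refinement_steps):
    """Number of times refinement runs: (1 + steps) when scalingFactor == 1, else steps."""
    if scaling_factor == 1:
        return 1 + refinement_steps
    return refinement_steps

print(refinement_applications(1, 2))  # → 3
print(refinement_applications(4, 2))  # → 2
```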
remove_background: Optional[bool]

Remove background for Grayscale detector

resize_option: Optional[float]

Size proportion of the input image in the output.

maximum: 1
minimum: 0.1
result_contours: Optional[bool]

Boolean to output the contours.

result_image: Optional[bool]

Boolean to enable output of the cut-out object.

result_mask: Optional[bool]

Boolean to enable returning the masks (binary images) in the response.

root_parent_id: Optional[str]
save_flipbook: Optional[bool]

Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px

scaling_factor: Optional[float]

Scaling factor (used when targetWidth is not specified)

maximum: 16
minimum: 1
scheduler: Optional[str]

The scheduler used to generate this asset

seed: Optional[str]

The seed used to generate this asset. Note: can be a string or a number in some cases.

sharpen: Optional[bool]

Sharpen tiles.

shiny: Optional[float]

How shiny is the surface? 0 is like a matte surface, 1 is like a diamond

maximum: 1
minimum: 0
size: Optional[float]
sketch: Optional[bool]

Activate sketch detection instead of canny.

source_project_id: Optional[str]
splice_threshold: Optional[float]
strength: Optional[float]

The strength

Only available for the flux-kontext LoRA model.

structure_fidelity: Optional[float]

Strength for the input image structure preservation

maximum: 100
minimum: 0
structure_image: Optional[str]

The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId.

style: Optional[Literal["3d-cartoon", "3d-rendered", "anime", 23 more]]
One of the following:
"3d-cartoon"
"3d-rendered"
"anime"
"cartoon"
"cinematic"
"claymation"
"cloud-skydome"
"comic"
"cyberpunk"
"enchanted"
"fantasy"
"ink"
"manga"
"manga-color"
"minimalist"
"neon-tron"
"oil-painting"
"pastel"
"photo"
"photography"
"psychedelic"
"retro-fantasy"
"scifi-concept-art"
"space"
"standard"
"whimsical"
style_fidelity: Optional[float]

The higher the value, the more the result will look like the style image(s)

maximum: 100
minimum: 0
style_images: Optional[List[str]]

List of style images. Most of the time, a single image is enough. They must be existing AssetIds.

style_images_fidelity: Optional[float]

Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image.

maximum: 100
minimum: 0
target_height: Optional[float]

The target height of the output image.

maximum: 2048
minimum: 0
target_width: Optional[float]

Target width for the upscaled image; takes priority over the scaling factor.

maximum: 16000
minimum: 1024
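The priority of targetWidth over the scaling factor can be sketched as follows. This is a hypothetical helper (not part of the SDK), assuming the output width is simply input width times scaling factor when no target is given:

```python
def effective_width(input_width, scaling_factor=1, target_width=None):
    """targetWidth, when given, takes priority over scalingFactor."""
    if target_width is not None:
        return target_width
    return input_width * scaling_factor

print(effective_width(512, scaling_factor=4))                     # → 2048
print(effective_width(512, scaling_factor=4, target_width=3000))  # → 3000
```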
text: Optional[str]

A textual description / keywords describing the object of interest.

maxLength: 100
texture: Optional[str]

The asset to convert into texture maps. Must reference an existing AssetId.

thumbnail: Optional[AssetMetadataThumbnail]

The thumbnail of the canvas

asset_id: str

The AssetId of the image used as a thumbnail for the canvas (example: “asset_GTrL3mq4SXWyMxkOHRxlpw”)

url: str

The url of the image used as a thumbnail for the canvas

tile_style: Optional[bool]

If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition.

training_image: Optional[bool]
vertical_expansion_ratio: Optional[float]

(deprecated) Vertical expansion ratio.

maximum: 2
minimum: 1
width: Optional[float]

The width of the rendered image.

maximum: 2048
minimum: 1024
mime_type: str

The mime type of the asset (example: “image/png”)

owner_id: str

The owner (project) ID (example: “proj_23tlk332lkht3kl2” or “team_dlkhgs23tlk3hlkth32lkht3kl2” for old teams)

privacy: Literal["private", "public", "unlisted"]

The privacy of the asset

One of the following:
"private"
"public"
"unlisted"
properties: AssetProperties

The properties of the asset, content may depend on the kind of asset returned

size: float
animation_frame_count: Optional[float]

Number of animation frames if animations exist

bitrate: Optional[float]

Bitrate of the media in bits per second

bone_count: Optional[float]

Number of bones if skeleton exists

channels: Optional[float]

Number of channels of the audio

classification: Optional[Literal["effect", "interview", "music", 5 more]]

Classification of the audio

One of the following:
"effect"
"interview"
"music"
"other"
"sound"
"speech"
"text"
"unknown"
codec_name: Optional[str]

Codec name of the media

description: Optional[str]

Description of the audio

dimensions: Optional[List[float]]

Bounding box dimensions [width, height, depth]

duration: Optional[float]

Duration of the media in seconds

face_count: Optional[float]

Number of faces/triangles in the mesh

format: Optional[str]

Format of the mesh file (e.g. ‘glb’)

frame_rate: Optional[float]

Frame rate of the video in frames per second

has_animations: Optional[bool]

Whether the mesh has animations

has_normals: Optional[bool]

Whether the mesh has normal vectors

has_skeleton: Optional[bool]

Whether the mesh has bones/skeleton

has_u_vs: Optional[bool]

Whether the mesh has UV coordinates

height: Optional[float]
nb_frames: Optional[float]

Number of frames in the video

sample_rate: Optional[float]

Sample rate of the media in Hz

transcription: Optional[AssetPropertiesTranscription]

Transcription of the audio

text: str
vertex_count: Optional[float]

Number of vertices in the mesh

width: Optional[float]
source: Literal["3d23d", "3d23d:texture", "3d:texture", 72 more]

Source of the asset

One of the following:
"3d23d"
"3d23d:texture"
"3d:texture"
"3d:texture:albedo"
"3d:texture:metallic"
"3d:texture:mtl"
"3d:texture:normal"
"3d:texture:roughness"
"audio2audio"
"audio2video"
"background-removal"
"canvas"
"canvas-drawing"
"canvas-export"
"detection"
"generative-fill"
"image-prompt-editing"
"img23d"
"img2img"
"img2video"
"inference-control-net"
"inference-control-net-img"
"inference-control-net-inpainting"
"inference-control-net-inpainting-ip-adapter"
"inference-control-net-ip-adapter"
"inference-control-net-reference"
"inference-control-net-texture"
"inference-img"
"inference-img-ip-adapter"
"inference-img-texture"
"inference-in-paint"
"inference-in-paint-ip-adapter"
"inference-reference"
"inference-reference-texture"
"inference-txt"
"inference-txt-ip-adapter"
"inference-txt-texture"
"patch"
"pixelization"
"reframe"
"restyle"
"segment"
"segmentation-image"
"segmentation-mask"
"skybox-3d"
"skybox-base-360"
"skybox-hdri"
"texture"
"texture:albedo"
"texture:ao"
"texture:edge"
"texture:height"
"texture:metallic"
"texture:normal"
"texture:smoothness"
"txt23d"
"txt2audio"
"txt2img"
"txt2video"
"unknown"
"uploaded"
"uploaded-3d"
"uploaded-audio"
"uploaded-avatar"
"uploaded-video"
"upscale"
"upscale-skybox"
"upscale-texture"
"upscale-video"
"vectorization"
"video23d"
"video2audio"
"video2img"
"video2video"
"voice-clone"
status: Literal["error", "pending", "success"]

The current status of the asset

One of the following:
"error"
"pending"
"success"
tags: List[str]

The associated tags (example: [“sci-fi”, “landscape”])

updated_at: str

The asset last update date as an ISO string (example: “2023-02-03T11:19:41.579Z”)

url: str

Signed URL to get the asset content

automatic_captioning: Optional[str]

Automatic captioning of the asset

description: Optional[str]

The description. It will contain, in order of priority:

  • the manual description
  • the advanced captioning when the asset is used in training flow
  • the automatic captioning
embedding: Optional[List[float]]

The embedding of the asset when requested.

Only available when an asset can be embedded (i.e., not detection maps)

first_frame: Optional[AssetFirstFrame]

The video asset’s first frame.

Contains the assetId and the url of the first frame.

asset_id: str
url: str
is_hidden: Optional[bool]

Whether the asset is hidden.

last_frame: Optional[AssetLastFrame]

The video asset’s last frame.

Contains the assetId and the url of the last frame.

asset_id: str
url: str
nsfw: Optional[List[str]]

The NSFW labels

original_file_url: Optional[str]

The original file URL.

Contains the URL of the original file, without any conversion. Only available for some specific video, audio, and 3D assets. Only specified if the given asset data was replaced with a new file during the creation of the asset.

output_index: Optional[float]

The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0.

preview: Optional[AssetPreview]

The asset’s preview.

Contains the assetId and the url of the preview.

asset_id: str
url: str
thumbnail: Optional[AssetThumbnail]

The asset’s thumbnail.

Contains the assetId and the url of the thumbnail.

asset_id: str
url: str

Unlock

import os
from scenario_sdk import Scenario

client = Scenario(
    api_key=os.environ.get("SCENARIO_SDK_API_KEY"),  # This is the default and can be omitted
    api_secret=os.environ.get("SCENARIO_SDK_API_SECRET"),  # This is the default and can be omitted
)
response = client.assets.unlock(
    asset_id="assetId",
)
print(response.asset)
{
  "asset": {
    "id": "id",
    "authorId": "authorId",
    "collectionIds": [
      "string"
    ],
    "createdAt": "createdAt",
    "editCapabilities": [
      "DETECTION"
    ],
    "kind": "3d",
    "metadata": {
      "kind": "3d",
      "type": "3d-texture",
      "angular": 0,
      "aspectRatio": "aspectRatio",
      "backgroundOpacity": 0,
      "baseModelId": "baseModelId",
      "bbox": [
        0,
        0,
        0,
        0
      ],
      "betterQuality": true,
      "cannyStructureImage": "cannyStructureImage",
      "clustering": true,
      "colorCorrection": true,
      "colorMode": "colorMode",
      "colorPrecision": 0,
      "concepts": [
        {
          "modelId": "modelId",
          "scale": -2,
          "modelEpoch": "modelEpoch"
        }
      ],
      "contours": [
        [
          [
            [
              0
            ]
          ]
        ]
      ],
      "controlEnd": 0,
      "copiedAt": "copiedAt",
      "cornerThreshold": 0,
      "creativity": 0,
      "creativityDecay": 0,
      "defaultParameters": true,
      "depthFidelity": 0,
      "depthImage": "depthImage",
      "detailsLevel": -50,
      "dilate": 0,
      "factor": 0,
      "filterSpeckle": 0,
      "fractality": 0,
      "geometryEnforcement": 0,
      "guidance": 0,
      "halfMode": true,
      "hdr": 0,
      "height": 0,
      "highThreshold": 0,
      "horizontalExpansionRatio": 1,
      "image": "image",
      "imageFidelity": 0,
      "imageType": "seamfull",
      "inferenceId": "inferenceId",
      "inputFidelity": "high",
      "inputLocation": "bottom",
      "invert": true,
      "keypointThreshold": 0,
      "layerDifference": 0,
      "lengthThreshold": 0,
      "lockExpiresAt": "lockExpiresAt",
      "lowThreshold": 0,
      "mask": "mask",
      "maxIterations": 0,
      "maxThreshold": 0,
      "minThreshold": 0,
      "modality": "canny",
      "mode": "mode",
      "modelId": "modelId",
      "modelType": "custom",
      "name": "name",
      "nbMasks": 0,
      "negativePrompt": "negativePrompt",
      "negativePromptStrength": 0,
      "numInferenceSteps": 5,
      "numOutputs": 1,
      "originalAssetId": "originalAssetId",
      "outputIndex": 0,
      "overlapPercentage": 0,
      "overrideEmbeddings": true,
      "parentId": "parentId",
      "parentJobId": "parentJobId",
      "pathPrecision": 0,
      "points": [
        [
          0
        ],
        [
          0
        ],
        [
          0
        ]
      ],
      "polished": 0,
      "preset": "preset",
      "progressPercent": 0,
      "prompt": "prompt",
      "promptFidelity": 0,
      "raised": 0,
      "referenceImages": [
        "string"
      ],
      "refinementSteps": 0,
      "removeBackground": true,
      "resizeOption": 0.1,
      "resultContours": true,
      "resultImage": true,
      "resultMask": true,
      "rootParentId": "rootParentId",
      "saveFlipbook": true,
      "scalingFactor": 1,
      "scheduler": "scheduler",
      "seed": "seed",
      "sharpen": true,
      "shiny": 0,
      "size": 0,
      "sketch": true,
      "sourceProjectId": "sourceProjectId",
      "spliceThreshold": 0,
      "strength": 0,
      "structureFidelity": 0,
      "structureImage": "structureImage",
      "style": "3d-cartoon",
      "styleFidelity": 0,
      "styleImages": [
        "string"
      ],
      "styleImagesFidelity": 0,
      "targetHeight": 0,
      "targetWidth": 1024,
      "text": "text",
      "texture": "texture",
      "thumbnail": {
        "assetId": "assetId",
        "url": "url"
      },
      "tileStyle": true,
      "trainingImage": true,
      "verticalExpansionRatio": 1,
      "width": 1024
    },
    "mimeType": "mimeType",
    "ownerId": "ownerId",
    "privacy": "private",
    "properties": {
      "size": 0,
      "animationFrameCount": 0,
      "bitrate": 0,
      "boneCount": 0,
      "channels": 0,
      "classification": "effect",
      "codecName": "codecName",
      "description": "description",
      "dimensions": [
        0,
        0,
        0
      ],
      "duration": 0,
      "faceCount": 0,
      "format": "format",
      "frameRate": 0,
      "hasAnimations": true,
      "hasNormals": true,
      "hasSkeleton": true,
      "hasUVs": true,
      "height": 0,
      "nbFrames": 0,
      "sampleRate": 0,
      "transcription": {
        "text": "text"
      },
      "vertexCount": 0,
      "width": 0
    },
    "source": "3d23d",
    "status": "error",
    "tags": [
      "string"
    ],
    "updatedAt": "updatedAt",
    "url": "url",
    "automaticCaptioning": "automaticCaptioning",
    "description": "description",
    "embedding": [
      0
    ],
    "firstFrame": {
      "assetId": "assetId",
      "url": "url"
    },
    "isHidden": true,
    "lastFrame": {
      "assetId": "assetId",
      "url": "url"
    },
    "nsfw": [
      "string"
    ],
    "originalFileUrl": "originalFileUrl",
    "outputIndex": 0,
    "preview": {
      "assetId": "assetId",
      "url": "url"
    },
    "thumbnail": {
      "assetId": "assetId",
      "url": "url"
    }
  }
}