# Assets

## List

**get** `/assets`

List assets of a project team. Supports both public access (via the `Authorization` header set to `public-auth-token`) and authenticated user access (including API keys).

### Query Parameters

- `authorId: optional string` List assets generated by a specific author (the user that created the asset)
- `collectionId: optional unknown`
- `createdAfter: optional string` Filter results to only return assets created after the specified ISO string date (exclusive). Requires the sortBy parameter to be "createdAt"
- `createdBefore: optional string` Filter results to only return assets created before the specified ISO string date (exclusive). Requires the sortBy parameter to be "createdAt"
- `inferenceId: optional string` List assets generated from a specific inference
- `modelId: optional string` List assets generated from all inferences coming from a specific model (this is not the training images)
- `originalAssets: optional boolean` If set to true, returns the original asset without transformation
- `pageSize: optional number` The number of items to return in the response. The default value is 50, the maximum value is 100, and the minimum value is 1
- `paginationToken: optional string` A token you received in a previous request to query the next page of items
- `parentAssetId: optional string` List all the children assets that were generated from a specific parent asset
- `privacy: optional string` Filters results by asset privacy. If set to public, it will return *all* public assets from all organizations
- `rootAssetId: optional string` List all the children assets that were generated from a specific root asset
- `sortBy: optional string` Sort results by createdAt or updatedAt
- `sortDirection: optional string` Sort results in ascending (asc) or descending (desc) order
- `tags: optional string` List of tags, comma separated. Only for public assets on all teams.
- `type: optional "inference-txt2img" or "inference-txt2img-ip-adapter" or "inference-txt2img-texture" or 72 more` List all the assets of a specific type. The parameters "type" and "types" cannot be used together. Can be any of the following values: inference-txt2img, inference-txt2img-ip-adapter, inference-txt2img-texture, inference-img2img, inference-img2img-ip-adapter, inference-img2img-texture, inference-inpaint, inference-inpaint-ip-adapter, inference-reference, inference-reference-texture, inference-controlnet, inference-controlnet-ip-adapter, inference-controlnet-img2img, inference-controlnet-reference, inference-controlnet-inpaint, inference-controlnet-inpaint-ip-adapter, inference-controlnet-texture, background-removal, canvas, canvas-export, canvas-drawing, detection, patch, pixelization, upscale, upscale-texture, upscale-skybox, vectorization, segment, segmentation-image, segmentation-mask, skybox-base-360, skybox-hdri, skybox-3d, restyle, reframe, generative-fill, texture, texture-height, texture-normal, texture-smoothness, texture-metallic, texture-edge, texture-ao, texture-albedo, image-prompt-editing, unknown, img23d, txt23d, video23d, 3d23d, 3d23d-texture, 3d-texture, 3d-texture-mtl, 3d-texture-albedo, 3d-texture-normal, 3d-texture-roughness, 3d-texture-metallic, img2video, txt2audio, audio2audio, audio2video, video2audio, voice-clone, video2video, video2img, txt2img, img2img, txt2video, uploaded, uploaded-video, uploaded-audio, uploaded-3d, uploaded-avatar, upscale-video; assets with a type starting with "inference-" will be returned
- `types: optional array of "inference-txt2img" or "inference-txt2img-ip-adapter" or "inference-txt2img-texture" or 72 more` List of the asset types to request. The parameters "type" and "types" cannot be used together. Can be any of the following values: inference-txt2img, inference-txt2img-ip-adapter, inference-txt2img-texture, inference-img2img, inference-img2img-ip-adapter, inference-img2img-texture, inference-inpaint, inference-inpaint-ip-adapter, inference-reference, inference-reference-texture, inference-controlnet, inference-controlnet-ip-adapter, inference-controlnet-img2img, inference-controlnet-reference, inference-controlnet-inpaint, inference-controlnet-inpaint-ip-adapter, inference-controlnet-texture, background-removal, canvas, canvas-export, canvas-drawing, detection, patch, pixelization, upscale, upscale-texture, upscale-skybox, vectorization, segment, segmentation-image, segmentation-mask, skybox-base-360, skybox-hdri, skybox-3d, restyle, reframe, generative-fill, texture, texture-height, texture-normal, texture-smoothness, texture-metallic, texture-edge, texture-ao, texture-albedo, image-prompt-editing, unknown, img23d, txt23d, video23d, 3d23d, 3d23d-texture, 3d-texture, 3d-texture-mtl, 3d-texture-albedo, 3d-texture-normal, 3d-texture-roughness, 3d-texture-metallic, img2video, txt2audio, audio2audio, audio2video, video2audio, voice-clone, video2video, video2img, txt2img, img2img, txt2video, uploaded, uploaded-video, uploaded-audio, uploaded-3d, uploaded-avatar, upscale-video
- `updatedAfter: optional string` Filter results to only return assets updated after the specified ISO string date (exclusive). Requires the sortBy parameter to be "updatedAt"
- `updatedBefore: optional string` Filter results to only return assets updated before the specified ISO string date (exclusive).
Requires the sortBy parameter to be "updatedAt"

### Returns

- `assets: array of object { id, authorId, collectionIds, 24 more }`
- `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw")
- `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea")
- `collectionIds: array of string` A list of CollectionId this asset belongs to
- `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities
  - `"DETECTION"`
  - `"GENERATIVE_FILL"`
  - `"PIXELATE"`
  - `"PROMPT_EDITING"`
  - `"REFINE"`
  - `"REFRAME"`
  - `"REMOVE_BACKGROUND"`
  - `"SEGMENTATION"`
  - `"UPSCALE"`
  - `"UPSCALE_360"`
  - `"VECTORIZATION"`
- `kind: "3d" or "audio" or "document" or 4 more` The kind of asset
  - `"3d"`
  - `"audio"`
  - `"document"`
  - `"image"`
  - `"image-hdr"`
  - `"json"`
  - `"video"`
- `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information
- `kind: "3d" or "audio" or "document" or 4 more`
  - `"3d"`
  - `"audio"`
  - `"document"`
  - `"image"`
  - `"image-hdr"`
  - `"json"`
  - `"video"`
- `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset.
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Int to set between 0 and 255 for the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId that may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by a canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tile. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux Lora Trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process.
The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by a depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. Note: a small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images.
Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable outputting the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images.
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image.
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset, content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb', etc.) 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description. It will contain, in order of priority: the manual description, the advanced captioning when the asset is used in a training flow, or the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url: the url of the original file, without any conversion. Only available for some specific video, audio and threeD assets. Only specified if the given asset data has been replaced with a new file during the creation of the asset.
- `outputIndex: optional number` The output index of the asset within a job This index is an positive integer that starts at 0 It is used to differentiate between multiple outputs of the same job If the job has only one output, this index is 0 - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` - `nextPaginationToken: optional string` A token to query the next page of assets ### Example ```http curl https://api.cloud.scenario.com/v1/assets \ -u "$SCENARIO_SDK_API_KEY:SCENARIO_SDK_API_SECRET" ``` #### Response ```json { "assets": [ { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, 
"lengthThreshold": 0, "lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, 
"vertexCount": 0, "width": 0 }, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } } ], "nextPaginationToken": "nextPaginationToken" } ``` ## Upload **post** `/assets` Upload an image or canvas ### Query Parameters - `originalAssets: optional boolean` If set to true, returns the original asset without transformation ### Body Parameters - `name: string` The original file name of the image (example: "low-res-image.jpg"). - `canvas: optional string` The canvas to upload as a stringified JSON. Ignored if `image` is provided. - `collectionIds: optional array of string` The IDs of the collections to add the asset to. If provided, the new asset will be added to the collections. - `hide: optional boolean` Toggles the hidden status of the asset. - `image: optional string` The image to upload in base64 format string. - `parentId: optional string` Specifies the parent asset Id for the asset. - `thumbnail: optional string` The thumbnail for the canvas in base64 format string. Ignored if `image` is provided. 
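The body parameters above can be assembled with a small helper before sending the request. A minimal sketch (the helper name and the commented `requests` call are illustrative, not part of the API):

```python
import base64

API_URL = "https://api.cloud.scenario.com/v1/assets"

def build_upload_body(name, image_bytes, collection_ids=None,
                      hide=False, parent_id=None):
    """Assemble the JSON body for POST /assets.

    `name` is the original file name and `image` is the file content
    encoded as a base64 string, per the body parameters above.
    Optional fields are only included when set.
    """
    body = {
        "name": name,
        "image": base64.b64encode(image_bytes).decode("ascii"),
    }
    if collection_ids:
        body["collectionIds"] = list(collection_ids)
    if hide:
        body["hide"] = True
    if parent_id:
        body["parentId"] = parent_id
    return body

# The request would then be sent with HTTP Basic auth, e.g.:
# requests.post(API_URL,
#               json=build_upload_body("low-res-image.jpg", data),
#               auth=(API_KEY, API_SECRET))
```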
### Returns - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId that may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by the canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux Lora Trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by the depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for the Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for the Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
The input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Whether to output the contours. - `resultImage: optional boolean` Whether to output the cut-out object. - `resultMask: optional boolean` Whether to return the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset, content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb', etc.) 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The current status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description. It will contain, in priority: - the manual description - the advanced captioning when the asset is used in a training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file URL, without any conversion. Only available for some specific video, audio and 3D assets. Only specified if the given asset data was replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` - `job: optional object { createdAt, jobId, jobType, 8 more }` - `createdAt: string` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `jobId: string` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `jobType: "assets-download" or "canvas-export" or "caption" or 36 more` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: object { assetIds, error, flow, 6 more }` Metadata of the job with some additional information - `assetIds: optional array of string` List of produced assets for this job - `error: optional string` The error for the job, if any - `flow: optional array of object { id, status, type, 15 more }` The flow of the job. Only available for workflow jobs. 
- `id: string` The id of the node. - `status: "failure" or "pending" or "processing" or 2 more` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: "custom-model" or "for-each" or "generate-prompt" or 7 more` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: optional array of object { assetId, url }` List of produced assets for this node. - `assetId: string` - `url: string` - `count: optional number` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `dependsOn: optional array of string` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `includeOutputsInWorkflowJob: optional true` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: optional array of object { name, type, allowedValues, 26 more }` The inputs of the node. - `name: string` The name that must be used to call the model through the API - `type: "boolean" or "file" or "file_array" or 7 more` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowedValues: optional array of unknown` The allowed values for the input. 
For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `backgroundBehavior: optional "opaque" or "transparent"` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: optional boolean` Whether the input is a color or not. Only available for `string` input type. - `costImpact: optional boolean` Whether this input affects the model's cost calculation - `default: optional unknown` The default value for the input - `description: optional string` Help text displayed in the UI to provide additional information about the input - `group: optional string` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: optional string` Hint text displayed in the UI as a tooltip to guide the user - `inputs: optional array of map[unknown]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: optional array of array of object { name, type, allowedValues, 25 more }` The configured items for inputs_array type inputs. Each item is an array of SubNodeInput that need ref/value resolution. Only available for inputs_array type. - `name: string` The name that must be used to call the model through the API - `type: "boolean" or "file" or "file_array" or 7 more` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowedValues: optional array of unknown` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. 
- `backgroundBehavior: optional "opaque" or "transparent"` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: optional boolean` Whether the input is a color or not. Only available for `string` input type. - `costImpact: optional boolean` Whether this input affects the model's cost calculation - `default: optional unknown` The default value for the input - `description: optional string` Help text displayed in the UI to provide additional information about the input - `group: optional string` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: optional string` Hint text displayed in the UI as a tooltip to guide the user - `inputs: optional array of map[unknown]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: optional "3d" or "audio" or "document" or 4 more` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL that lacks the `data:<kind>` prefix. - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: optional string` The label displayed in the UI for this input - `maskFrom: optional string` The name of the file input field to use as the mask source - `max: optional number` The maximum allowed value. Only available for `number` and `array` input types. - `maxLength: optional number` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `maxSize: optional number` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. 
- `min: optional number` The minimum allowed value. Only available for `number` and array input types. - `minLength: optional number` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `modelTypes: optional array of "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: optional boolean` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: optional string` Placeholder text for the input. Only available for 'string' input type. - `prompt: optional boolean` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `promptSpark: optional boolean` Whether the input is used with prompt spark. Only available for `string` input type. 
- `ref: optional object { conditional, equal, name, node }` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: optional array of string` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. - `equal: optional string` This is the desired node output value if ref is an if/else node. - `name: optional string` The name of the input or output to reference. If the node is 'workflow', the name of the referenced workflow input is required. If the node is a node id, the name is not mandatory; to get all outputs of the node, you can use the name 'all'. - `node: optional string` The node id or 'workflow' if the source is a workflow input. - `required: optional object { always, conditionalValues, ifDefined, ifNotDefined }` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: optional boolean` Whether the input is always required - `conditionalValues: optional unknown` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `ifDefined: optional unknown` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `ifNotDefined: optional unknown` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: optional number` The step increment for numeric inputs. Only available for `number` input type. - `value: optional unknown` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: optional "3d" or "audio" or "document" or 4 more` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API cannot create the asset on the fly from a data URL unless the URL includes the `data:<kind>,` prefix. - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: optional string` The label displayed in the UI for this input - `maskFrom: optional string` The name of the file input field to use as the mask source - `max: optional number` The maximum allowed value. Only available for `number` and `array` input types. - `maxLength: optional number` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `maxSize: optional number` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: optional number` The minimum allowed value. 
Only available for `number` and array input types. - `minLength: optional number` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `modelTypes: optional array of "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: optional boolean` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: optional string` Placeholder text for the input. Only available for 'string' input type. - `prompt: optional boolean` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `promptSpark: optional boolean` Whether the input is used with prompt spark. Only available for `string` input type. 
- `ref: optional object { conditional, equal, name, node }` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: optional array of string` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. - `equal: optional string` This is the desired node output value if ref is an if/else node. - `name: optional string` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, unless you want all outputs of the node; to get all outputs of a node, use the name 'all'. - `node: optional string` The node id or 'workflow' if the source is a workflow input. - `required: optional object { always, conditionalValues, ifDefined, ifNotDefined }` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: optional boolean` Whether the input is always required - `conditionalValues: optional unknown` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `ifDefined: optional unknown` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `ifNotDefined: optional unknown` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: optional number` The step increment for numeric inputs. Only available for `number` input type. - `value: optional unknown` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `items: optional array of string` Statically-configured items for a List node. The node outputs this array as-is when executed. Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iterationIndex: optional number` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `jobId: optional string` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available once the node has started. A node that is still "Pending" in a running workflow job has not started. - `logic: optional object { cases, default, transform }` The logic of the node. Only available for logic nodes. - `cases: optional array of object { condition, value }` The cases of the logic. Only available for if/else nodes. - `condition: string` - `value: string` - `default: optional string` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: optional string` The transform of the logic. 
Only available for transform nodes. - `logicType: optional "if-else"` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loopBodyNodeIds: optional array of string` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loopNodeId: optional string` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `modelId: optional string` The model id for the node. Mainly used for custom model tasks. - `output: optional unknown` The output of the node. Only available for logic nodes. - `workflowId: optional string` The workflow id for the node. Mainly used for workflow tasks. - `hint: optional string` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: optional map[unknown]` The inputs for the job - `output: optional map[unknown]` May contain the output of the job for specific custom model jobs. Only available for custom models which generate non-asset outputs. Example: LLM text results. - `outputModelId: optional string` For voice-clone jobs: the ID of the model being trained. - `workflowId: optional string` The workflow ID of the job if the job is part of a workflow. - `workflowJobId: optional string` The workflow job ID of the job if the job is part of a workflow job. - `progress: number` Progress of the job (between 0 and 1) - `status: "canceled" or "failure" or "finalizing" or 5 more` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `statusHistory: array of object { date, status }` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. 
- `date: string` - `status: "canceled" or "failure" or "finalizing" or 5 more` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updatedAt: string` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `authorId: optional string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: optional object { cuCost, cuDiscount }` The billing of the job - `cuCost: number` - `cuDiscount: number` - `ownerId: optional string` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") ### Example ```http curl https://api.cloud.scenario.com/v1/assets \ -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" ``` #### Response ```json { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, 
"layerDifference": 0, "lengthThreshold": 0, "lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { 
"text": "text" }, "vertexCount": 0, "width": 0 }, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } }, "job": { "createdAt": "createdAt", "jobId": "jobId", "jobType": "assets-download", "metadata": { "assetIds": [ "string" ], "error": "error", "flow": [ { "id": "id", "status": "failure", "type": "custom-model", "assets": [ { "assetId": "assetId", "url": "url" } ], "count": 0, "dependsOn": [ "string" ], "includeOutputsInWorkflowJob": true, "inputs": [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "items": [ [ { "name": "name", "type": "boolean", "allowedValues": [ {} ], "backgroundBehavior": "opaque", "color": true, "costImpact": true, "default": {}, "description": "description", "group": "group", "hint": "hint", "inputs": [ { "foo": "bar" } ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], "parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ] ], "kind": "3d", "label": "label", "maskFrom": "maskFrom", "max": 0, "maxLength": 0, "maxSize": 0, "min": 0, "minLength": 0, "modelTypes": [ "custom" ], 
"parent": true, "placeholder": "placeholder", "prompt": true, "promptSpark": true, "ref": { "conditional": [ "string" ], "equal": "equal", "name": "name", "node": "node" }, "required": { "always": true, "conditionalValues": {}, "ifDefined": {}, "ifNotDefined": {} }, "step": 1, "value": {} } ], "items": [ "string" ], "iterationIndex": 0, "jobId": "jobId", "logic": { "cases": [ { "condition": "condition", "value": "value" } ], "default": "default", "transform": "transform" }, "logicType": "if-else", "loopBodyNodeIds": [ "string" ], "loopNodeId": "loopNodeId", "modelId": "modelId", "output": {}, "workflowId": "workflowId" } ], "hint": "hint", "input": { "foo": "bar" }, "output": { "foo": "bar" }, "outputModelId": "outputModelId", "workflowId": "workflowId", "workflowJobId": "workflowJobId" }, "progress": 0, "status": "canceled", "statusHistory": [ { "date": "date", "status": "canceled" } ], "updatedAt": "updatedAt", "authorId": "authorId", "billing": { "cuCost": 0, "cuDiscount": 0 }, "ownerId": "ownerId" } } ``` ## Delete Multiple **delete** `/assets` Delete multiple assets ### Body Parameters - `assetIds: array of string` The ids of the assets to delete. (Max 100 at once) ### Example ```http curl https://api.cloud.scenario.com/v1/assets \ -X DELETE \ -u "$SCENARIO_SDK_API_KEY:SCENARIO_SDK_API_SECRET" ``` #### Response ```json {} ``` ## Get Bulk **post** `/assets/get-bulk` Get multiple assets by their IDs ### Query Parameters - `originalAssets: optional boolean` If set to true, returns the original asset without transformation ### Body Parameters - `assetIds: optional array of string` The list of asset ids the team has read access to. Limit of 200 assets. 
### Returns - `assets: array of object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 controlling the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId that may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tile. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0) For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001") Only available for Flux Lora Trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less the creativity will be preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Whether to output the contours. - `resultImage: optional boolean` Whether to output the cut-out object. - `resultMask: optional boolean` Whether to return the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image, takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Example

```http
curl "https://api.cloud.scenario.com/v1/assets?pageSize=50" \
  -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET"
```

#### Response

```json { "assets": [ { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, 
"lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 
}, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } } ] } ``` ## Retrieve **get** `/assets/{assetId}` Get the details of an asset. Supports both public access (via the `Authorization` header set to `public-auth-token`) and authenticated user access (including API keys). ### Path Parameters - `assetId: string` ### Query Parameters - `originalAssets: optional boolean` If set to true, returns the original asset without transformation - `withEmbedding: optional boolean` Include the embedding in the response ### Returns - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - 
`"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer to set between 0 and 255 for the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId that may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less the creativity will be preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. Note: a small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
The input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable output of the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Example

```http
curl https://api.cloud.scenario.com/v1/assets/$ASSET_ID \
  -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET"
```

#### Response

```json { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, "lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": 
"mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 }, "source": "3d23d", "status": "error", "tags": [ "string" 
], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } } } ``` ## Update **put** `/assets/{assetId}` Update a canvas asset ### Path Parameters - `assetId: string` ### Query Parameters - `originalAssets: optional boolean` If set to true, returns the original asset without transformation ### Body Parameters - `canvas: optional string` The new value for the canvas as a stringified JSON. - `description: optional string` The new description of the asset. - `disableSnapshot: optional boolean` If true, no snapshot will be created for this update. - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire. - `lockId: optional string` The value of the lock to use when updating a locked canvas. - `name: optional string` The new name for the canvas. - `thumbnail: optional string` The new thumbnail for the canvas in base64 format string. 
### Returns - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by the Canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less the creativity will be preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by the depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for the Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for the Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable outputting the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of Canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A Canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset, content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb', etc.) 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0 and is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0 - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Example ```http curl https://api.cloud.scenario.com/v1/assets/$ASSET_ID \ -X PUT \ -H 'Content-Type: application/json' \ -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" \ -d '{}' ``` #### Response ```json { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, 
"lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 
}, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } } } ``` ## Duplicate **post** `/assets/{assetId}/copy` Duplicate an asset ### Path Parameters - `assetId: string` ### Query Parameters - `originalAssets: optional boolean` If set to true, returns the original asset without transformation ### Body Parameters - `targetProjectId: optional string` The id of the project to copy the asset to. If not provided, the asset will be copied to the canvas project. ### Returns - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - 
`"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by the Canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by the depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for the Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for the Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable output of the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of Canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A Canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if a skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description. It will contain, in order of priority: - the manual description - the advanced captioning when the asset is used in a training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file URL, without any conversion. Only available for some specific video, audio, and 3D assets. Only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Example ```http curl https://api.cloud.scenario.com/v1/assets \ -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" ``` #### Response ```json { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, 
"lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 
}, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } } } ``` ## Lock **put** `/assets/{assetId}/lock` Lock a canvas ### Path Parameters - `assetId: string` ### Query Parameters - `originalAssets: optional boolean` If set to true, returns the original asset without transformation ### Body Parameters - `lockExpiresAt: string` The ISO timestamp when the lock on the canvas will expire. ### Returns - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or 
"3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may have been changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by the Canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by the depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for the Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for the Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Whether to output the contours. - `resultImage: optional boolean` Whether to output the cut-out object. - `resultMask: optional boolean` Whether to return the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of Canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A Canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. These must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
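The `refinementSteps` rule above is easy to misread, so here is a minimal sketch of it in Python. The helper name is illustrative only, not part of the API:

```python
def refinement_passes(refinement_steps: int, scaling_factor: float) -> int:
    """Hypothetical helper: how many times the refinement process runs.

    Per the docs: if scalingFactor == 1, refinement is applied
    (1 + refinementSteps) times; if scalingFactor > 1, it is applied
    refinementSteps times.
    """
    if scaling_factor == 1:
        return 1 + refinement_steps
    return refinement_steps
```

For example, `refinementSteps: 2` with `scalingFactor: 1` yields 3 refinement passes, while the same value with `scalingFactor: 2` yields 2.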
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The current status of the asset - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description. Contains, in order of priority: the manual description; the advanced captioning, when the asset is used in a training flow; or the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The URL of the original file, without any conversion. Only available for some specific video, audio, and 3D assets. Only specified if the given asset data has been replaced with a new file during the creation of the asset. 
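The `description` precedence above is a simple fallback chain. A minimal sketch, assuming hypothetical argument names (only the precedence itself comes from the docs):

```python
from typing import Optional

def effective_description(
    manual: Optional[str] = None,
    advanced_captioning: Optional[str] = None,
    automatic_captioning: Optional[str] = None,
    in_training_flow: bool = False,
) -> Optional[str]:
    """Resolve the description by priority: the manual description first,
    then the advanced captioning (only when the asset is used in a
    training flow), then the automatic captioning."""
    if manual:
        return manual
    if in_training_flow and advanced_captioning:
        return advanced_captioning
    return automatic_captioning
```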
- `outputIndex: optional number` The output index of the asset within a job. This index is a non-negative integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0 - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` - `lockId: optional string` The value of the lock to use when updating/unlocking the canvas. ### Example ```http curl https://api.cloud.scenario.com/v1/assets/$ASSET_ID/lock \ -X PUT \ -H 'Content-Type: application/json' \ -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" \ -d '{ "lockExpiresAt": "lockExpiresAt" }' ``` #### Response ```json { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", 
"inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, "lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, 
"hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 }, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } }, "lockId": "lockId" } ``` ## List Snapshots **get** `/assets/{assetId}/snapshots` List snapshots of a canvas-type asset ### Path Parameters - `assetId: string` ### Query Parameters - `pageSize: optional number` The number of items to return in the response. The default value is 10, maximum value is 100, minimum value is 10 - `paginationToken: optional string` A token you received in a previous request to query the next page of items ### Returns - `snapshots: array of object { authorId, hash, rawData, takenAt }` - `authorId: string` - `hash: string` - `rawData: string` - `takenAt: number` - `nextPaginationToken: optional string` A token to query the next page of snapshots ### Example ```http curl https://api.cloud.scenario.com/v1/assets/$ASSET_ID/snapshots \ -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" ``` #### Response ```json { "snapshots": [ { "authorId": "authorId", "hash": "hash", "rawData": "rawData", "takenAt": 0 } ], "nextPaginationToken": "nextPaginationToken" } ``` ## Update Tags **put** `/assets/{assetId}/tags` Add/delete tags on a specific asset ### Path Parameters - `assetId: string` ### Body Parameters - `add: optional array of string` The list of tags to add - `delete: optional array of string` The list of tags to delete - `strict: optional boolean` If true, the endpoint will throw an error if: - one of 
the tags to add already exists - one of the tags to delete is not found. If false, the endpoint behaves idempotently ### Returns - `added: array of string` The list of added tags - `deleted: array of string` The list of deleted tags ### Example ```http curl https://api.cloud.scenario.com/v1/assets/$ASSET_ID/tags \ -X PUT \ -H 'Content-Type: application/json' \ -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" \ -d '{}' ``` #### Response ```json { "added": [ "string" ], "deleted": [ "string" ] } ``` ## Unlock **put** `/assets/{assetId}/unlock` Unlock a canvas ### Path Parameters - `assetId: string` ### Query Parameters - `originalAssets: optional boolean` If set to true, returns the original asset without transformation ### Body Parameters - `forceUnlock: optional boolean` If true, a lockId does not need to be passed. - `lockId: optional string` The value of the lock on this canvas. ### Returns - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionIds this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - 
`"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset, e.g. 'inference-txt2img' represents an asset generated from a text-to-image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId that may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by a Canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA-trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by a depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. Note: a small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Whether to output the contours. - `resultImage: optional boolean` Whether to output the cut-out object. - `resultMask: optional boolean` Whether to return the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of Canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A Canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. These must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
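The curl examples in this reference authenticate with HTTP Basic auth (`-u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET"`). A minimal sketch of building (not sending) the Unlock request with only the standard library; the helper name and structure are illustrative, while the URL, method, and body parameters come from the docs above:

```python
import base64
import json
import urllib.request

API_BASE = "https://api.cloud.scenario.com/v1"

def build_unlock_request(asset_id, api_key, api_secret,
                         lock_id=None, force_unlock=False):
    """Build a PUT /assets/{assetId}/unlock request (hypothetical helper).

    Per the docs: pass lockId, or set forceUnlock to skip the lockId check.
    """
    body = {"forceUnlock": True} if force_unlock else {"lockId": lock_id}
    # curl -u key:secret is HTTP Basic auth: base64("key:secret")
    token = base64.b64encode(f"{api_key}:{api_secret}".encode()).decode()
    return urllib.request.Request(
        f"{API_BASE}/assets/{asset_id}/unlock",
        data=json.dumps(body).encode(),
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
```

Passing the result to `urllib.request.urlopen` would perform the call.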
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0 and is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0 - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Example ```http curl https://api.cloud.scenario.com/v1/assets/$ASSET_ID/unlock \ -X PUT \ -H 'Content-Type: application/json' \ -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" \ -d '{}' ``` #### Response ```json { "asset": { "id": "id", "authorId": "authorId", "collectionIds": [ "string" ], "createdAt": "createdAt", "editCapabilities": [ "DETECTION" ], "kind": "3d", "metadata": { "kind": "3d", "type": "3d-texture", "angular": 0, "aspectRatio": "aspectRatio", "backgroundOpacity": 0, "baseModelId": "baseModelId", "bbox": [ 0, 0, 0, 0 ], "betterQuality": true, "cannyStructureImage": "cannyStructureImage", "clustering": true, "colorCorrection": true, "colorMode": "colorMode", "colorPrecision": 0, "concepts": [ { "modelId": "modelId", "scale": -2, "modelEpoch": "modelEpoch" } ], "contours": [ [ [ [ 0 ] ] ] ], "controlEnd": 0, "copiedAt": "copiedAt", "cornerThreshold": 0, "creativity": 0, "creativityDecay": 0, "defaultParameters": true, "depthFidelity": 0, "depthImage": "depthImage", "detailsLevel": -50, "dilate": 0, "factor": 0, "filterSpeckle": 0, "fractality": 0, "geometryEnforcement": 0, "guidance": 0, "halfMode": true, "hdr": 0, "height": 0, "highThreshold": 0, "horizontalExpansionRatio": 1, "image": "image", "imageFidelity": 0, "imageType": "seamfull", "inferenceId": "inferenceId", "inputFidelity": "high", "inputLocation": "bottom", "invert": true, "keypointThreshold": 0, "layerDifference": 0, "lengthThreshold": 0, 
"lockExpiresAt": "lockExpiresAt", "lowThreshold": 0, "mask": "mask", "maxIterations": 0, "maxThreshold": 0, "minThreshold": 0, "modality": "canny", "mode": "mode", "modelId": "modelId", "modelType": "custom", "name": "name", "nbMasks": 0, "negativePrompt": "negativePrompt", "negativePromptStrength": 0, "numInferenceSteps": 5, "numOutputs": 1, "originalAssetId": "originalAssetId", "outputIndex": 0, "overlapPercentage": 0, "overrideEmbeddings": true, "parentId": "parentId", "parentJobId": "parentJobId", "pathPrecision": 0, "points": [ [ 0 ], [ 0 ], [ 0 ] ], "polished": 0, "preset": "preset", "progressPercent": 0, "prompt": "prompt", "promptFidelity": 0, "raised": 0, "referenceImages": [ "string" ], "refinementSteps": 0, "removeBackground": true, "resizeOption": 0.1, "resultContours": true, "resultImage": true, "resultMask": true, "rootParentId": "rootParentId", "saveFlipbook": true, "scalingFactor": 1, "scheduler": "scheduler", "seed": "seed", "sharpen": true, "shiny": 0, "size": 0, "sketch": true, "sourceProjectId": "sourceProjectId", "spliceThreshold": 0, "strength": 0, "structureFidelity": 0, "structureImage": "structureImage", "style": "3d-cartoon", "styleFidelity": 0, "styleImages": [ "string" ], "styleImagesFidelity": 0, "targetHeight": 0, "targetWidth": 1024, "text": "text", "texture": "texture", "thumbnail": { "assetId": "assetId", "url": "url" }, "tileStyle": true, "trainingImage": true, "verticalExpansionRatio": 1, "width": 1024 }, "mimeType": "mimeType", "ownerId": "ownerId", "privacy": "private", "properties": { "size": 0, "animationFrameCount": 0, "bitrate": 0, "boneCount": 0, "channels": 0, "classification": "effect", "codecName": "codecName", "description": "description", "dimensions": [ 0, 0, 0 ], "duration": 0, "faceCount": 0, "format": "format", "frameRate": 0, "hasAnimations": true, "hasNormals": true, "hasSkeleton": true, "hasUVs": true, "height": 0, "nbFrames": 0, "sampleRate": 0, "transcription": { "text": "text" }, "vertexCount": 0, "width": 0 
}, "source": "3d23d", "status": "error", "tags": [ "string" ], "updatedAt": "updatedAt", "url": "url", "automaticCaptioning": "automaticCaptioning", "description": "description", "embedding": [ 0 ], "firstFrame": { "assetId": "assetId", "url": "url" }, "isHidden": true, "lastFrame": { "assetId": "assetId", "url": "url" }, "nsfw": [ "string" ], "originalFileUrl": "originalFileUrl", "outputIndex": 0, "preview": { "assetId": "assetId", "url": "url" }, "thumbnail": { "assetId": "assetId", "url": "url" } } } ``` ## Domain Types ### Asset List Response - `AssetListResponse object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by the Canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux Lora Trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less the creativity will be preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by the depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for the Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. Note: a small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for the Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
The input has to be of the same (seamless) type. - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for the Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
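The `refinementSteps` rule above is easy to misread, so it helps to spell it out in code. This is a direct transcription of the documented rule (a sanity-check helper, not part of the API):

```python
def refinement_passes(refinement_steps, scaling_factor):
    """Number of times the refinement process runs, per the schema:
    (1 + refinementSteps) passes when scalingFactor == 1,
    refinementSteps passes when scalingFactor > 1.
    """
    if scaling_factor == 1:
        return 1 + refinement_steps
    return refinement_steps
```

So with `scalingFactor == 1` there is always at least one refinement pass, even when `refinementSteps` is 0.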
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable outputting the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A Canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must reference existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
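When offering edit actions in a client, it is safer to consult the asset's `editCapabilities` list (described elsewhere in this response object) than to guess from the asset kind. An illustrative check over a parsed asset dict:

```python
def can_upscale(asset):
    """True when the asset advertises the UPSCALE edit capability.

    `asset` is a parsed asset dict; a missing editCapabilities list is
    treated as "no capabilities".
    """
    return "UPSCALE" in asset.get("editCapabilities", [])
```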
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; the content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if a skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0 and is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0 - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Asset Upload Response - `AssetUploadResponse object { asset, job }` - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionIds this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by the canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process.
The lower the value, the less creativity will be preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by the depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for the Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for the Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images.
The input has to be of the same (seamless) type. - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface?
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable outputting the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images.
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image.
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; the content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if a skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb')
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a non-negative integer that starts at 0 and is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` - `job: optional object { createdAt, jobId, jobType, 8 more }` - `createdAt: string` The job creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `jobId: string` The job ID (example: "job_ocZCnG1Df35XRL1QyCZSRxAG8") - `jobType: "assets-download" or "canvas-export" or "caption" or 36 more` The type of job - `"assets-download"` - `"canvas-export"` - `"caption"` - `"caption-llava"` - `"custom"` - `"describe-style"` - `"detection"` - `"embed"` - `"flux"` - `"flux-model-training"` - `"generate-prompt"` - `"image-generation"` - `"image-prompt-editing"` - `"inference"` - `"mesh-preview-rendering"` - `"model-download"` - `"model-import"` - `"model-training"` - `"musubi-model-training"` - `"openai-image-generation"` - `"patch-image"` - `"pixelate"` - `"reframe"` - `"remove-background"` - `"repaint"` - `"restyle"` - `"segment"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"skybox-upscale-360"` - `"texture"` - `"translate"` - `"upload"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"vectorize"` - `"workflow"` - `metadata: object { assetIds, error, flow, 6 more }` Metadata of the job with some additional information - `assetIds: optional array of string` List of produced assets for this job - `error: optional string` The error for the job, if any - `flow: optional array of object { id, status, type, 15 more }` The flow of the job. Only available for workflow jobs.
- `id: string` The id of the node. - `status: "failure" or "pending" or "processing" or 2 more` The status of the node. Only available for WorkflowJob nodes. - `"failure"` - `"pending"` - `"processing"` - `"skipped"` - `"success"` - `type: "custom-model" or "for-each" or "generate-prompt" or 7 more` The type of the job for the node. - `"custom-model"` - `"for-each"` - `"generate-prompt"` - `"list"` - `"logic"` - `"model"` - `"remove-background"` - `"transform"` - `"user-approval"` - `"workflow"` - `assets: optional array of object { assetId, url }` List of produced assets for this node. - `assetId: string` - `url: string` - `count: optional number` Fixed number of iterations for a ForEach node. When set, the loop runs exactly `count` times regardless of array input. When not set, the loop iterates over the resolved array input. Only available for ForEach nodes. - `dependsOn: optional array of string` The nodes that this node depends on. Only available for nodes that have dependencies. Mainly used for user approval nodes. - `includeOutputsInWorkflowJob: optional true` If true, the outputs of this node will be included in the workflow job's final output. Only applicable to producing nodes (custom-model, inference, etc.). By default, only last nodes (nodes not referenced by other nodes) contribute to outputs. Set this to true to also include intermediate nodes in the final output. Note: This should only be set to `true` or left undefined. - `true` - `inputs: optional array of object { name, type, allowedValues, 26 more }` The inputs of the node. - `name: string` The name that must be used to call the model through the API - `type: "boolean" or "file" or "file_array" or 7 more` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowedValues: optional array of unknown` The allowed values for the input.
For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown. - `backgroundBehavior: optional "opaque" or "transparent"` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: optional boolean` Whether the input is a color or not. Only available for `string` input type. - `costImpact: optional boolean` Whether this input affects the model's cost calculation - `default: optional unknown` The default value for the input - `description: optional string` Help text displayed in the UI to provide additional information about the input - `group: optional string` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: optional string` Hint text displayed in the UI as a tooltip to guide the user - `inputs: optional array of map[unknown]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `items: optional array of array of object { name, type, allowedValues, 25 more }` The configured items for inputs_array type inputs. Each item is an array of SubNodeInputs that need ref/value resolution. Only available for inputs_array type. - `name: string` The name that must be used to call the model through the API - `type: "boolean" or "file" or "file_array" or 7 more` The data type of the input - `"boolean"` - `"file"` - `"file_array"` - `"inputs_array"` - `"model"` - `"model_array"` - `"number"` - `"number_array"` - `"string"` - `"string_array"` - `allowedValues: optional array of unknown` The allowed values for the input. For `string` or `number` types, creates a single-select dropdown. For `string_array` type, creates a multi-select dropdown.
- `backgroundBehavior: optional "opaque" or "transparent"` Specifies the background behavior for the input. Only available for `file` and `file_array` input types with kind `image`. - `"opaque"` - `"transparent"` - `color: optional boolean` Whether the input is a color or not. Only available for `string` input type. - `costImpact: optional boolean` Whether this input affects the model's cost calculation - `default: optional unknown` The default value for the input - `description: optional string` Help text displayed in the UI to provide additional information about the input - `group: optional string` Used to visually group inputs together in the UI. Inputs with the same group value appear consecutively in the UI. - `hint: optional string` Hint text displayed in the UI as a tooltip to guide the user - `inputs: optional array of map[unknown]` The list of inputs which form an object within a container array. All inputs are the same as the current object. This is only available for type inputs_array inputs. - `kind: optional "3d" or "audio" or "document" or 4 more` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: optional string` The label displayed in the UI for this input - `maskFrom: optional string` The name of the file input field to use as the mask source - `max: optional number` The maximum allowed value. Only available for `number` and `array` input types. - `maxLength: optional number` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `maxSize: optional number` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time.
- `min: optional number` The minimum allowed value. Only available for `number` and array input types. - `minLength: optional number` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `modelTypes: optional array of "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: optional boolean` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: optional string` Placeholder text for the input. Only available for 'string' input type. - `prompt: optional boolean` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `promptSpark: optional boolean` Whether the input is used with prompt spark. Only available for `string` input type. 
- `ref: optional object { conditional, equal, name, node }` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: optional array of string` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. - `equal: optional string` This is the desired node output value if ref is an if/else node. - `name: optional string` The name of the input or output to reference. If the type is 'workflow', the name of the workflow input is required. If the type is 'node', the name is not mandatory, except if you want all outputs of the node. To get all outputs of a node, you can use the name 'all'. - `node: optional string` The node id or 'workflow' if the source is a workflow input. - `required: optional object { always, conditionalValues, ifDefined, ifNotDefined }` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value. By default, the input is not required.
- `always: optional boolean` Whether the input is always required - `conditionalValues: optional unknown` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `ifDefined: optional unknown` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `ifNotDefined: optional unknown` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: optional number` The step increment for numeric inputs. Only available for `number` input type. - `value: optional unknown` The value of the input. This is the value of the input that will be used to run the node. Only available for flows managed by a WorkflowJob. - `kind: optional "3d" or "audio" or "document" or 4 more` The asset kind of the input. Only taken into account for `file` and `file_array` input types. If the model accepts multiple kinds, the API will not be able to create the asset on the fly from a data URL unless it includes the `data:<kind>,` prefix - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `label: optional string` The label displayed in the UI for this input - `maskFrom: optional string` The name of the file input field to use as the mask source - `max: optional number` The maximum allowed value. Only available for `number` and `array` input types. - `maxLength: optional number` The maximum allowed length for `string` inputs. Also applies to each item in `string_array`. - `maxSize: optional number` The maximum allowed file size in bytes. Only applies to `file` and `file_array` input types. Validated against `asset.properties.size` at job creation time. - `min: optional number` The minimum allowed value.
Only available for `number` and array input types. - `minLength: optional number` The minimum allowed length for string inputs. Also applies to each item in `string_array`. - `modelTypes: optional array of "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The allowed model types for this input. Example: `["flux.1-lora"]`. Only available for `model_array` input type. - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `parent: optional boolean` Whether this input represents a parent asset to assign to the produced assets. Only available for `file` and `file_array` input types. For `file_array`, the parent asset is the first item in the array. - `placeholder: optional string` Placeholder text for the input. Only available for 'string' input type. - `prompt: optional boolean` Whether the input is a prompt. When true, displays as a text area with prompt spark feature. Only available for `string` input type. - `promptSpark: optional boolean` Whether the input is used with prompt spark. Only available for `string` input type. 
- `ref: optional object { conditional, equal, name, node }` The reference to another input or output of the same workflow. Must have at least one of node or conditional. - `conditional: optional array of string` The conditional nodes to reference. If the conditional nodes are successful, the node will be successful. If the conditional nodes are skipped, the node will be skipped. Contains an array of node ids used to check the status of the nodes. - `equal: optional string` This is the desired node output value if ref is an if/else node. - `name: optional string` The name of the input or output to reference. If `node` is 'workflow', the name is required and must be the name of a workflow input. If `node` is a node id, the name is not mandatory; to get all outputs of the node, use the name 'all'. - `node: optional string` The node id, or 'workflow' if the source is a workflow input. - `required: optional object { always, conditionalValues, ifDefined, ifNotDefined }` Set of rules that describes when this input is required: - `always`: Input is always required - `ifNotDefined`: Input is required when another specified input is not defined - `ifDefined`: Input is required when another specified input is defined - `conditionalValues`: Input is required when another input has a specific value By default, the input is not required. 
- `always: optional boolean` Whether the input is always required - `conditionalValues: optional unknown` Makes this input required when another input has a specific value: - Key: name of the input to check - Value: operation and allowed values that trigger the requirement - `ifDefined: optional unknown` Makes this input required when another input is defined: - Key: name of the input that must be defined - Value: message to display when this input is required - `ifNotDefined: optional unknown` Makes this input required when another input is not defined: - Key: name of the input that must be undefined - Value: message to display when this input is required - `step: optional number` The step increment for numeric inputs. Only available for `number` input type. - `value: optional unknown` The value of the input, used to run the node. Only available for flows managed by a WorkflowJob. - `items: optional array of string` Statically-configured items for a List node. The node outputs this array as-is when executed. Only available for List nodes. The values can be strings, numbers, or asset IDs. - `iterationIndex: optional number` Zero-based index of the iteration this node copy belongs to. Set on dynamically-created copies of loop body nodes. - `jobId: optional string` If the flow is part of a WorkflowJob, this is the jobId for the node. jobId is only available once the node has started; a node that is "Pending" in a running workflow job has not started. - `logic: optional object { cases, default, transform }` The logic of the node. Only available for logic nodes. - `cases: optional array of object { condition, value }` The cases of the logic. Only available for if/else nodes. - `condition: string` - `value: string` - `default: optional string` The default case of the logic. Contains the id/output of the node to execute if no case is matched. Only available for if/else nodes. - `transform: optional string` The transform of the logic. 
Only available for transform nodes. - `logicType: optional "if-else"` The type of the logic for the node. Only available for logic nodes. - `"if-else"` - `loopBodyNodeIds: optional array of string` IDs of the body template nodes that belong to this ForEach loop. At runtime these templates are cloned once per iteration and marked Skipped. Only available for ForEach nodes. - `loopNodeId: optional string` ID of the ForEach node that spawned this iteration copy. Set on dynamically-created copies of loop body nodes. - `modelId: optional string` The model id for the node. Mainly used for custom model tasks. - `output: optional unknown` The output of the node. Only available for logic nodes. - `workflowId: optional string` The workflow id for the node. Mainly used for workflow tasks. - `hint: optional string` Actionable hint for the user explaining what went wrong and how to resolve it. - `input: optional map[unknown]` The inputs for the job - `output: optional map[unknown]` May contain the output of the job for specific custom model jobs. Only available for custom models which generate non-asset outputs. Example: LLM text results. - `outputModelId: optional string` For voice-clone jobs: the ID of the model being trained. - `workflowId: optional string` The workflow ID of the job if the job is part of a workflow. - `workflowJobId: optional string` The workflow job ID of the job if the job is part of a workflow job. - `progress: number` Progress of the job (between 0 and 1) - `status: "canceled" or "failure" or "finalizing" or 5 more` The current status of the job - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `statusHistory: array of object { date, status }` The history of the different statuses the job went through, with the ISO string date of when the job reached each status. 
- `date: string` - `status: "canceled" or "failure" or "finalizing" or 5 more` - `"canceled"` - `"failure"` - `"finalizing"` - `"in-progress"` - `"pending"` - `"queued"` - `"success"` - `"warming-up"` - `updatedAt: string` The job last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `authorId: optional string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `billing: optional object { cuCost, cuDiscount }` The billing of the job - `cuCost: number` - `cuDiscount: number` - `ownerId: optional string` The owner ID (example: "team_U3Qmc8PCdWXwAQJ4Dvw4tV6D") ### Asset Delete Multiple Response - `AssetDeleteMultipleResponse = unknown` ### Asset Get Bulk Response - `AssetGetBulkResponse object { assets }` - `assets: array of object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image, already processed by the canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less the creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
The input has to be of the same (seamless) type. - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
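The `refinementSteps` rule above is easy to get backwards, so here is the arithmetic as a small helper. The function name is illustrative; the logic follows the description of `refinementSteps` verbatim:

```python
def refinement_passes(refinement_steps, scaling_factor):
    """Number of times the refinement process runs, per the rule above:
    (1 + refinementSteps) passes when scalingFactor == 1,
    refinementSteps passes when scalingFactor > 1."""
    if scaling_factor == 1:
        return 1 + refinement_steps
    return refinement_steps
```

So with `scalingFactor == 1` the refinement always runs at least once, even when `refinementSteps` is 0.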
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable output of the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
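Since `targetWidth` takes priority over `scalingFactor`, the resulting output width can be sketched like this. The helper is hypothetical and the rounding behaviour is an assumption; only the precedence comes from the schema above:

```python
def resolve_output_width(source_width, scaling_factor=None, target_width=None):
    """Sketch of the precedence described above: targetWidth wins over
    scalingFactor; with neither set, the source width is kept."""
    if target_width is not None:
        return target_width
    if scaling_factor is not None:
        # Assumption: non-integer results are rounded to the nearest pixel.
        return round(source_width * scaling_factor)
    return source_width
```

For example, a 512-px-wide source with `scalingFactor: 2` resolves to 1024, but adding `targetWidth: 2000` overrides that to 2000.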
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
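The `properties` object is kind-dependent: images carry `width`/`height`, audio and video carry `duration`/`bitrate`, meshes carry `vertexCount`/`faceCount`, and only `size` is always present. A hypothetical helper that renders a short summary from those fields (names follow the schema above; the helper itself is not part of the API):

```python
def summarize_properties(kind, props):
    """Illustrative only: build a short human-readable summary from the
    kind-specific fields of `asset.properties` described above."""
    if kind in ("image", "image-hdr"):
        return f"{props.get('width')}x{props.get('height')} px, {props['size']} bytes"
    if kind in ("audio", "video"):
        return f"{props.get('duration')} s at {props.get('bitrate')} bps"
    if kind == "3d":
        return f"{props.get('vertexCount')} vertices, {props.get('faceCount')} faces"
    # `size` is the only field guaranteed for every kind.
    return f"{props['size']} bytes"
```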
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description. It contains, in order of priority: - the manual description - the advanced captioning when the asset is used in a training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The url of the original file, without any conversion. Only available for some specific video, audio and threeD assets. Only specified if the given asset data has been replaced with a new file during the creation of the asset. 
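Because `originalFileUrl` is only present when the uploaded data was replaced by a converted file, a client that wants the untouched bytes can prefer it and fall back to the signed `url`. A hypothetical helper (field names follow the schema above):

```python
def pick_download_url(asset, prefer_original=True):
    """Sketch: return originalFileUrl when present and preferred,
    otherwise the signed `url` that is always available."""
    if prefer_original and asset.get("originalFileUrl"):
        return asset["originalFileUrl"]
    return asset["url"]
```

For assets that were never converted, `originalFileUrl` is absent and the helper simply returns the signed `url`.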
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Asset Retrieve Response - `AssetRetrieveResponse object { asset }` - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image, already processed by the canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
The input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable output of the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
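The `refinementSteps` rule above (applied `(1 + refinementSteps)` times when `scalingFactor == 1`, and `refinementSteps` times when `scalingFactor > 1`) can be sketched as a small helper. The function name is hypothetical, for illustration only:

```python
def refinement_passes(scaling_factor: float, refinement_steps: int) -> int:
    """Number of times the refinement process runs, per the documented rule.

    When scalingFactor == 1, refinement is applied (1 + refinementSteps) times;
    when scalingFactor > 1, it is applied refinementSteps times.
    """
    if scaling_factor == 1:
        return 1 + refinement_steps
    return refinement_steps
```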
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; the content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Asset Update Response - `AssetUpdateResponse object { asset }` - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 for the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId that may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
The input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable output of the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; the content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0 and is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Asset Duplicate Response - `AssetDuplicateResponse object { asset }` - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionIds this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by a canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less the creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by a depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
The input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` To invert the relief - `keypointThreshold: optional number` How polished is the surface? 
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
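The `refinementSteps` counting rule above can be restated as a tiny helper — a sketch of the documented rule only, not an API call:

```python
def refinement_pass_count(scaling_factor, refinement_steps):
    # Documented rule: when scalingFactor == 1 the refinement process is
    # applied (1 + refinementSteps) times; when scalingFactor > 1 it is
    # applied refinementSteps times.
    if scaling_factor == 1:
        return 1 + refinement_steps
    return refinement_steps
```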
- `resultContours: optional boolean` Whether to output the contours. - `resultImage: optional boolean` Whether to output the cut-out object. - `resultMask: optional boolean` Whether to return the masks (binary images) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images. 
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Condition the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This results in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image. 
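The style-related fields above (`styleImages`, `styleImagesFidelity`, `tileStyle`) travel together in practice. A hedged sketch of assembling such a metadata payload — the field names come from the schema, but the helper itself and the payload shape are hypothetical:

```python
def style_upscale_metadata(style_asset_ids, fidelity=0.5, tile_style=False):
    # styleImages must reference existing AssetIds; one image is usually
    # enough. tileStyle matches source tiles against style-image tiles
    # for a more coherent restyle.
    if not style_asset_ids:
        raise ValueError("at least one style image AssetId is required")
    return {
        "styleImages": style_asset_ids,
        "styleImagesFidelity": fidelity,
        "tileStyle": tile_style,
    }
```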
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if a skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb') 
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description, it will contain in priority: - the manual description - the advanced captioning when the asset is used in training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (ie: not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file. without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset. 
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0 and is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` ### Asset Lock Response - `AssetLockResponse object { asset, lockId }` - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionIds this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106 more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. 
Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 
0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId, which may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by a canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux LoRA trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style. - `creativityDecay: optional number` Amount of decay in creativity over the upscale process. 
The lower the value, the less the creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by a depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determine the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthen the similarity to the original image during the upscale. Default: optimized for your preset and style. - `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. 
Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` Whether to invert the relief - `keypointThreshold: optional number` How polished is the surface?
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable outputting the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images.
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Controls the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image.
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; the content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb')
- `frameRate: optional number` Frame rate of the video in frames per second - `hasAnimations: optional boolean` Whether the mesh has animations - `hasNormals: optional boolean` Whether the mesh has normal vectors - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton - `hasUVs: optional boolean` Whether the mesh has UV coordinates - `height: optional number` - `nbFrames: optional number` Number of frames in the video - `sampleRate: optional number` Sample rate of the media in Hz - `transcription: optional object { text }` Transcription of the audio - `text: string` - `vertexCount: optional number` Number of vertices in the mesh - `width: optional number` - `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` source of the asset - `"3d23d"` - `"3d23d:texture"` - `"3d:texture"` - `"3d:texture:albedo"` - `"3d:texture:metallic"` - `"3d:texture:mtl"` - `"3d:texture:normal"` - `"3d:texture:roughness"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-control-net"` - `"inference-control-net-img"` - `"inference-control-net-inpainting"` - `"inference-control-net-inpainting-ip-adapter"` - `"inference-control-net-ip-adapter"` - `"inference-control-net-reference"` - `"inference-control-net-texture"` - `"inference-img"` - `"inference-img-ip-adapter"` - `"inference-img-texture"` - `"inference-in-paint"` - `"inference-in-paint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt"` - `"inference-txt-ip-adapter"` - `"inference-txt-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture:albedo"` - `"texture:ao"` - `"texture:edge"` - `"texture:height"` - `"texture:metallic"` - 
`"texture:normal"` - `"texture:smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - `"video2video"` - `"voice-clone"` - `status: "error" or "pending" or "success"` The actual status - `"error"` - `"pending"` - `"success"` - `tags: array of string` The associated tags (example: ["sci-fi", "landscape"]) - `updatedAt: string` The asset last update date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `url: string` Signed URL to get the asset content - `automaticCaptioning: optional string` Automatic captioning of the asset - `description: optional string` The description. It will contain, in priority: - the manual description - the advanced captioning when the asset is used in a training flow - the automatic captioning - `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not Detection maps) - `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame. - `assetId: string` - `url: string` - `isHidden: optional boolean` Whether the asset is hidden. - `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame. - `assetId: string` - `url: string` - `nsfw: optional array of string` The NSFW labels - `originalFileUrl: optional string` The original file url. Contains the url of the original file, without any conversion. Only available for some specific video, audio and threeD assets. Is only specified if the given asset data has been replaced with a new file during the creation of the asset.
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer that starts at 0. It is used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0. - `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview. - `assetId: string` - `url: string` - `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail. - `assetId: string` - `url: string` - `lockId: optional string` The value of the lock to use when updating/unlocking the canvas. ### Asset List Snapshots Response - `AssetListSnapshotsResponse object { authorId, hash, rawData, takenAt }` - `authorId: string` - `hash: string` - `rawData: string` - `takenAt: number` ### Asset Update Tags Response - `AssetUpdateTagsResponse object { added, deleted }` - `added: array of string` The list of added tags - `deleted: array of string` The list of deleted tags ### Asset Unlock Response - `AssetUnlockResponse object { asset }` - `asset: object { id, authorId, collectionIds, 24 more }` - `id: string` The asset ID (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `authorId: string` The author user ID (example: "dcf121faaa1a0a0bbbd9ca1b73d62aea") - `collectionIds: array of string` A list of CollectionId this asset belongs to - `createdAt: string` The asset creation date as an ISO string (example: "2023-02-03T11:19:41.579Z") - `editCapabilities: array of "DETECTION" or "GENERATIVE_FILL" or "PIXELATE" or 8 more` List of edit capabilities - `"DETECTION"` - `"GENERATIVE_FILL"` - `"PIXELATE"` - `"PROMPT_EDITING"` - `"REFINE"` - `"REFRAME"` - `"REMOVE_BACKGROUND"` - `"SEGMENTATION"` - `"UPSCALE"` - `"UPSCALE_360"` - `"VECTORIZATION"` - `kind: "3d" or "audio" or "document" or 4 more` The kind of asset - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `metadata: object { kind, type, angular, 106
more }` Metadata of the asset with some additional information - `kind: "3d" or "audio" or "document" or 4 more` - `"3d"` - `"audio"` - `"document"` - `"image"` - `"image-hdr"` - `"json"` - `"video"` - `type: "3d-texture" or "3d-texture-albedo" or "3d-texture-metallic" or 72 more` The type of the asset. Ex: 'inference-txt2img' will represent an asset generated from a text to image model - `"3d-texture"` - `"3d-texture-albedo"` - `"3d-texture-metallic"` - `"3d-texture-mtl"` - `"3d-texture-normal"` - `"3d-texture-roughness"` - `"3d23d"` - `"3d23d-texture"` - `"audio2audio"` - `"audio2video"` - `"background-removal"` - `"canvas"` - `"canvas-drawing"` - `"canvas-export"` - `"detection"` - `"generative-fill"` - `"image-prompt-editing"` - `"img23d"` - `"img2img"` - `"img2video"` - `"inference-controlnet"` - `"inference-controlnet-img2img"` - `"inference-controlnet-inpaint"` - `"inference-controlnet-inpaint-ip-adapter"` - `"inference-controlnet-ip-adapter"` - `"inference-controlnet-reference"` - `"inference-controlnet-texture"` - `"inference-img2img"` - `"inference-img2img-ip-adapter"` - `"inference-img2img-texture"` - `"inference-inpaint"` - `"inference-inpaint-ip-adapter"` - `"inference-reference"` - `"inference-reference-texture"` - `"inference-txt2img"` - `"inference-txt2img-ip-adapter"` - `"inference-txt2img-texture"` - `"patch"` - `"pixelization"` - `"reframe"` - `"restyle"` - `"segment"` - `"segmentation-image"` - `"segmentation-mask"` - `"skybox-3d"` - `"skybox-base-360"` - `"skybox-hdri"` - `"texture"` - `"texture-albedo"` - `"texture-ao"` - `"texture-edge"` - `"texture-height"` - `"texture-metallic"` - `"texture-normal"` - `"texture-smoothness"` - `"txt23d"` - `"txt2audio"` - `"txt2img"` - `"txt2video"` - `"unknown"` - `"uploaded"` - `"uploaded-3d"` - `"uploaded-audio"` - `"uploaded-avatar"` - `"uploaded-video"` - `"upscale"` - `"upscale-skybox"` - `"upscale-texture"` - `"upscale-video"` - `"vectorization"` - `"video23d"` - `"video2audio"` - `"video2img"` - 
`"video2video"` - `"voice-clone"` - `angular: optional number` How angular is the surface? 0 is like a sphere, 1 is like a mechanical object - `aspectRatio: optional string` The optional aspect ratio given for the generation, only applicable for some models - `backgroundOpacity: optional number` Integer between 0 and 255 setting the opacity of the background in the result images. - `baseModelId: optional string` The baseModelId that may be changed at inference time - `bbox: optional array of number` A bounding box around the object of interest, in the format [x1, y1, x2, y2]. - `betterQuality: optional boolean` Remove small dark spots (i.e. “pepper”) and connect small bright cracks. - `cannyStructureImage: optional string` The control image already processed by canny detector. Must reference an existing AssetId. - `clustering: optional boolean` Activate clustering. - `colorCorrection: optional boolean` Ensure upscaled tiles have the same color histogram as the original tiles. - `colorMode: optional string` - `colorPrecision: optional number` - `concepts: optional array of object { modelId, scale, modelEpoch }` Flux Kontext LoRA to style the image. For Flux Kontext Prompt Editing. - `modelId: string` The model ID (example: "model_eyVcnFJcR92BxBkz7N6g5w") - `scale: number` The scale of the model (example: 1.0). For Flux Kontext Prompt Editing, the scale is between 0 and 2. - `modelEpoch: optional string` The epoch of the model (example: "000001"). Only available for Flux Lora Trained models - `contours: optional array of array of array of array of number` - `controlEnd: optional number` End step for control. - `copiedAt: optional string` The date when the asset was copied to a project - `cornerThreshold: optional number` - `creativity: optional number` Allow the generation of "hallucinations" during the upscale process, which adds additional details and deviates from the original image. Default: optimized for your preset and style.
- `creativityDecay: optional number` Amount of decay in creativity over the upscale process. The lower the value, the less creativity is preserved over the upscale process. - `defaultParameters: optional boolean` If true, use the default parameters - `depthFidelity: optional number` The depth fidelity if a depth image is provided - `depthImage: optional string` The control image processed by the depth estimator. Must reference an existing AssetId. - `detailsLevel: optional number` Amount of details to remove or add - `dilate: optional number` The number of pixels to dilate the result masks. - `factor: optional number` Contrast factor for Grayscale detector - `filterSpeckle: optional number` - `fractality: optional number` Determines the scale at which the upscale process works. - With a small value, the upscale works at the largest scale, resulting in fewer added details and more coherent images. Ideal for portraits, for example. - With a large value, the upscale works at the smallest scale, resulting in more added details and more hallucinations. Ideal for landscapes, for example. (info): A small value is slower and more expensive to run. - `geometryEnforcement: optional number` Apply extra control to the Skybox 360 geometry. The higher the value, the more the 360 geometry will influence the generated skybox image. Use with caution. Default is adapted to the other parameters. - `guidance: optional number` The guidance used to generate this asset - `halfMode: optional boolean` - `hdr: optional number` - `height: optional number` - `highThreshold: optional number` High threshold for Canny detector - `horizontalExpansionRatio: optional number` (deprecated) Horizontal expansion ratio. - `image: optional string` The input image to process. Must reference an existing AssetId or be a data URL. - `imageFidelity: optional number` Strengthens the similarity to the original image during the upscale. Default: optimized for your preset and style.
- `imageType: optional "seamfull" or "skybox" or "texture"` Preserve the seamless properties of skybox or texture images. Input has to be of the same type (seamless). - `"seamfull"` - `"skybox"` - `"texture"` - `inferenceId: optional string` The id of the Inference describing how this image was generated - `inputFidelity: optional "high" or "low"` When set to `high`, better preserves details from the input images in the output. This is especially useful when using images that contain elements like faces or logos that require accurate preservation in the generated image. You can provide multiple input images that will all be preserved with high fidelity, but keep in mind that the first image will be preserved with richer textures and finer details, so if you include elements such as faces, consider placing them in the first image. Only available for the `gpt-image-1` model. - `"high"` - `"low"` - `inputLocation: optional "bottom" or "left" or "middle" or 2 more` Location of the input image in the output. - `"bottom"` - `"left"` - `"middle"` - `"right"` - `"top"` - `invert: optional boolean` Whether to invert the relief - `keypointThreshold: optional number` How polished is the surface?
0 is like a rough surface, 1 is like a mirror - `layerDifference: optional number` - `lengthThreshold: optional number` - `lockExpiresAt: optional string` The ISO timestamp when the lock on the canvas will expire - `lowThreshold: optional number` Low threshold for Canny detector - `mask: optional string` The mask used for the asset generation or editing - `maxIterations: optional number` - `maxThreshold: optional number` Maximum threshold for Grayscale conversion - `minThreshold: optional number` Minimum threshold for Grayscale conversion - `modality: optional "canny" or "depth" or "grayscale" or 7 more` Modality to detect - `"canny"` - `"depth"` - `"grayscale"` - `"lineart_anime"` - `"mlsd"` - `"normal"` - `"pose"` - `"scribble"` - `"segmentation"` - `"sketch"` - `mode: optional string` - `modelId: optional string` The modelId used to generate this asset - `modelType: optional "custom" or "elevenlabs-voice" or "flux.1" or 34 more` The type of the generator used - `"custom"` - `"elevenlabs-voice"` - `"flux.1"` - `"flux.1-composition"` - `"flux.1-kontext-dev"` - `"flux.1-kontext-lora"` - `"flux.1-krea-dev"` - `"flux.1-krea-lora"` - `"flux.1-lora"` - `"flux.1-pro"` - `"flux.1.1-pro-ultra"` - `"flux.2-dev-edit-lora"` - `"flux.2-dev-lora"` - `"flux.2-klein-4b-edit-lora"` - `"flux.2-klein-4b-lora"` - `"flux.2-klein-9b-edit-lora"` - `"flux.2-klein-9b-lora"` - `"flux.2-klein-base-4b-edit-lora"` - `"flux.2-klein-base-4b-lora"` - `"flux.2-klein-base-9b-edit-lora"` - `"flux.2-klein-base-9b-lora"` - `"flux1.1-pro"` - `"gpt-image-1"` - `"qwen-image-2512-lora"` - `"qwen-image-edit-2509-lora"` - `"qwen-image-edit-2511-lora"` - `"qwen-image-edit-lora"` - `"qwen-image-lora"` - `"sd-1_5"` - `"sd-1_5-composition"` - `"sd-1_5-lora"` - `"sd-xl"` - `"sd-xl-composition"` - `"sd-xl-lora"` - `"zimage-de-turbo-lora"` - `"zimage-lora"` - `"zimage-turbo-lora"` - `name: optional string` - `nbMasks: optional number` - `negativePrompt: optional string` The negative prompt used to generate this 
asset - `negativePromptStrength: optional number` Controls the influence of the negative prompt. Default 0 means the negative prompt has no effect. Higher values increase negative prompt influence. Must be > 0 if negativePrompt is provided. - `numInferenceSteps: optional number` The number of denoising steps for each image generation. - `numOutputs: optional number` The number of outputs to generate. - `originalAssetId: optional string` - `outputIndex: optional number` - `overlapPercentage: optional number` Overlap percentage for the output image. - `overrideEmbeddings: optional boolean` Override the embeddings of the model. Only your prompt and negativePrompt will be used. Use with caution. - `parentId: optional string` - `parentJobId: optional string` - `pathPrecision: optional number` - `points: optional array of array of number` List of points (label, x, y) in the image where label = 0 for background and 1 for object. - `polished: optional number` How polished is the surface? 0 is like a rough surface, 1 is like a mirror - `preset: optional string` - `progressPercent: optional number` - `prompt: optional string` The prompt that guided the asset generation or editing - `promptFidelity: optional number` Increase the fidelity to the prompt during upscale. Default: optimized for your preset and style. - `raised: optional number` How raised is the surface? 0 is flat like water, 1 is like a very rough rock - `referenceImages: optional array of string` The reference images used for the asset generation or editing - `refinementSteps: optional number` Additional refinement steps before scaling. If scalingFactor == 1, the refinement process will be applied (1 + refinementSteps) times. If scalingFactor > 1, the refinement process will be applied refinementSteps times. - `removeBackground: optional boolean` Remove background for Grayscale detector - `resizeOption: optional number` Size proportion of the input image in the output. 
- `resultContours: optional boolean` Boolean to output the contours. - `resultImage: optional boolean` Boolean to enable outputting the cut-out object. - `resultMask: optional boolean` Boolean to enable returning the masks (binary image) in the response. - `rootParentId: optional string` - `saveFlipbook: optional boolean` Save a flipbook of the texture. Deactivated when the input texture is larger than 2048x2048px - `scalingFactor: optional number` Scaling factor (when `targetWidth` is not specified) - `scheduler: optional string` The scheduler used to generate this asset - `seed: optional string` The seed used to generate this asset. Can be a string or a number in some cases. - `sharpen: optional boolean` Sharpen tiles. - `shiny: optional number` How shiny is the surface? 0 is like a matte surface, 1 is like a diamond - `size: optional number` - `sketch: optional boolean` Activate sketch detection instead of canny. - `sourceProjectId: optional string` - `spliceThreshold: optional number` - `strength: optional number` The strength. Only available for the `flux-kontext` LoRA model. - `structureFidelity: optional number` Strength for the input image structure preservation - `structureImage: optional string` The control image for structure. A canny detector will be applied to this image. Must reference an existing AssetId. - `style: optional "3d-cartoon" or "3d-rendered" or "anime" or 23 more` - `"3d-cartoon"` - `"3d-rendered"` - `"anime"` - `"cartoon"` - `"cinematic"` - `"claymation"` - `"cloud-skydome"` - `"comic"` - `"cyberpunk"` - `"enchanted"` - `"fantasy"` - `"ink"` - `"manga"` - `"manga-color"` - `"minimalist"` - `"neon-tron"` - `"oil-painting"` - `"pastel"` - `"photo"` - `"photography"` - `"psychedelic"` - `"retro-fantasy"` - `"scifi-concept-art"` - `"space"` - `"standard"` - `"whimsical"` - `styleFidelity: optional number` The higher the value, the more it will look like the style image(s) - `styleImages: optional array of string` List of style images.
Most of the time, only one image is enough. They must be existing AssetIds. - `styleImagesFidelity: optional number` Controls the influence of the style image(s). The higher the value, the more the style images will influence the upscaled image. - `targetHeight: optional number` The target height of the output image. - `targetWidth: optional number` Target width for the upscaled image; takes priority over the scaling factor - `text: optional string` A textual description / keywords describing the object of interest. - `texture: optional string` The asset to convert into texture maps. Must reference an existing AssetId. - `thumbnail: optional object { assetId, url }` The thumbnail of the canvas - `assetId: string` The AssetId of the image used as a thumbnail for the canvas (example: "asset_GTrL3mq4SXWyMxkOHRxlpw") - `url: string` The url of the image used as a thumbnail for the canvas - `tileStyle: optional boolean` If set to true, during the upscaling process, the model will match tiles of the source image with tiles of the style image(s). This will result in a more coherent restyle. Works best with style images that have a similar composition. - `trainingImage: optional boolean` - `verticalExpansionRatio: optional number` (deprecated) Vertical expansion ratio. - `width: optional number` The width of the rendered image.
- `mimeType: string` The mime type of the asset (example: "image/png") - `ownerId: string` The owner (project) ID (example: "proj_23tlk332lkht3kl2" or "team_dlkhgs23tlk3hlkth32lkht3kl2" for old teams) - `privacy: "private" or "public" or "unlisted"` The privacy of the asset - `"private"` - `"public"` - `"unlisted"` - `properties: object { size, animationFrameCount, bitrate, 20 more }` The properties of the asset; the content may depend on the kind of asset returned - `size: number` - `animationFrameCount: optional number` Number of animation frames if animations exist - `bitrate: optional number` Bitrate of the media in bits per second - `boneCount: optional number` Number of bones if skeleton exists - `channels: optional number` Number of channels of the audio - `classification: optional "effect" or "interview" or "music" or 5 more` Classification of the audio - `"effect"` - `"interview"` - `"music"` - `"other"` - `"sound"` - `"speech"` - `"text"` - `"unknown"` - `codecName: optional string` Codec name of the media - `description: optional string` Description of the audio - `dimensions: optional array of number` Bounding box dimensions [width, height, depth] - `duration: optional number` Duration of the media in seconds - `faceCount: optional number` Number of faces/triangles in the mesh - `format: optional string` Format of the mesh file (e.g. 'glb')
  - `frameRate: optional number` Frame rate of the video in frames per second
  - `hasAnimations: optional boolean` Whether the mesh has animations
  - `hasNormals: optional boolean` Whether the mesh has normal vectors
  - `hasSkeleton: optional boolean` Whether the mesh has bones/skeleton
  - `hasUVs: optional boolean` Whether the mesh has UV coordinates
  - `height: optional number`
  - `nbFrames: optional number` Number of frames in the video
  - `sampleRate: optional number` Sample rate of the media in Hz
  - `transcription: optional object { text }` Transcription of the audio
    - `text: string`
  - `vertexCount: optional number` Number of vertices in the mesh
  - `width: optional number`
- `source: "3d23d" or "3d23d:texture" or "3d:texture" or 72 more` Source of the asset. Possible values: `"3d23d"`, `"3d23d:texture"`, `"3d:texture"`, `"3d:texture:albedo"`, `"3d:texture:metallic"`, `"3d:texture:mtl"`, `"3d:texture:normal"`, `"3d:texture:roughness"`, `"audio2audio"`, `"audio2video"`, `"background-removal"`, `"canvas"`, `"canvas-drawing"`, `"canvas-export"`, `"detection"`, `"generative-fill"`, `"image-prompt-editing"`, `"img23d"`, `"img2img"`, `"img2video"`, `"inference-control-net"`, `"inference-control-net-img"`, `"inference-control-net-inpainting"`, `"inference-control-net-inpainting-ip-adapter"`, `"inference-control-net-ip-adapter"`, `"inference-control-net-reference"`, `"inference-control-net-texture"`, `"inference-img"`, `"inference-img-ip-adapter"`, `"inference-img-texture"`, `"inference-in-paint"`, `"inference-in-paint-ip-adapter"`, `"inference-reference"`, `"inference-reference-texture"`, `"inference-txt"`, `"inference-txt-ip-adapter"`, `"inference-txt-texture"`, `"patch"`, `"pixelization"`, `"reframe"`, `"restyle"`, `"segment"`, `"segmentation-image"`, `"segmentation-mask"`, `"skybox-3d"`, `"skybox-base-360"`, `"skybox-hdri"`, `"texture"`, `"texture:albedo"`, `"texture:ao"`, `"texture:edge"`, `"texture:height"`, `"texture:metallic"`, `"texture:normal"`, `"texture:smoothness"`, `"txt23d"`, `"txt2audio"`, `"txt2img"`, `"txt2video"`, `"unknown"`, `"uploaded"`, `"uploaded-3d"`, `"uploaded-audio"`, `"uploaded-avatar"`, `"uploaded-video"`, `"upscale"`, `"upscale-skybox"`, `"upscale-texture"`, `"upscale-video"`, `"vectorization"`, `"video23d"`, `"video2audio"`, `"video2img"`, `"video2video"`, `"voice-clone"`
- `status: "error" or "pending" or "success"` The current status of the asset. Possible values: `"error"`, `"pending"`, `"success"`
- `tags: array of string` The associated tags (example: ["sci-fi", "landscape"])
- `updatedAt: string` The asset's last update date as an ISO string (example: "2023-02-03T11:19:41.579Z")
- `url: string` Signed URL to get the asset content
- `automaticCaptioning: optional string` Automatic captioning of the asset
- `description: optional string` The description. It contains, in order of priority: the manual description; the advanced captioning when the asset is used in a training flow; the automatic captioning
- `embedding: optional array of number` The embedding of the asset when requested. Only available when an asset can be embedded (i.e. not detection maps)
- `firstFrame: optional object { assetId, url }` The video asset's first frame. Contains the assetId and the url of the first frame.
  - `assetId: string`
  - `url: string`
- `isHidden: optional boolean` Whether the asset is hidden
- `lastFrame: optional object { assetId, url }` The video asset's last frame. Contains the assetId and the url of the last frame.
  - `assetId: string`
  - `url: string`
- `nsfw: optional array of string` The NSFW labels
- `originalFileUrl: optional string` The URL of the original file, without any conversion. Only available for some specific video, audio, and 3D assets. Only specified if the given asset data was replaced with a new file during the creation of the asset.
- `outputIndex: optional number` The output index of the asset within a job. This index is a positive integer starting at 0, used to differentiate between multiple outputs of the same job. If the job has only one output, this index is 0.
- `preview: optional object { assetId, url }` The asset's preview. Contains the assetId and the url of the preview.
  - `assetId: string`
  - `url: string`
- `thumbnail: optional object { assetId, url }` The asset's thumbnail. Contains the assetId and the url of the thumbnail.
  - `assetId: string`
  - `url: string`

# Download

## Request Batch

**post** `/assets/download`

Request a link to batch download assets (batches are limited to 1000 assets).

### Body Parameters

- `options: object { fileNameTemplate, flat }`
  - `fileNameTemplate: string` A file naming convention as a string with the following available parameters: (seed used to generate the asset) (index of the asset in the inference) (prompt of the inference) (prompt of the generator) Example: "---"
  - `flat: optional boolean` Flag to prevent grouping assets in directories and store them flat
- `query: object { assetIds, inferenceIds, modelIds }`
  - `assetIds: array of string` Every individual asset specified will be included in the archive
  - `inferenceIds: array of string` All assets issued from the provided inference ids will be included in the archive
  - `modelIds: array of string` All assets issued from the provided model ids will be included in the archive

### Returns

- `jobId: string` The job id associated with the download request

### Example

```http
curl https://api.cloud.scenario.com/v1/assets/download \
  -H 'Content-Type: application/json' \
  -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" \
  -d '{
    "options": {
      "fileNameTemplate": "fileNameTemplate"
    },
    "query": {
      "assetIds": ["string"],
      "inferenceIds": ["string"],
      "modelIds": ["string"]
    }
  }'
```

#### Response

```json
{
  "jobId": "jobId"
}
```

## Get Status

**get** `/assets/download/{jobId}`

Retrieve the status and the url of a batch download assets request.

### Path Parameters

- `jobId: string`

### Returns

- `jobId: string` The job id associated with the download request
- `jobStatus: string` The current job status
- `downloadUrl: optional string` The download url

### Example

```http
curl https://api.cloud.scenario.com/v1/assets/download/$JOB_ID \
  -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET"
```

#### Response

```json
{
  "jobId": "jobId",
  "jobStatus": "jobStatus",
  "downloadUrl": "downloadUrl"
}
```

## Request

**post** `/assets/{assetId}/download`

Request a link to download the given `assetId` in the given `targetFormat`.

### Path Parameters

- `assetId: string`

### Body Parameters

- `targetFormat: optional "gif" or "heif" or "jpeg" or 10 more` The format to download the asset in. Possible values: `"gif"`, `"heif"`, `"jpeg"`, `"jpg"`, `"png"`, `"svg"`, `"webp"`, `"avif"`, `"tif"`, `"tiff"`, `"glb"`, `"fbx"`, `"obj"`

### Returns

- `url: string` The signed URL to download the asset in the given format

### Example

```http
curl https://api.cloud.scenario.com/v1/assets/$ASSET_ID/download \
  -H 'Content-Type: application/json' \
  -u "$SCENARIO_SDK_API_KEY:$SCENARIO_SDK_API_SECRET" \
  -d '{}'
```

#### Response

```json
{
  "url": "url"
}
```

## Domain Types

### Download Request Batch Response

- `DownloadRequestBatchResponse object { jobId }`
  - `jobId: string` The job id associated with the download request

### Download Get Status Response

- `DownloadGetStatusResponse object { jobId, jobStatus, downloadUrl }`
  - `jobId: string` The job id associated with the download request
  - `jobStatus: string` The current job status
  - `downloadUrl: optional string` The download url

### Download Request Response

- `DownloadRequestResponse object { url }`
  - `url: string` The signed URL to download the asset in the given format
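The two-step batch flow above (POST `/assets/download` to get a `jobId`, then poll `/assets/download/{jobId}` until the optional `downloadUrl` appears) can be sketched in Python. This is a minimal sketch, not an official client: the helper names (`build_batch_payload`, `api_call`, `download_assets`) are illustrative, HTTP Basic auth with the key/secret pair is assumed from the curl examples, and the `fileNameTemplate` placeholder token syntax is not shown in this reference.

```python
# Sketch of the batch-download flow. Assumes Basic auth with the
# key/secret pair shown in the curl examples; helper names are
# illustrative, not part of the API.
import base64
import json
import time
import urllib.request

API_ROOT = "https://api.cloud.scenario.com/v1"


def build_batch_payload(asset_ids, file_name_template):
    """JSON body for POST /assets/download (template token syntax not shown here)."""
    return {
        "options": {"fileNameTemplate": file_name_template, "flat": True},
        "query": {"assetIds": asset_ids, "inferenceIds": [], "modelIds": []},
    }


def api_call(method, path, key, secret, body=None):
    """Send a Basic-auth JSON request and return the decoded JSON response."""
    token = base64.b64encode(f"{key}:{secret}".encode()).decode()
    req = urllib.request.Request(
        f"{API_ROOT}{path}",
        data=json.dumps(body).encode() if body is not None else None,
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "application/json",
        },
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def download_assets(asset_ids, key, secret, poll_seconds=2.0, max_polls=60):
    """POST the batch request, then poll GET /assets/download/{jobId}
    until the optional downloadUrl field appears in the status response."""
    job = api_call("POST", "/assets/download", key, secret,
                   build_batch_payload(asset_ids, "assets"))  # placeholder template
    for _ in range(max_polls):
        status = api_call("GET", f"/assets/download/{job['jobId']}", key, secret)
        if status.get("downloadUrl"):
            return status["downloadUrl"]
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {job['jobId']} did not finish in time")
```

Polling stops as soon as `downloadUrl` is present, since the reference marks it optional and does not enumerate `jobStatus` values; a production client would also inspect `jobStatus` for error states.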