The Wan image editing model series supports multi-image input and output. You can use text instructions to perform tasks such as image editing, multi-image fusion, subject feature preservation, and object detection and segmentation.
Getting started
This example demonstrates how to use wan2.7-image-pro to generate an edited image from two input images and a text prompt.
Prompt: Spray the graffiti from image 2 onto the car in image 1
Input image 1 | Input image 2 | Output image (wan2.7-image-pro)
(Example images omitted.)
Before making a call, get an API key and export the API key as an environment variable. To make calls using the SDK, install the DashScope SDK.
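On Linux or macOS, the setup can look like the following sketch (sk-xxx is a placeholder, not a real key):

```shell
# Export your Model Studio API key so the SDK can read it.
# Replace sk-xxx with your actual key; keys differ by region.
export DASHSCOPE_API_KEY="sk-xxx"
# Install or upgrade the DashScope Python SDK with:
# pip install -U dashscope
echo "$DASHSCOPE_API_KEY"
```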
Synchronous call
Ensure that the DashScope Python SDK version is 1.25.15 or later, and the DashScope Java SDK version is 2.22.13 or later.
Python
Request Example
import os
import dashscope
from dashscope.aigc.image_generation import ImageGeneration
from dashscope.api_entities.dashscope_response import Message
# Base URL for the Singapore region. Base URLs differ by region.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
# If you have not set an environment variable, replace the next line with: api_key="sk-xxx"
# API keys differ by region. Get your API key: https://www.alibabacloud.com/help/zh/model-studio/get-api-key
api_key = os.getenv("DASHSCOPE_API_KEY")
message = Message(
role="user",
# Supports local files, such as "image": "file://car.png"
content=[
{
"text": "Apply the graffiti from image 2 onto the car in image 1"
},
{
"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/pjeqdf/car.webp"
},
{
"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/xsunlm/paint.webp"
}
]
)
print("----Synchronous call. Please wait a moment----")
rsp = ImageGeneration.call(
model='wan2.7-image-pro',
api_key=api_key,
messages=[message],
watermark=False,
n=1,
size="2K"
)
print(rsp)
Response Example
The image URL expires in 24 hours. Download the image promptly.
{
"status_code": 200,
"request_id": "81d868c6-6ce1-92d8-a90d-d2ee71xxxxxx",
"code": "",
"message": "",
"output": {
"text": null,
"finish_reason": null,
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": [
{
"image": "https://dashscope-result-bj.oss-cn-beijing.aliyuncs.com/xxxxxx.png?Expires=xxxxxx",
"type": "image"
}
]
}
}
],
"audio": null,
"finished": true
},
"usage": {
"input_tokens": 18790,
"output_tokens": 2,
"characters": 0,
"image_count": 1,
"size": "2985*1405",
"total_tokens": 18792
}
}
Java
Request Example
import com.alibaba.dashscope.aigc.imagegeneration.*;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.Constants;
import com.alibaba.dashscope.utils.JsonUtils;
import java.util.Arrays;
import java.util.Collections;
/**
* wan2.7-image-pro Image Editing - Synchronous Call Example
*/
public class Main {
static {
// This is the base URL for the Singapore region. Base URLs vary by region.
Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
}
// If you have not configured the environment variable, replace the following line with: apiKey="sk-xxx" using your Model Studio API key.
// API keys vary by region. Get an API key: https://www.alibabacloud.com/help/zh/model-studio/get-api-key
static String apiKey = System.getenv("DASHSCOPE_API_KEY");
public static void basicCall() throws ApiException, NoApiKeyException, UploadFileException {
// Build a multi-image input message.
ImageGenerationMessage message = ImageGenerationMessage.builder()
.role("user")
.content(Arrays.asList(
// Supports multi-image input. Provide multiple reference images.
Collections.singletonMap("text", "Spray the graffiti from image 2 onto the car in image 1"),
Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/pjeqdf/car.webp"),
Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/xsunlm/paint.webp")
)).build();
// Image editing uses a regular synchronous call. No need to set stream and enable_interleave.
ImageGenerationParam param = ImageGenerationParam.builder()
.apiKey(apiKey)
.model("wan2.7-image-pro")
.messages(Collections.singletonList(message))
.n(1)
.size("2K")
.build();
ImageGeneration imageGeneration = new ImageGeneration();
ImageGenerationResult result = null;
try {
System.out.println("---Synchronous call for image editing. Please wait a moment.----");
result = imageGeneration.call(param);
} catch (ApiException | NoApiKeyException | UploadFileException e) {
throw new RuntimeException(e.getMessage());
}
System.out.println(JsonUtils.toJson(result));
}
public static void main(String[] args) {
try {
basicCall();
} catch (ApiException | NoApiKeyException | UploadFileException e) {
System.out.println(e.getMessage());
}
}
}
Response Example
The URL is valid for 24 hours. Save it promptly.
{
"requestId": "1bf6173a-e8de-9f75-94d3-5e618f875xxx",
"usage": {
"input_tokens": 18790,
"output_tokens": 2,
"total_tokens": 18792,
"image_count": 1,
"size": "2985*1405"
},
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": [
{
"image": "https://dashscope-result-bj.oss-cn-beijing.aliyuncs.com/xxxxxx.png?Expires=xxxxxx",
"type": "image"
}
]
}
}
],
"finished": true
},
"status_code": 200,
"code": "",
"message": ""
}
curl
Request example
curl --location 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--data '{
"model": "wan2.7-image-pro",
"input": {
"messages": [
{
"role": "user",
"content": [
{"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/pjeqdf/car.webp"},
{"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/xsunlm/paint.webp"},
{"text": "Spray-paint the graffiti from image 2 onto the car in image 1"}
]
}
]
},
"parameters": {
"size": "2K",
"n": 1,
"watermark": false,
"thinking_mode": true
}
}'
Response example
{
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"content": [
{
"image": "https://dashscope-xxx.oss-xxx.aliyuncs.com/xxx.png?Expires=xxx",
"type": "image"
}
],
"role": "assistant"
}
}
],
"finished": true
},
"usage": {
"image_count": 1,
"input_tokens": 10867,
"output_tokens": 2,
"size": "1488*704",
"total_tokens": 10869
},
"request_id": "71dfc3c6-f796-9972-97e4-bc4efc4faxxx"
}
Asynchronous call
Ensure that the DashScope Python SDK version is 1.25.15 or later, and the DashScope Java SDK version is 2.22.13 or later.
Python
Request Example
import os
import dashscope
from dashscope.aigc.image_generation import ImageGeneration
from dashscope.api_entities.dashscope_response import Message
from http import HTTPStatus
# Base URL for the Singapore region. Base URLs differ by region.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
# If you have not set an environment variable, replace the next line with: api_key="sk-xxx"
# API keys differ by region. Get your API key: https://www.alibabacloud.com/help/zh/model-studio/get-api-key
api_key = os.getenv("DASHSCOPE_API_KEY")
# Create an asynchronous task
def create_async_task():
print("Creating async task...")
message = Message(
role="user",
content=[
{'text': 'Apply the graffiti from image 2 onto the car in image 1'},
{'image': 'https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/pjeqdf/car.webp'},
{'image': 'https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/xsunlm/paint.webp'}
]
)
response = ImageGeneration.async_call(
model="wan2.7-image",
api_key=api_key,
messages=[message],
watermark=False,
n=1,
size="2K"
)
if response.status_code == 200:
print("Task created successfully:", response)
return response  # The response contains the task ID
else:
raise Exception(f"Failed to create task: {response.code} - {response.message}")
# Wait for the task to complete
def wait_for_completion(task_response):
print("Waiting for task completion...")
status = ImageGeneration.wait(task=task_response, api_key=api_key)
if status.output.task_status == "SUCCEEDED":
print("Task succeeded!")
print("Response:", status)
else:
raise Exception(f"Task failed with status: {status.output.task_status}")
# Fetch the status of an asynchronous task
def fetch_task_status(task):
print("Fetching task status...")
status = ImageGeneration.fetch(task=task, api_key=api_key)
if status.status_code == HTTPStatus.OK:
print("Task status:", status.output.task_status)
print("Response details:", status)
else:
print(f"Failed to fetch status: {status.code} - {status.message}")
# Cancel an asynchronous task
def cancel_task(task):
print("Canceling task...")
response = ImageGeneration.cancel(task=task, api_key=api_key)
if response.status_code == HTTPStatus.OK:
print("Task canceled successfully:", response.output.task_status)
else:
print(f"Failed to cancel task: {response.code} - {response.message}")
# Main execution flow
if __name__ == "__main__":
task = create_async_task()
wait_for_completion(task)
Response Example
1. Response when creating a task
{
"status_code": 200,
"request_id": "4fb3050f-de57-4a24-84ff-e37ee5xxxxxx",
"code": "",
"message": "",
"output": {
"text": null,
"finish_reason": null,
"choices": null,
"audio": null,
"task_id": "127ec645-118f-4884-955d-0eba8dxxxxxx",
"task_status": "PENDING"
},
"usage": {
"input_tokens": 0,
"output_tokens": 0,
"characters": 0
}
}
2. Response when fetching task results
The image URL expires in 24 hours. Download the image promptly.
{
"status_code": 200,
"request_id": "b2a7fab4-5e00-4b0a-86fe-8b9964xxxxxx",
"code": null,
"message": "",
"output": {
"text": null,
"finish_reason": null,
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": [
{
"image": "https://dashscope-result-bj.oss-cn-beijing.aliyuncs.com/xxxxxx.png?Expires=xxxxxx",
"type": "image"
}
]
}
}
],
"audio": null,
"task_id": "127ec645-118f-4884-955d-0eba8xxxxxx",
"task_status": "SUCCEEDED",
"submit_time": "2026-01-09 17:52:04.136",
"scheduled_time": "2026-01-09 17:52:04.164",
"end_time": "2026-01-09 17:52:25.408",
"finished": true
},
"usage": {
"input_tokens": 0,
"output_tokens": 0,
"characters": 0,
"size": "1376*768",
"total_tokens": 0,
"image_count": 1
}
}
Java
Request Example
import com.alibaba.dashscope.aigc.imagegeneration.*;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.Constants;
import com.alibaba.dashscope.utils.JsonUtils;
import java.util.Arrays;
import java.util.Collections;
/**
* wan2.7-image-pro Image Editing - Asynchronous Call Example
*/
public class Main {
static {
// This is the base URL for the Singapore region. Base URLs vary by region.
Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
}
// If you have not configured the environment variable, replace the following line with: apiKey="sk-xxx" using your Model Studio API key.
// API keys vary by region. Get an API key: https://www.alibabacloud.com/help/zh/model-studio/get-api-key
static String apiKey = System.getenv("DASHSCOPE_API_KEY");
public static void asyncCall() throws ApiException, NoApiKeyException, UploadFileException {
// Build a multi-image input message.
ImageGenerationMessage message = ImageGenerationMessage.builder()
.role("user")
.content(Arrays.asList(
// Supports multi-image input. Provide multiple reference images.
Collections.singletonMap("text", "Spray the graffiti from image 2 onto the car in image 1"),
Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/pjeqdf/car.webp"),
Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/xsunlm/paint.webp")
)).build();
ImageGenerationParam param = ImageGenerationParam.builder()
.apiKey(apiKey)
.model("wan2.7-image-pro")
.n(1)
.size("2K")
.messages(Arrays.asList(message))
.build();
ImageGeneration imageGeneration = new ImageGeneration();
ImageGenerationResult result = null;
try {
System.out.println("---Asynchronous call for image editing. Creating task.----");
result = imageGeneration.asyncCall(param);
} catch (ApiException | NoApiKeyException | UploadFileException e) {
throw new RuntimeException(e.getMessage());
}
System.out.println("Task creation result:");
System.out.println(JsonUtils.toJson(result));
String taskId = result.getOutput().getTaskId();
// Wait for task completion.
waitTask(taskId);
}
public static void waitTask(String taskId) throws ApiException, NoApiKeyException {
ImageGeneration imageGeneration = new ImageGeneration();
System.out.println("\n---Waiting for task completion----");
ImageGenerationResult result = imageGeneration.wait(taskId, apiKey);
System.out.println("Task completion result:");
System.out.println(JsonUtils.toJson(result));
}
public static void main(String[] args) {
try {
asyncCall();
} catch (ApiException | NoApiKeyException | UploadFileException e) {
System.out.println(e.getMessage());
}
}
}
Response Example
1. Response example for task creation
{
"requestId": "ccf4b2f4-bf30-9e13-9461-3a28c6a7bxxx",
"output": {
"task_id": "8811b4a4-00ac-4aa2-a2fd-017d3b90cxxx",
"task_status": "PENDING"
},
"status_code": 200,
"code": "",
"message": ""
}
2. Response example for querying task results
The URL is valid for 24 hours. Save it promptly.
{
"requestId": "60a08540-f1c1-9e76-8cd3-d5949db8cxxx",
"usage": {
"input_tokens": 18711,
"output_tokens": 2,
"total_tokens": 18713,
"image_count": 1,
"size": "2985*1405"
},
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": [
{
"image": "https://dashscope-result-bj.oss-cn-beijing.aliyuncs.com/xxxxxx.png?Expires=xxxxxx",
"type": "image"
}
]
}
}
],
"task_id": "8811b4a4-00ac-4aa2-a2fd-017d3b90cxxx",
"task_status": "SUCCEEDED",
"finished": true,
"submit_time": "2026-03-31 19:57:58.840",
"scheduled_time": "2026-03-31 19:57:58.877",
"end_time": "2026-03-31 19:58:11.563"
},
"status_code": 200,
"code": "",
"message": ""
}
curl
Step 1: Create a task to get the task ID
curl --location 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/image-generation/generation' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "X-DashScope-Async: enable" \
--data '{
"model": "wan2.7-image-pro",
"input": {
"messages": [
{
"role": "user",
"content": [
{"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/pjeqdf/car.webp"},
{"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20251229/xsunlm/paint.webp"},
{"text": "Spray-paint the graffiti from image 2 onto the car in image 1"}
]
}
]
},
"parameters": {
"size": "2K",
"n": 1,
"watermark": false,
"thinking_mode": true
}
}'
Response example
{
"output": {
"task_status": "PENDING",
"task_id": "0385dc79-5ff8-4d82-bcb6-xxxxxx"
},
"request_id": "4909100c-7b5a-9f92-bfe5-xxxxxx"
}
Step 2: Query the result by task ID
Use the task_id obtained in the previous step to poll the task status through the API until the task_status becomes SUCCEEDED or FAILED.
Replace {task_id} with the task_id value returned by the previous API call. task_id is valid for queries within 24 hours.
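The polling loop can also be scripted. A sketch using only the Python standard library (`poll_task` and `TERMINAL_STATUSES` are helper names introduced here; the endpoint and header mirror the curl call in this section):

```python
import json
import os
import time
import urllib.request

# Statuses after which polling can stop. SUCCEEDED and FAILED come from the
# text above; CANCELED is an assumption for tasks canceled by the caller.
TERMINAL_STATUSES = {"SUCCEEDED", "FAILED", "CANCELED"}

def is_terminal(task_status):
    """Return True once the task has reached a final state."""
    return task_status in TERMINAL_STATUSES

def poll_task(task_id, interval=5, timeout=300):
    """Poll the task-query endpoint until the task finishes or times out."""
    url = f"https://dashscope-intl.aliyuncs.com/api/v1/tasks/{task_id}"
    headers = {"Authorization": f"Bearer {os.getenv('DASHSCOPE_API_KEY')}"}
    deadline = time.time() + timeout
    while time.time() < deadline:
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        if is_terminal(data["output"]["task_status"]):
            return data
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} did not finish within {timeout} s")
```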
curl -X GET https://dashscope-intl.aliyuncs.com/api/v1/tasks/{task_id} \
--header "Authorization: Bearer $DASHSCOPE_API_KEY"
Response example
The image URL is valid for 24 hours. Download the image promptly.
{
"request_id": "810fa5f5-334c-91f3-aaa4-ed89cf0caxxx",
"output": {
"task_id": "a81ee7cb-014c-473d-b842-76e98311cxxx",
"task_status": "SUCCEEDED",
"submit_time": "2026-03-26 17:16:01.663",
"scheduled_time": "2026-03-26 17:16:01.716",
"end_time": "2026-03-26 17:16:22.961",
"finished": true,
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": [
{
"image": "https://dashscope-xxx.oss-xxx.aliyuncs.com/xxx.png?Expires=xxx",
"type": "image"
}
]
}
}
]
},
"usage": {
"size": "2976*1408",
"total_tokens": 11017,
"image_count": 1,
"output_tokens": 2,
"input_tokens": 11015
}
}
The wan2.5-i2i-preview model uses different API endpoints and parameter input methods; see the API reference section below.
Model selection
wan2.7-image-pro and wan2.7-image (Recommended): Suitable for scenarios that require high editing precision or the generation of multiple coherent images.
Precise local editing: Select a specific area in an image to move, replace, or add new elements. This capability is ideal for e-commerce image retouching and design draft adjustments.
Multi-panel image generation: Generate multiple images with a consistent style in a single call. This capability is ideal for comic strips, product series images, and storyboards.
wan2.6-image: Suitable for stylized editing scenarios that involve mixed text and images or multiple reference images. This model supports generating corresponding text content when creating images and accepts up to four reference images.
wan2.5-i2i-preview: Suitable for simple image editing and multi-image fusion.
For the input and output specifications of each model, see Input image specifications and Output image resolution.
Demo gallery
Image-to-image set
Input image | Output image
(Example images omitted.)
Interactive editing
Input image | Output image (example images omitted)
Prompt: Edit based on image 1. Replace the raspberry selected in box 1 with a lemon, the raspberry in box 2 with a strawberry, and the raspberry in box 3 with a blueberry. The result should be harmoniously integrated with the original image, without the reference boxes and numbers, and keep the rest of the content unchanged.
Prompt: Place the selected pattern from image 1 into the selected area in image 2.
Multi-image fusion
Input image | Output image (example images omitted)
Prompt: Take a portrait of the boy from image 1 and the dog from image 2. The boy is hugging the dog, and both are very happy. Studio soft lighting, blue textured background.
Prompt: Recolor the dress from image 1 using the colors of the bird in image 2. Make it artistic, but keep the style of the dress and the model unchanged.
Subject feature preservation
Input image | Output image (example images omitted)
Prompt: Keep the person's facial features and hairstyle unchanged. The person is wearing an off-white camisole. A transparent fish tank fills the entire screen, with goldfish swimming and bubbles in the water. The person's face is visible through the transparent fish tank and water. A dim yellow light shines on the person's face from the bottom right. The swimming fish randomly obscure the person, creating an interplay of light and shadow.
Prompt: Please generate a set of four Polaroid photos with the theme "Seasonal Changes". Each photo is taken at the same location, under a tree in a park, but shows the scenes of spring, summer, autumn, and winter respectively. The person's attire should also match the season: a light jacket in spring, a short-sleeved shirt in summer, a trench coat in autumn, and a scarf and thick coat in winter. Place this set of photos on a dining table.
Detection and segmentation
Input image | Output image (example images omitted)
Prompt: Detect the laptop and alarm clock in the image, draw bounding boxes, and label them "laptop" and "clock".
Prompt: Segment the glass cup in the image.
Extract elements
Input image | Output image (example images omitted)
Prompt: Cut out the main person and place them on a pure white background.
Text editing
Input image | Output image (example images omitted)
Prompt: Remove all watermarks from the image.
Prompt: Casually write "Time for Holiday?" on the sand with a hand.
Prompt: Change 18 to 29 and JUNE to SEPTEMBER.
Camera and perspective editing
Input image | Output image (example images omitted)
Prompt: Keep the person's features unchanged and generate front, side, and back views.
Prompt: Reshoot this photo with a fisheye lens.
Input description
Input image specifications
Specification | wan2.7-image-pro, wan2.7-image | wan2.6-image | wan2.5-i2i-preview |
Number of input images | 0 to 9 (0 corresponds to text-to-image mode) | Image editing: 1 to 4 / Mixed text and image: 0 to 1 | 1 to 3 |
Image format | JPEG, JPG, PNG (alpha channel not supported), BMP, WEBP | JPEG, JPG, PNG (alpha channel not supported), BMP, WEBP | JPEG, JPG, PNG (alpha channel not supported), BMP, WEBP |
Image width and height range | [240, 8000] pixels | [240, 8000] pixels | [384, 5000] pixels |
File size | ≤ 20 MB | ≤ 10 MB | ≤ 10 MB |
Aspect ratio | [1:8, 8:1] | Unlimited | [1:4, 4:1] |
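The wan2.7 limits from the table can be checked client-side before uploading. A sketch (`check_wan27_image` is a hypothetical helper, not part of the SDK; it validates dimensions, aspect ratio, and file size only):

```python
def check_wan27_image(width, height, file_size_bytes):
    """Validate an input image against the wan2.7-image(-pro) limits above."""
    problems = []
    # Width and height must each be within [240, 8000] pixels.
    if not (240 <= width <= 8000 and 240 <= height <= 8000):
        problems.append("width and height must each be within [240, 8000] pixels")
    # Aspect ratio must be within [1:8, 8:1].
    ratio = width / height
    if not (1 / 8 <= ratio <= 8):
        problems.append("aspect ratio must be within [1:8, 8:1]")
    # File size must be at most 20 MB.
    if file_size_bytes > 20 * 1024 * 1024:
        problems.append("file size must be at most 20 MB")
    return problems  # an empty list means the image passes
```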
Image input order
When you input multiple images, their order is determined by their position in the array. Therefore, the image numbers referenced in the prompt must correspond one-to-one with the order in the image array. For example, the first image in the array is "image 1", and the second is "image 2". You can also use markers such as "[image 1]" and "[image 2]".
{
"content": [
{"text": "Editing instruction, for example: Place the alarm clock from image 1 next to the vase on the dining table in image 2"},
{"image": "https://example.com/image1.png"},
{"image": "https://example.com/image2.png"}
]
}
Input images: Image 1, Image 2 (example images omitted)
Prompt: Move image 1 onto image 2
Prompt: Move image 2 onto image 1
Image input methods
You can pass an image as a public URL (as in the examples above) or as a local file using the file:// scheme, for example "image": "file://car.png".
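For example, with the Python SDK, the content items might look like this (the URL and file path are placeholders):

```python
# Each content item is a dict; images can come from a public URL or a local file.
content = [
    {"text": "Apply the graffiti from image 2 onto the car in image 1"},
    # Public URL (must be reachable by the service):
    {"image": "https://example.com/car.png"},
    # Local file, relative to the working directory; the SDK uploads it for you:
    {"image": "file://paint.png"},
]
```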
Key features
1. Instruction following (prompts)
Parameters: messages.content.text or input.prompt (required), negative_prompt (optional).
text or prompt (Positive prompt): Describe the content, subjects, scenes, styles, lighting, and composition you want to see in the image.
negative_prompt (Negative prompt): Describe the content you do not want to appear in the image, such as "blurry" or "extra fingers". This parameter helps optimize the quality of the generated image.
Parameter | wan2.7-image-pro, wan2.7-image | wan2.6-image | wan2.5-i2i-preview |
text | Required, up to 5,000 characters | Required, up to 2,000 characters | Not supported |
prompt | Not supported | Not supported | Required, up to 2,000 characters |
negative_prompt | Not supported | Supported, up to 500 characters | Supported, up to 500 characters |
2. Enable intelligent prompt rewriting
Parameter: parameters.prompt_extend (bool, defaults to true).
This feature automatically expands and optimizes shorter prompts to improve image quality, but it also increases the response time.
Best practices:
Enable: Recommended when the input prompt is concise or broad, because this feature can enhance image quality.
Disable: Recommended if you want to control fine details, have already provided a detailed description, or are sensitive to response latency. Explicitly set the prompt_extend parameter to false.
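For models that support it, disabling rewriting is just a matter of adding the flag to the request parameters. A sketch of a parameters payload (values are illustrative, taken from the earlier examples):

```python
# Request parameters with intelligent prompt rewriting turned off.
parameters = {
    "size": "2K",
    "n": 1,
    "prompt_extend": False,  # skip automatic prompt expansion
}
```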
Parameter | wan2.7-image-pro, wan2.7-image | wan2.6-image | wan2.5-i2i-preview |
prompt_extend | Not supported | Supported (image editing mode only) | Supported |
3. Set the output image resolution
Parameter: parameters.size (string), in the format "width*height".
size settings by model:
wan2.7-image-pro, wan2.7-image:
Method 1 (Recommended): Specify an output resolution tier, such as "2K". In editing mode (with at least one input image), choose from the available resolution tiers.
Method 2: Specify the width and height pixel values of the generated image.
Only wan2.7-image-pro in text-to-image scenarios supports 4K resolution.
wan2.6-image:
Method 1 (Recommended): Reference the input image ratio in editing mode.
Method 2: Specify the width and height pixel values of the generated image. The actual output pixel values are the closest multiple of 16 to the specified values.
wan2.5-i2i-preview:
Only supports specifying the width and height pixel values of the generated image.
4. Interactive precise editing
The parameters.bbox_list parameter specifies the bounding box area for interactive editing. The format is List[List[List[int]]]. You can select items or positions in the image to edit for more accurate results. This is only supported by wan2.7-image-pro and wan2.7-image.
List length: The length of the list must match the number of input images. If an image does not require editing, use an empty list [] at the corresponding position.
Coordinate format: [x1, y1, x2, y2] (top-left x, top-left y, bottom-right x, bottom-right y). The coordinates are absolute pixel values from the original image, with the top-left corner as (0, 0).
Condition: A single image supports a maximum of 2 bounding boxes.
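These constraints can be checked before sending a request. A sketch (`validate_bbox_list` is a hypothetical helper, not part of the SDK):

```python
def validate_bbox_list(bbox_list, num_images):
    """Check a bbox_list against the interactive-editing rules above."""
    # One entry (possibly empty) per input image.
    if len(bbox_list) != num_images:
        raise ValueError("bbox_list length must match the number of input images")
    for i, boxes in enumerate(bbox_list):
        # At most 2 bounding boxes per image.
        if len(boxes) > 2:
            raise ValueError(f"image {i + 1}: at most 2 bounding boxes are allowed")
        for x1, y1, x2, y2 in boxes:
            # Absolute pixel coordinates with (0, 0) at the top-left corner.
            if not (0 <= x1 < x2 and 0 <= y1 < y2):
                raise ValueError(f"image {i + 1}: box must satisfy x1 < x2 and y1 < y2")
    return bbox_list
```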
Example: For an input of three images, where the first has two bounding boxes and the second has none:
[
[[0, 0, 12, 12], [25, 25, 100, 100]], # Image 1 (2 boxes)
[], # Image 2 (no box)
[[10, 10, 50, 50]] # Image 3 (1 box)
]
Billing and rate limits
For free quota and unit price, see Model list and pricing.
For rate limits, see Wan.
Billing:
You are charged based on the number of successfully generated images. You are charged only when the API returns a task_status of SUCCEEDED, indicating a successfully generated image.
Failed model calls or processing errors do not incur any fees or consume the free quota.
API reference
Different models use different endpoints and request structures:
Model | Endpoint (Example for the Singapore region) |
wan2.7-image-pro, wan2.7-image, wan2.6-image | Sync API and Async API
wan2.5-i2i-preview | Async API
wan2.7/wan2.6: Use the messages format. In the messages[].content array, pass the image in the image parameter and the prompt in the text parameter.
wan2.5: Pass the image in the input.images array and the prompt in the input.prompt parameter.
For more information about the input and output parameters, see Wan - Image generation and editing (wan2.7-image, wan2.6-image) and Wan - General image editing 2.5 API reference.
























Extract the clothing items from the uploaded photo and arrange them in a flat-lay display on a pure white background. Maintain realistic details and material textures. Fashion e-commerce style, suitable for clothing display.














