The Wan general image editing model can perform various image editing tasks, such as outpainting, watermark removal, style transfer, instruction-based editing, local inpainting, and image restoration, based on text prompts.
This document applies only to the China (Beijing) region. To use the model, you must use a Model Studio API key from the China (Beijing) region.
Model overview
Performance showcase
Original image | Change the girl's hair to red | Add a pair of sunglasses to the girl | Convert to French picture book style
For more information, see Key features.
Model and pricing
Beijing region
The Beijing region does not offer a free quota. All calls are billable. Please confirm the charges before you proceed.
Model | Unit price | RPS limit for task submission | Number of concurrent tasks
wanx2.1-imageedit | $0.020070/image | 2 | 2
Rate limits are shared by your Alibaba Cloud account and its RAM users.
Getting started
Prerequisites
Before you start, obtain an API key and set it as an environment variable. If you use the DashScope SDK to make calls, you must also install the SDK.
Sample code
This section shows how to call the general image editing API to perform a local inpainting task.
The SDK encapsulates the asynchronous processing logic, which makes the upper-level interface behave like a synchronous call where a single request waits for the final result. In contrast, the curl example shows two separate asynchronous API operations: one to submit a task and another to query the result.
Python
This example supports three image input methods: public URL, Base64 encoding, and local file path.
Sample request
import base64
import os
from http import HTTPStatus
from dashscope import ImageSynthesis
import mimetypes
"""
Environment requirements:
dashscope python SDK >= 1.23.8
Install/Upgrade the SDK:
pip install -U dashscope
"""
# If you have not configured environment variables, replace the following line with your Model Studio API key: api_key="sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY")
# --- Helper function: for Base64 encoding ---
# Format: data:{MIME_type};base64,{base64_data}
def encode_file(file_path):
    mime_type, _ = mimetypes.guess_type(file_path)
    if not mime_type or not mime_type.startswith("image/"):
        raise ValueError("Unsupported or unrecognizable image format")
    with open(file_path, "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
    return f"data:{mime_type};base64,{encoded_string}"
"""
Description of image input methods:
The following three image input methods are provided. You can choose one of them.
1. Use a public URL - suitable for publicly accessible images.
2. Use a local file - suitable for local development and testing.
3. Use Base64 encoding - suitable for private images or scenarios that require encrypted transmission.
"""
# [Method 1] Use a public image URL
mask_image_url = "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3_mask.png"
base_image_url = "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3.jpeg"
# [Method 2] Use a local file (absolute and relative paths are supported)
# Format requirement: file:// + file path
# Example (absolute path):
# mask_image_url = "file://" + "/path/to/your/mask_image.png" # Linux/macOS
# base_image_url = "file://" + "C:/path/to/your/base_image.jpeg" # Windows
# Example (relative path):
# mask_image_url = "file://" + "./mask_image.png" # Subject to the actual path
# base_image_url = "file://" + "./base_image.jpeg" # Subject to the actual path
# [Method 3] Use a Base64-encoded image
# mask_image_url = encode_file("./mask_image.png") # Subject to the actual path
# base_image_url = encode_file("./base_image.jpeg") # Subject to the actual path
def sample_sync_call_imageedit():
    print('please wait...')
    rsp = ImageSynthesis.call(api_key=api_key,
                              model="wanx2.1-imageedit",
                              function="description_edit_with_mask",
                              prompt="A ceramic rabbit holding a ceramic flower",
                              mask_image_url=mask_image_url,
                              base_image_url=base_image_url,
                              n=1)
    print('response: %s' % rsp)
    if rsp.status_code == HTTPStatus.OK:
        for result in rsp.output.results:
            print("---------------------------")
            print(result.url)
    else:
        print('sync_call Failed, status_code: %s, code: %s, message: %s' %
              (rsp.status_code, rsp.code, rsp.message))

if __name__ == '__main__':
    sample_sync_call_imageedit()

Sample response
The URL is valid for 24 hours. Download the image promptly.
{
"status_code": 200,
"request_id": "dc41682c-4e4a-9010-bc6f-xxxxxx",
"code": null,
"message": "",
"output": {
"task_id": "6e319d88-a07a-420c-9493-xxxxxx",
"task_status": "SUCCEEDED",
"results": [
{
"url": "https://dashscope-result-wlcb-acdr-1.oss-cn-wulanchabu-acdr-1.aliyuncs.com/xxx.png?xxxxxx"
}
],
"submit_time": "2025-05-26 14:58:27.320",
"scheduled_time": "2025-05-26 14:58:27.339",
"end_time": "2025-05-26 14:58:39.170",
"task_metrics": {
"TOTAL": 1,
"SUCCEEDED": 1,
"FAILED": 0
}
},
"usage": {
"image_count": 1
}
}

Java
This example supports three image input methods: public URL, Base64 encoding, and local file path.
Sample request
// Copyright (c) Alibaba, Inc. and its affiliates.
import com.alibaba.dashscope.aigc.imagesynthesis.ImageSynthesis;
import com.alibaba.dashscope.aigc.imagesynthesis.ImageSynthesisParam;
import com.alibaba.dashscope.aigc.imagesynthesis.ImageSynthesisResult;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.utils.JsonUtils;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Base64;
import java.util.HashMap;
import java.util.Map;
/**
* Environment requirements
* dashscope java SDK >=2.20.9
* Update Maven dependencies:
* https://mvnrepository.com/artifact/com.alibaba/dashscope-sdk-java
*/
public class ImageEditSync {
    // If you have not configured environment variables, replace the following line with your Model Studio API key: apiKey="sk-xxx"
    static String apiKey = System.getenv("DASHSCOPE_API_KEY");

    /**
     * Description of image input methods: Select one of the three methods
     *
     * 1. Use a public URL - suitable for publicly accessible images.
     * 2. Use a local file - suitable for local development and testing.
     * 3. Use Base64 encoding - suitable for private images or scenarios that require encrypted transmission.
     */
    // [Method 1] Public URL
    static String maskImageUrl = "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3_mask.png";
    static String baseImageUrl = "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3.jpeg";
    // [Method 2] Local file path (file:// + absolute path or file:/// + absolute path)
    // static String maskImageUrl = "file://" + "/your/path/to/mask_image.png"; // Linux/macOS
    // static String baseImageUrl = "file:///" + "C:/your/path/to/base_image.png"; // Windows
    // [Method 3] Base64 encoding
    // static String maskImageUrl = encodeFile("/your/path/to/mask_image.png");
    // static String baseImageUrl = encodeFile("/your/path/to/base_image.png");

    public static void syncCall() {
        // Set the parameters map
        Map<String, Object> parameters = new HashMap<>();
        parameters.put("prompt_extend", true);
        ImageSynthesisParam param =
                ImageSynthesisParam.builder()
                        .apiKey(apiKey)
                        .model("wanx2.1-imageedit")
                        .function(ImageSynthesis.ImageEditFunction.DESCRIPTION_EDIT_WITH_MASK)
                        .prompt("A ceramic rabbit holding a ceramic flower")
                        .maskImageUrl(maskImageUrl)
                        .baseImageUrl(baseImageUrl)
                        .n(1)
                        .size("1024*1024")
                        .parameters(parameters)
                        .build();
        ImageSynthesis imageSynthesis = new ImageSynthesis();
        ImageSynthesisResult result = null;
        try {
            System.out.println("---sync call, please wait a moment----");
            result = imageSynthesis.call(param);
        } catch (ApiException | NoApiKeyException e) {
            throw new RuntimeException(e.getMessage());
        }
        System.out.println(JsonUtils.toJson(result));
    }

    /**
     * Encode a file into a Base64 string
     * @param filePath File path
     * @return Base64 string in the format of data:{MIME_type};base64,{base64_data}
     */
    public static String encodeFile(String filePath) {
        Path path = Paths.get(filePath);
        if (!Files.exists(path)) {
            throw new IllegalArgumentException("File does not exist: " + filePath);
        }
        // Detect the MIME type
        String mimeType;
        try {
            mimeType = Files.probeContentType(path);
        } catch (IOException e) {
            throw new IllegalArgumentException("Cannot detect the file type: " + filePath);
        }
        if (mimeType == null || !mimeType.startsWith("image/")) {
            throw new IllegalArgumentException("Unsupported or unrecognizable image format");
        }
        // Read the file content and encode it
        byte[] fileBytes;
        try {
            fileBytes = Files.readAllBytes(path);
        } catch (IOException e) {
            throw new IllegalArgumentException("Cannot read the file content: " + filePath);
        }
        String encodedString = Base64.getEncoder().encodeToString(fileBytes);
        return "data:" + mimeType + ";base64," + encodedString;
    }

    public static void main(String[] args) {
        syncCall();
    }
}
Sample response
The URL is valid for 24 hours. Download the image promptly.
{
"request_id": "bf6c6361-f0fc-949c-9d60-xxxxxx",
"output": {
"task_id": "958db858-153b-4c81-b243-xxxxxx",
"task_status": "SUCCEEDED",
"results": [
{
"url": "https://dashscope-result-wlcb-acdr-1.oss-cn-wulanchabu-acdr-1.aliyuncs.com/xxx.png?xxxxxx"
}
],
"task_metrics": {
"TOTAL": 1,
"SUCCEEDED": 1,
"FAILED": 0
}
},
"usage": {
"image_count": 1
}
}

curl
This example covers the entire process: creating a task, polling its status, and retrieving and saving the result.
For asynchronous calls, you must set the header parameter X-DashScope-Async to enable. The task_id for an asynchronous task is valid for 24 hours. After it expires, the task status changes to UNKNOWN.
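The create-and-poll flow described in Steps 1 and 2 can also be scripted. The following Python sketch is illustrative, not official SDK code: it assumes the Beijing task-query endpoint shown in Step 2, uses only the standard library, and the backoff values (3 s growing to 15 s) are an example schedule, not a documented requirement.

```python
import json
import time
import urllib.request

# Beijing-region task query endpoint (see Step 2)
TASK_URL = "https://dashscope.aliyuncs.com/api/v1/tasks/{}"

def poll_intervals(initial=3, max_interval=15, growth=2):
    """Yield wait times: start at `initial` seconds, then grow geometrically up to `max_interval`."""
    interval = initial
    while True:
        yield interval
        interval = min(interval * growth, max_interval)

def query_task(task_id, api_key):
    """Fetch the current status of an asynchronous task (a single GET request)."""
    req = urllib.request.Request(
        TASK_URL.format(task_id),
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def wait_for_task(task_id, api_key, max_polls=20):
    """Poll until the task leaves PENDING/RUNNING, sleeping between attempts."""
    for _, delay in zip(range(max_polls), poll_intervals()):
        result = query_task(task_id, api_key)
        status = result["output"]["task_status"]
        if status not in ("PENDING", "RUNNING"):
            return result  # SUCCEEDED, FAILED, or UNKNOWN
        time.sleep(delay)
    raise TimeoutError(f"task {task_id} did not finish after {max_polls} polls")
```

Increasing the interval between polls keeps long-running tasks from triggering rate limits, as recommended in the Best practices section.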
Step 1: Create a task
This request returns a task ID (task_id).
Sample request
curl --location 'https://dashscope.aliyuncs.com/api/v1/services/aigc/image2image/image-synthesis' \
--header 'X-DashScope-Async: enable' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
"model": "wanx2.1-imageedit",
"input": {
"function": "description_edit_with_mask",
"prompt": "A ceramic rabbit holding a ceramic flower.",
"base_image_url": "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3.jpeg",
"mask_image_url": "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3_mask.png"
},
"parameters": {
"n": 1
}
}'

Sample response
{
"output": {
"task_status": "PENDING",
"task_id": "0385dc79-5ff8-4d82-bcb6-xxxxxx"
},
"request_id": "4909100c-7b5a-9f92-bfe5-xxxxxx"
}

Step 2: Query the result by task ID
Use the task_id from the previous step to poll the task status until task_status becomes SUCCEEDED or FAILED.
Sample request
Replace 86ecf553-d340-4e21-xxxxxxxxx with the actual task ID.
This document applies to the China (Beijing) region, so the following request uses the Beijing endpoint (https://dashscope.aliyuncs.com).
curl -X GET https://dashscope.aliyuncs.com/api/v1/tasks/86ecf553-d340-4e21-xxxxxxxxx \
--header "Authorization: Bearer $DASHSCOPE_API_KEY"

Sample response
The image URL is valid for 24 hours. Download the image promptly.
{
"request_id": "eeef0935-02e9-9742-bb55-xxxxxx",
"output": {
"task_id": "a425c46f-dc0a-400f-879e-xxxxxx",
"task_status": "SUCCEEDED",
"submit_time": "2025-02-21 17:56:31.786",
"scheduled_time": "2025-02-21 17:56:31.821",
"end_time": "2025-02-21 17:56:42.530",
"results": [
{
"url": "https://dashscope-result-sh.oss-cn-shanghai.aliyuncs.com/aaa.png"
}
],
"task_metrics": {
"TOTAL": 1,
"SUCCEEDED": 1,
"FAILED": 0
}
},
"usage": {
"image_count": 1
}
}

Key features
The general image editing API uses the function parameter to specify different image editing features. All features use the same calling method described in Getting started.
The following examples use curl calls and list only the input and parameters JSON snippets specific to each feature to illustrate what to configure in the request body.
Note: A complete curl request must include top-level fields such as model, input, and parameters. For more information about the request structure, see General image editing API reference.
Global stylization
This feature transfers a specified artistic style to the entire image. It is useful for scenarios such as creating picture books or generating social media images, including backgrounds or concept images that conform to a specific visual style.
How to use: Set function to stylization_all.
Supported styles:
French picture book style
Gold foil art style
Associated parameter: parameters.strength (0.0 to 1.0, default 0.5) controls the degree of modification. A smaller value produces a result that is closer to the original image.
Prompt tip: Use the "Convert to xx style" format, such as "Convert to French picture book style".
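For example, the feature-specific part of the request body might look like the following (the image URL is a placeholder; the strength value is illustrative):

```json
{
    "input": {
        "function": "stylization_all",
        "prompt": "Convert to French picture book style",
        "base_image_url": "http://example.com/your_image.jpeg"
    },
    "parameters": {
        "n": 1,
        "strength": 0.5
    }
}
```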
Input image | Output image (Prompt: Convert to French picture book style) | Output image (Prompt: Convert to gold foil art style)
Control the image modification degree with strength
Use the optional parameters.strength parameter to control the degree of image modification. The value ranges from 0.0 to 1.0, and the default is 0.5. A value closer to 0 means the output image is more similar to the original. A value closer to 1 indicates a greater degree of modification.
Input prompt: Convert to French picture book style.
Input image | Output image (strength=0.0, minimum value) | Output image (strength=0.5, default value) | Output image (strength=1.0, maximum value)
Local stylization
Performs style transfer only on a local area of an image. Scenarios include personalized customization (such as stylizing a character's clothing) and ad design (highlighting a product).
How to use: Set function to stylization_local.
Supported styles: The supported styles and their corresponding values are:
Ice sculpture: ice
Cloud
Chinese festive lantern: chinese festive lantern
Plank: wooden
Blue and white porcelain
Fluffy: fluffy
Yarn: weaving
Balloon: balloon
Prompt tip: Use the "Change xx to xx style" format, such as "Change the house to wooden style".
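A possible feature-specific request snippet, using the prompt format above (the image URL is a placeholder):

```json
{
    "input": {
        "function": "stylization_local",
        "prompt": "Change the house to wooden style",
        "base_image_url": "http://example.com/your_image.jpeg"
    },
    "parameters": {
        "n": 1
    }
}
```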
Input image | Output images: Ice sculpture, Cloud, Chinese festive lantern, Plank, Blue and white porcelain, Fluffy, Yarn, Balloon
Instruction-based editing
Adds or modifies image content using only text instructions without specifying an area. Scenarios include simple edits that do not require precise positioning, such as adding accessories to a character or changing hair color.
How to use: Set function to description_edit.
Associated parameter: parameters.strength (0.0 to 1.0, default 0.5) controls the degree of modification. A smaller value produces a result that is closer to the original image.
Prompt tip: Explicitly include action descriptions such as "add" or "modify". For delete operations, we recommend that you use Local inpainting.
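A possible feature-specific request snippet, using one of the example prompts below (the image URL is a placeholder):

```json
{
    "input": {
        "function": "description_edit",
        "prompt": "Change the girl's hair to red",
        "base_image_url": "http://example.com/your_image.jpeg"
    },
    "parameters": {
        "n": 1,
        "strength": 0.5
    }
}
```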
Capabilities | Prompt
Add an element | Add a pair of sunglasses to the kitten.
Modify an element | Change the girl's hair to red.
Control the image modification degree with strength
Use the optional parameters.strength parameter to control the degree of image modification. The value ranges from 0.0 to 1.0, and the default is 0.5. A value closer to 0 means the output image is more similar to the original. A value closer to 1 indicates a greater degree of modification.
Input prompt: Change the girl's clothes to a colorful printed beach shirt.
Input image | Output image (strength=0.0, minimum value) | Output image (strength=0.5, default value) | Output image (strength=1.0, maximum value)
Local inpainting
Adds, modifies, or deletes content in a specified area by providing a mask image. Scenarios include edits that require precise control, such as changing clothes, replacing objects, and removing interfering objects.
How to use: Set the parameter function to description_edit_with_mask.
Mask requirements: You must provide mask_image_url. The white area in the mask image is the area to be edited, and the black area is the area to be retained.
Prompt tip: Explicitly include action descriptions such as "add" or "modify", and describe the desired content for the edited area.
Add/Modify: Describe the action ("Add a hat to the puppy") or the final result ("A puppy wearing a hat").
Delete: If the object to be deleted is small, you can leave the prompt empty (""). If the object is large, the prompt must describe the background content that should appear after deletion (such as "A transparent glass vase on the table"), instead of "Delete the bear".
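Using the sample images from the Getting started section, the feature-specific part of the request body looks like this:

```json
{
    "input": {
        "function": "description_edit_with_mask",
        "prompt": "A ceramic rabbit holding a ceramic flower",
        "base_image_url": "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3.jpeg",
        "mask_image_url": "http://wanx.alicdn.com/material/20250318/description_edit_with_mask_3_mask.png"
    },
    "parameters": {
        "n": 1
    }
}
```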
Capabilities | Prompt (a mask image is required; white marks the area to be edited)
Add an element | Add a hat to the puppy. (You can also write "A puppy wearing a hat" to describe the expected image content.)
Modify an element | A ceramic rabbit holding a ceramic flower. (You can also write "Replace the carrot held by the ceramic rabbit with a ceramic flower" to describe the action.)
Delete an element | A transparent glass vase on the table. (The prompt needs to describe the content after deletion. Do not write it as "Delete a brown bear".)
Text and watermark removal
Legal risk warning
Using this feature to process copyrighted images (such as removing another brand's watermark) may constitute copyright infringement. Ensure that you have the legal right to use the processed image and assume all related legal responsibilities.
Removes Chinese and English characters or common watermarks from images. Scenarios include secondary processing of materials and ad image cleanup.
How to use: Set function to remove_watermark.
Prompt tip: Use a general instruction such as "Remove the text in the image", or specify a type such as "Remove the English text".
Input image | Output image (Prompt: Remove the text in the image)
Outpainting
Expands the image proportionally up, down, left, and right, and intelligently fills in the content. Scenarios include adjusting composition and expanding a vertical image into a horizontal one to fit different media sizes.
How to use: Set function to expand.
Related parameters: The top_scale, bottom_scale, left_scale, and right_scale parameters in parameters control the expansion ratios for the four directions, respectively (for example, setting all of them to 1.5 expands the content to 1.5 times its original size).
Prompt tip: Describe the complete scene you expect to see after expansion.
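A possible feature-specific request snippet (the prompt and image URL are placeholders; the 1.5 ratios match the example above):

```json
{
    "input": {
        "function": "expand",
        "prompt": "A wide beach scene with sea and sky",
        "base_image_url": "http://example.com/your_image.jpeg"
    },
    "parameters": {
        "n": 1,
        "top_scale": 1.5,
        "bottom_scale": 1.5,
        "left_scale": 1.5,
        "right_scale": 1.5
    }
}
```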
Input image | Output image
Image super resolution
This feature improves image definition and supports upscaling to clarify low-resolution or blurry images. It is useful for scenarios such as restoring old photos or increasing material resolution to meet high-definition printing or display requirements.
How to use: Set the parameter function to super_resolution.
Associated parameter: parameters.upscale_factor (1 to 4, default 1) controls the upscaling factor. When the value is 1, the model only improves the definition without upscaling.
Prompt tip: Use "Image super resolution" or describe the image content.
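A possible feature-specific request snippet (the image URL is a placeholder; the factor of 2 is an illustrative choice within the documented 1 to 4 range):

```json
{
    "input": {
        "function": "super_resolution",
        "prompt": "Image super resolution",
        "base_image_url": "http://example.com/your_blurry_image.jpeg"
    },
    "parameters": {
        "n": 1,
        "upscale_factor": 2
    }
}
```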
Input image (blurry) | Output image (clear)
Image colorization
Converts a black and white or grayscale image to a color image (black and white/grayscale → color). Scenarios include colorizing historical photos and adding color to line art or grayscale images.
How to use: Set function to colorization.
Prompt tip: You can leave the prompt empty to let the model colorize automatically, or specify the colors of key elements in the prompt (such as "blue background, yellow leaves").
Input image | Output image
Line art to image (supports doodle-based drawing)
Generates a new image based on the outline (line art) of an input image and a prompt. Use cases include architectural concept design, illustration design, and doodle-based drawing.
How to use: Set function to doodle.
Related parameter: parameters.is_sketch controls the model's generation result.
false (default): The input is an RGB image. The model first extracts the line art and then generates an image (RGB image → line art → new image).
true: The input is an RGB image (such as a doodle or line art). The model directly generates an image based on this input (RGB image → new image).
Prompt tip: Describe the expected image content. The more specific the description, the better the generated result.
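A possible feature-specific request snippet for doodle-based drawing (the image URL is a placeholder; is_sketch is set to true because the input is already line art):

```json
{
    "input": {
        "function": "doodle",
        "prompt": "A tree, in a two-dimensional anime style",
        "base_image_url": "http://example.com/your_sketch.png"
    },
    "parameters": {
        "n": 1,
        "is_sketch": true
    }
}
```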
Capabilities | Prompt
Line art to image (is_sketch=false) | A living room in a minimalist Nordic style.
Doodle-based drawing (is_sketch=true) | A tree, in a two-dimensional anime style.
Generate image based on a reference cartoon character
Legal risk warning
Using this feature to process copyrighted cartoon characters may constitute copyright infringement. You must have the legal right to use the referenced character or use your own original character. You must also assume all related legal responsibilities.
How to use: Set the parameter function to control_cartoon_feature.
Prompt tip: Use the "The cartoon character ..." format and describe the character's actions and environment in detail.
Input image | Output image
Going live
Best practices
Asynchronous polling: When polling for the result of an asynchronous task, use a reasonable polling policy (such as polling every 3 seconds for the first 30 seconds, then increasing the interval) to avoid triggering rate limits.
Parameter tuning: For key parameters that affect the results, such as strength, we recommend performing small-scale tests before going live to determine the optimal values for your scenario.
Image storage: The image URL returned by the API is valid for 24 hours. Download and transfer the generated images to your own persistent storage service, such as Alibaba Cloud Object Storage Service (OSS), before they expire.
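As a sketch of the image-storage recommendation, the helper below downloads one result URL to a local file before the 24-hour expiry. The filename logic is illustrative; in production you would upload the bytes to OSS instead of writing them locally.

```python
import os
import urllib.parse
import urllib.request

def filename_from_url(url, default="result.png"):
    """Derive a local filename from the result URL's path, ignoring the signed query string."""
    name = os.path.basename(urllib.parse.urlparse(url).path)
    return name or default

def save_result(url, out_dir="."):
    """Download one generated image to out_dir and return the local path."""
    path = os.path.join(out_dir, filename_from_url(url))
    urllib.request.urlretrieve(url, path)  # the signed URL expires 24 hours after generation
    return path
```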
Risk prevention
Error handling: Check the task_status in the task query result. If the status is FAILED, record the code and message for troubleshooting. Some errors, such as system timeouts, may be transient, so you can implement retry logic.
Content moderation: The API performs a security review on all input and output content. If the content is non-compliant, the API returns a DataInspectionFailed error.
API reference
For more information about the input and response parameters of the Wan - General Image Editing model, see General image editing API reference.
Billing and rate limiting
For model free quotas and pricing, see Model List and Prices.
For more information about model rate limiting, see Wan.
Billing details:
You are billed for each successfully generated image. Billing occurs only when the API returns a task_status of SUCCEEDED.
Failed model calls or processing errors do not incur any fees or consume the new user free quota.
Error codes
If a call fails, see Error messages for troubleshooting.
FAQ
Q: Why did my task fail (FAILED)?
A: Common reasons for task failure include the following:
Content moderation failure: The input or generated image content triggered a security policy.
Parameter error: The parameters in the request are invalid, such as an incorrect function name or an inaccessible URL.
Internal model error: The model encountered an unexpected issue during processing. Check the code and message fields in the task query response to retrieve the specific error code and message for troubleshooting.




















Wool yarn






























