Qwen-Image-Edit supports multi-image input and output. It can accurately modify text in images, add, delete, or move objects, change subject actions, transfer image styles, and enhance details.
Getting started
This example shows how to use qwen-image-edit-plus to generate two edited images based on three input images and a prompt.
Input prompt: The girl in Image 1 wears the black dress from Image 2 and sits in the pose from Image 3.
Input image 1 | Input image 2 | Input image 3 | Output images (multiple images)
Before making a call, obtain an API key and set the API key as an environment variable.
To make calls using the SDK, install the DashScope SDK. The SDK is available for Python and Java.
The qwen-image-edit series of models supports one to three input images. qwen-image-edit-plus and qwen-image-edit-plus-2025-10-30 can generate one to six images, while qwen-image-edit can generate only one image. The URLs for the generated images are valid for 24 hours. Download the images to your local device promptly.
Python
import json
import os
import dashscope
from dashscope import MultiModalConversation

# The following is the URL for the Singapore region. If you use a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

# The model supports one to three input images.
messages = [
    {
        "role": "user",
        "content": [
            {"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"},
            {"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"},
            {"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"},
            {"text": "Make the girl from Image 1 wear the black dress from Image 2 and sit in the pose from Image 3."}
        ]
    }
]

# The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key = "sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY")

# The model supports only single-turn conversations and reuses the multi-turn conversation API.
# qwen-image-edit-plus supports outputting 1 to 6 images. This example outputs 2 images.
response = MultiModalConversation.call(
    api_key=api_key,
    model="qwen-image-edit-plus",
    messages=messages,
    stream=False,
    n=2,
    watermark=False,
    negative_prompt=" ",
    prompt_extend=True,
    # The size parameter is supported only when the number of output images n is 1. Otherwise, an error is reported.
    # size="1024*2048",
)

if response.status_code == 200:
    # To view the full response, uncomment the following line.
    # print(json.dumps(response, ensure_ascii=False))
    for i, content in enumerate(response.output.choices[0].message.content):
        print(f"URL of output image {i+1}: {content['image']}")
else:
    print(f"HTTP status code: {response.status_code}")
    print(f"Error code: {response.code}")
    print(f"Error message: {response.message}")
    print("For more information, see the documentation: https://www.alibabacloud.com/help/en/model-studio/error-code")
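The URLs returned above expire after 24 hours, so it is worth saving each image locally as soon as the call succeeds. A minimal sketch using only the Python standard library (the `output_filename` and `download_images` helpers are illustrative, not part of the DashScope SDK):

```python
import os
import urllib.request

def output_filename(index, dest_dir="."):
    """Derive a local filename for the index-th output image (1-based)."""
    return os.path.join(dest_dir, f"output_{index}.png")

def download_images(urls, dest_dir="."):
    """Download each generated image before its URL expires (24 hours)."""
    os.makedirs(dest_dir, exist_ok=True)
    paths = []
    for i, url in enumerate(urls, start=1):
        path = output_filename(i, dest_dir)
        urllib.request.urlretrieve(url, path)  # fetch the image and save it to disk
        paths.append(path)
    return paths

# Usage with the response from the example above:
# download_images([c["image"] for c in response.output.choices[0].message.content])
```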
Java
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.utils.Constants;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.List;

public class QwenImageEdit {
    static {
        // The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1.
        Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
    }

    // The API keys for the Singapore and Beijing regions are different. To obtain an API key, visit https://www.alibabacloud.com/help/en/model-studio/get-api-key.
    // If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey="sk-xxx"
    static String apiKey = System.getenv("DASHSCOPE_API_KEY");

    public static void call() throws ApiException, NoApiKeyException, UploadFileException, IOException {
        MultiModalConversation conv = new MultiModalConversation();
        // The model supports one to three input images.
        MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
                .content(Arrays.asList(
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"),
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"),
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"),
                        Collections.singletonMap("text", "The girl in image 1 is wearing the black dress from image 2 and sitting in the pose from image 3.")
                )).build();

        // qwen-image-edit-plus supports outputting 1 to 6 images. This example outputs 2 images.
        Map<String, Object> parameters = new HashMap<>();
        parameters.put("watermark", false);
        parameters.put("negative_prompt", " ");
        parameters.put("n", 2);
        parameters.put("prompt_extend", true);
        // The size parameter is supported only when the number of output images n is 1. Otherwise, an error is reported.
        // parameters.put("size", "1024*2048");

        MultiModalConversationParam param = MultiModalConversationParam.builder()
                .apiKey(apiKey)
                .model("qwen-image-edit-plus")
                .messages(Collections.singletonList(userMessage))
                .parameters(parameters)
                .build();
        MultiModalConversationResult result = conv.call(param);

        // To view the complete response, uncomment the following line.
        // System.out.println(JsonUtils.toJson(result));
        List<Map<String, Object>> contentList = result.getOutput().getChoices().get(0).getMessage().getContent();
        int imageIndex = 1;
        for (Map<String, Object> content : contentList) {
            if (content.containsKey("image")) {
                System.out.println("URL of output image " + imageIndex + ": " + content.get("image"));
                imageIndex++;
            }
        }
    }

    public static void main(String[] args) {
        try {
            call();
        } catch (ApiException | NoApiKeyException | UploadFileException | IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
curl
The following command uses the URL for the Singapore region. If you use a model in the China (Beijing) region, replace the URL with: https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation
curl --location 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--data '{
    "model": "qwen-image-edit-plus",
    "input": {
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"
                    },
                    {
                        "image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"
                    },
                    {
                        "image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"
                    },
                    {
                        "text": "The girl in Image 1 wears the black dress from Image 2 and sits down in the pose from Image 3."
                    }
                ]
            }
        ]
    },
    "parameters": {
        "n": 2,
        "negative_prompt": " ",
        "prompt_extend": true,
        "watermark": false
    }
}'
Showcase
Multi-image fusion
Input image 1 | Input image 2 | Input image 3 | Output image
The girl in Image 1 wears the necklace from Image 2 and carries the bag from Image 3 on her left shoulder.
Subject consistency
Input image | Output image 1 | Output image 2 | Output image 3
Change to a certificate photo with a blue background. The person wears a white shirt, a black suit, and a striped tie.
The person wears a white shirt, a gray suit, and a striped tie, with one hand on the tie, against a light-colored background.
The person wears a black hoodie with "Qwen Image" in a thick brush font, leans against a guardrail with sunlight on their hair, and a bridge and the sea are in the background.
Place this air conditioner in the living room, next to the sofa.
Add mist coming from the air conditioner's vent, extending to the sofa, and add green leaves.
Add the white handwritten text "Natural Fresh Air, Enjoy Breathing" at the top.
Sketch creation
Input image | Output image
Generate an image that matches the detailed shape outlined in Image 1 and follows this description: A young woman smiles on a sunny day. She wears round brown sunglasses with a leopard print frame. Her hair is neatly tied up, she wears pearl earrings, a dark blue scarf with white star patterns, and a black leather jacket.
Generate an image that matches the detailed shape outlined in Image 1 and follows this description: An elderly man smiles at the camera. His face is wrinkled, his hair is messy in the wind, and he wears round-framed reading glasses. He has a worn-out red scarf with star patterns around his neck and is wearing a cotton-padded jacket.
Creative product generation
Input image | Output image
Make this bear sit under the moon (represented by a light gray crescent outline on a white background), holding a guitar, with small stars and speech bubbles with phrases such as "Be Kind" floating around.
Print this design on a T-shirt and a paper tote bag. A female model is displaying these items. She is also wearing a baseball cap with "Be kind" written on it.
A hyper-realistic 1/7 scale character model, designed as a commercial finished product, is placed on an iMac computer with a white keyboard. The model stands on a clean, round, transparent acrylic base with no labels or text. Professional studio lighting highlights the sculpted details. The ZBrush modeling process for the same model is displayed on the iMac screen in the background. Next to the model, place a packaging box with a transparent window on the front, showing only the clear plastic shell inside, which is slightly taller than the model and reasonably sized to hold it.
This bear is wearing an astronaut suit and pointing into the distance.
This bear is wearing a gorgeous ball gown, with its arms spread in an elegant dance pose.
This bear is wearing sportswear, holding a basketball, with one leg bent.
Generate image from depth map
Input image | Output image
Generate an image that matches the depth map outlined in Image 1 and follows this description: A blue bicycle is parked in a side alley, with a few weeds growing from cracks in the stone in the background.
Generate an image that matches the depth map outlined in Image 1 and follows this description: A worn-out red bicycle is parked on a muddy path, with a dense primeval forest in the background.
Generate image from keypoints
Input image | Output image
Generate an image that matches the human pose outlined in Image 1 and follows this description: A Chinese woman in a Hanfu is holding an oil-paper umbrella in the rain, with a Suzhou garden in the background.
Generate an image that matches the human pose outlined in Image 1 and follows this description: A young man stands on a subway platform. He wears a baseball cap, a T-shirt, and jeans. A train is speeding by behind him.
Text editing
Input image | Output image | Input image | Output image
Replace 'HEALTH INSURANCE' on the Scrabble tiles with 'Tomorrow will be better'.
Change the phrase "Take a Breather" on the note to "Relax and Recharge".
Input image | Output image
Change "Qwen-Image" to a black ink-drip font.
Change "Qwen-Image" to a black handwriting font.
Change "Qwen-Image" to a black pixel font.
Change "Qwen-Image" to red.
Change "Qwen-Image" to a blue-purple gradient.
Change "Qwen-Image" to candy colors.
Change the material of "Qwen-Image" to metal.
Change the material of "Qwen-Image" to clouds.
Change the material of "Qwen-Image" to glass.
Add, delete, modify, and replace
Capability | Input image | Output image
Add element: Add a small wooden sign in front of the penguin that says "Welcome to Penguin Beach".
Delete element: Remove the hair from the plate.
Replace element: Change the peaches to apples.
Portrait modification: Make her close her eyes.
Pose modification: She raises her hands with palms facing the camera and fingers spread in a playful pose.
Viewpoint transformation
Input image | Output image | Input image | Output image
Get a front view.
Face left.
Get a rear view.
Face right.
Background replacement
Input image | Output image
Change the background to a beach.
Replace the original background with a realistic modern classroom scene. In the center of the background is a traditional dark green or black blackboard. The word "Qwen" is neatly written on the blackboard in white chalk.
Old photo processing
Capability | Input image | Output image
Old photo restoration and colorization: Restore the old photo, remove scratches, reduce noise, enhance details, high resolution, realistic image, natural skin tone, clear facial features, no distortion.
Intelligently colorize the image based on its content to make it more vivid.
Input instructions
Input parameter structure (messages)
The input is a messages array. Currently, only single-turn conversations are supported, so the array must contain exactly one object. This object has the role and content properties. The role must be set to user. The content must include one to three image entries and one text entry that contains the editing instruction.
"messages": [
    {
        "role": "user",
        "content": [
            { "image": "Public URL or Base64 data of Image 1" },
            { "image": "Public URL or Base64 data of Image 2" },
            { "image": "Public URL or Base64 data of Image 3" },
            { "text": "Your editing instruction, for example: 'The girl in Image 1 wears the black dress from Image 2 and sits in the pose from Image 3.'" }
        ]
    }
]
Image input order
When you edit multiple images, references such as 'Image 1' and 'Image 2' in the editing instruction must correspond to the order in which the images appear in the content field. Otherwise, the results may be unexpected.
Input image 1 | Input image 2 | Output image
Replace the clothes of the girl in Image 1 with the clothes of the girl in Image 2.
Replace the clothes of the girl in Image 2 with the clothes of the girl in Image 1.
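The ordering rule above can be enforced in code by building the content list from an ordered sequence of image URLs. A minimal sketch (`build_content` is an illustrative helper, not part of the DashScope SDK):

```python
def build_content(image_urls, instruction):
    """Build the content list for a qwen-image-edit request.

    The position of each URL in image_urls determines how 'Image 1',
    'Image 2', ... in the instruction are resolved, so pass the list in the
    order you reference the images.
    """
    if not 1 <= len(image_urls) <= 3:
        raise ValueError("qwen-image-edit models accept one to three input images")
    return [{"image": url} for url in image_urls] + [{"text": instruction}]
```

The returned list can be placed directly in the `content` field of the single user message.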
Image input methods
Public URL
You can provide a publicly accessible image URL that supports the HTTP or HTTPS protocol.
Example value:
https://xxxx/img.png
Base64 encoding
You can convert the image file to a Base64-encoded string and concatenate it in the format: data:{mime_type};base64,{base64_data}.
{mime_type}: The media type of the image, which must correspond to the file format.
{base64_data}: The Base64-encoded string of the file.
Example value:
data:image/jpeg;base64,GDU7MtCZz... (The example is truncated for demonstration purposes.)
For complete code examples, see Python SDK and Java SDK.
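The data URI format described above can be produced with a few lines of standard-library Python. A minimal sketch (`to_data_uri_bytes` and `to_data_uri` are illustrative helper names, not SDK functions):

```python
import base64
import mimetypes

def to_data_uri_bytes(data, mime_type):
    """Wrap raw image bytes in the data:{mime_type};base64,{base64_data} format."""
    return f"data:{mime_type};base64,{base64.b64encode(data).decode('ascii')}"

def to_data_uri(path):
    """Read a local image file and encode it as a Base64 data URI.

    The MIME type is guessed from the file extension, which must match the
    actual file format.
    """
    mime_type, _ = mimetypes.guess_type(path)
    if mime_type is None:
        raise ValueError(f"cannot determine the MIME type of {path}")
    with open(path, "rb") as f:
        return to_data_uri_bytes(f.read(), mime_type)
```

The returned string can be used directly as an "image" value in the messages array, in place of a public URL.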
More parameters
You can adjust the generation results using the following optional parameters:
n: The number of images to generate. The default value is 1. qwen-image-edit-plus and qwen-image-edit-plus-2025-10-30 support 1 to 6 images; qwen-image-edit supports only 1 image.
negative_prompt: Describes content to exclude from the image, such as "blur" or "extra fingers". This parameter helps optimize the quality of the generated image.
watermark: Specifies whether to add a "Qwen-Image" watermark to the bottom-right corner of the image. The default value is false. The following image shows the watermark style:
seed: Specifies the random number seed, an integer in the range [0, 2147483647]. If this parameter is not specified, the algorithm generates a random seed. Using the same seed value helps ensure that the generated content is relatively consistent.
The following optional parameters are available only for qwen-image-edit-plus and qwen-image-edit-plus-2025-10-30:
size: Specifies the resolution of the output image as a width*height string, such as "1024*2048". The width and height can range from 512 to 2048 pixels. This parameter takes effect only when the number of output images, n, is 1. Otherwise, an error is returned. If this parameter is not set, the output image keeps an aspect ratio similar to the original image, with a resolution close to 1024*1024.
prompt_extend: Specifies whether to enable the prompt rewriting feature. This feature is enabled by default. When enabled, the service uses a large model to optimize the prompt, which can significantly improve results for simple or less descriptive prompts.
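The size constraint can be checked client-side before a call is made, avoiding a round trip that is guaranteed to fail. A minimal sketch based only on the rules documented above (`validate_size` is an illustrative helper, not part of the DashScope SDK):

```python
def validate_size(n, size):
    """Pre-flight check of the size parameter: size is accepted only when
    n is 1, and both dimensions must lie within [512, 2048] pixels."""
    if size is None:
        return  # size omitted: the service picks a resolution close to 1024*1024
    if n != 1:
        raise ValueError("size is supported only when the number of output images n is 1")
    width, height = (int(v) for v in size.split("*"))
    for dim in (width, height):
        if not 512 <= dim <= 2048:
            raise ValueError("width and height must be within [512, 2048] pixels")

validate_size(1, "1024*2048")  # OK
validate_size(2, None)         # OK: size omitted when n > 1
```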
For a complete list of parameters, see Qwen-Image-Edit API reference.
Billing and rate limiting
Singapore region
Model | Unit price | Requests per second (RPS) limit | Number of concurrent tasks | Free quota
qwen-image-edit-plus | $0.03/image | 2 | No limit for sync APIs | 100 images
qwen-image-edit-plus-2025-10-30 | $0.03/image | 2 | No limit for sync APIs | 100 images
qwen-image-edit | $0.045/image | 2 | No limit for sync APIs | 100 images
Rate limits are shared by your Alibaba Cloud account and its RAM users.
Beijing region
Model | Unit price | Requests per second (RPS) limit | Number of concurrent tasks | Free quota
qwen-image-edit-plus | $0.028671/image | 2 | No limit for sync APIs | No free quota
qwen-image-edit-plus-2025-10-30 | $0.028671/image | 2 | No limit for sync APIs | No free quota
qwen-image-edit | $0.043/image | 2 | No limit for sync APIs | No free quota
Rate limits are shared by your Alibaba Cloud account and its RAM users.
Billing description:
Billing is based on the number of successfully generated images. Failed model calls or processing errors do not incur fees or consume your free quota.
You can enable the 'Stop on Free Quota Exhaustion' feature to avoid extra charges after your free quota is exhausted. For more information, see Free Quota.
API reference
For information about the input and output parameters, see the Qwen-Image-Edit API reference.
Error codes
If a call fails, see Error messages for troubleshooting.
FAQ
Q: Does qwen-image-edit support multi-turn conversational editing?
A: No, it does not. qwen-image-edit is designed for single-turn execution. Each call is an independent, stateless editing task, and the model does not store your editing history. To perform continuous edits, you can use the output image from a previous edit as the new input image and call the service again.
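The chaining approach described above can be sketched as a small helper that turns the previous output into the next single-turn request (`next_turn_messages` is an illustrative name; the messages shape matches the input structure documented earlier):

```python
def next_turn_messages(previous_output_url, instruction):
    """Build the messages for a follow-up edit: the previous output image
    becomes the single input image of a new, independent single-turn call."""
    return [{
        "role": "user",
        "content": [
            {"image": previous_output_url},
            {"text": instruction},
        ],
    }]

# Each iteration passes the result to MultiModalConversation.call(...) again;
# remember that output URLs expire after 24 hours, so chain or download promptly.
```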
Q: How do I view my model usage?
A: Model call information has a latency of up to one hour. One hour after a model call, you can go to the Model Observation (Singapore or Beijing) page to view metrics such as call usage, number of calls, and success rate. For more information, see How to view model call records.
For more information, see Image generation FAQ.