
Alibaba Cloud Model Studio: Qwen-Image-Edit API reference

Last Updated: Oct 31, 2025

The Qwen-Image-Edit model (qwen-image-edit-plus) supports multi-image input and output. You can use it to accurately modify text, add or remove objects, change subject actions, transfer styles, and enhance details in images.

Examples

Multi-image fusion

Inputs: Image 1, Image 2, Image 3

Prompt: Make the girl from Image 1 wear the black dress from Image 2 and sit in the pose from Image 3.

Inputs: Image 1, Image 2, Image 3

Prompt: Make the girl from Image 1 wear the necklace from Image 2 and carry the bag from Image 3 on her left shoulder.

Single-image editing

Input: original image (a depth map)

Prompt: Generate an image that matches the depth map, following this description: A dilapidated red bicycle is parked on a muddy path with a dense primeval forest in the background.

Input: original image

Prompt: Replace the words "HEALTH INSURANCE" on the letter blocks with "Tomorrow will be better".

Input: original image

Prompt: Replace the dotted shirt with a light blue shirt.

Input: original image

Prompt: Change the background of the image to Antarctica.

HTTP call

Before making a call, obtain an API key and set the API key as an environment variable.
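For example, on Linux or macOS you can set the variable in your shell before running any of the examples below (the key value here is a placeholder):

```shell
# Placeholder key: replace sk-xxx with your own Model Studio API key.
export DASHSCOPE_API_KEY="sk-xxx"
# Confirm the variable is visible to child processes:
echo "$DASHSCOPE_API_KEY"
```

On Windows, use `set DASHSCOPE_API_KEY=sk-xxx` (Command Prompt) or `$env:DASHSCOPE_API_KEY="sk-xxx"` (PowerShell) instead.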

To make calls using the SDK, install the DashScope SDK. The SDK is available for Python and Java.

Important

The Beijing and Singapore regions have separate API keys and request endpoints. Do not use them interchangeably. Cross-region calls cause authentication failures or service errors.

Singapore region: POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation

Beijing region: POST https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation

Request parameters

Single-image editing

This example uses the qwen-image-edit-plus model to generate two images.

The following is the URL for the Singapore region. If you use a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation

curl --location 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--data '{
    "model": "qwen-image-edit-plus",
    "input": {
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/fpakfo/image36.webp"
                    },
                    {
                        "text": "Generate an image that matches the depth map, following this description: A dilapidated red bicycle is parked on a muddy path with a dense primeval forest in the background."
                    }
                ]
            }
        ]
    },
    "parameters": {
        "n": 2,
        "negative_prompt": " ",
        "watermark": false
    }
}'

Multi-image fusion

This example uses the qwen-image-edit-plus model to generate two images.

The following is the URL for the Singapore region. If you use a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation

curl --location 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header 'Content-Type: application/json' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--data '{
    "model": "qwen-image-edit",
    "input": {
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"
                    },
                    {
                        "image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"
                    },
                    {
                        "image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"
                    },
                    {
                        "text": "The girl in Image 1 wears the black dress from Image 2 and sits down in the pose from Image 3."
                    }
                ]
            }
        ]
    },
    "parameters": {
        "n": 2,
        "negative_prompt": " ",
        "watermark": false
    }
}'
Headers

Content-Type string (Required)

The content type of the request. Set this parameter to application/json.

Authorization string (Required)

The identity authentication credentials for the request. This API uses a Model Studio API key for identity authentication. Example: Bearer sk-xxxx.

Request body

model string (Required)

The model to use. The following models are available:

qwen-image-edit-plus and qwen-image-edit-plus-2025-10-30: Support generating 1 to 6 images.

qwen-image-edit: Supports generating only one image.

input object (Required)

The input parameter object. It contains the following fields:

Properties

messages array (Required)

The content of the request, which is an array. Currently, only single-turn conversations are supported. Therefore, the array must contain exactly one object. This object contains the role and content properties.

Properties

role string (Required)

The role of the message sender. This must be set to user.

content array (Required)

The content of the message. It includes one to three images in the {"image": "..."} format and a single editing instruction in the {"text": "..."} format.

Properties

image string (Required)

The URL or Base64-encoded data of the input image. You can provide one to three images.

Image requirements:

  • Image format: JPG, JPEG, PNG, BMP, TIFF, or WEBP.

  • Image resolution: The width and height of the image must both be between 384 and 3,072 pixels.

  • Image size: No larger than 10 MB.

  • If a URL contains non-ASCII characters, such as Chinese characters, you must encode it before passing it in the request.

    URL encoding

    from urllib.parse import quote
    
    # Replace the following URL with the URL you want to encode.
    url = "https://example.com/search?q=test&page=1"
    encoded_url = quote(url, safe=':/?#[]@!$&\'()*+,;=%')
    print(f"Original URL: {url}")
    print(f"Encoded URL: {encoded_url}")

Method 1: Public URL: An HTTP or HTTPS image address that is accessible from the internet.

  • Example: https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/fpakfo/image36.webp

Method 2: Base64 encoding: The string must be in the data:{mime_type};base64,{base64_data} format.

  • {mime_type}: The media type of the image. It must correspond to the file format.

  • {base64_data}: The Base64-encoded string of the file.

  • Mapping between image formats and {mime_type} types:

    • JPEG/JPG: image/jpeg

    • PNG: image/png

    • BMP: image/bmp

    • TIFF: image/tiff

    • WEBP: image/webp

  • Code sample for image Base64 encoding:

    import base64
    import mimetypes
    
    # Format: data:{mime_type};base64,{base64_data}
    def encode_file(file_path):
        mime_type, _ = mimetypes.guess_type(file_path)
        with open(file_path, "rb") as image_file:
            encoded_string = base64.b64encode(image_file.read()).decode('utf-8')
        return f"data:{mime_type};base64,{encoded_string}"
    
    # Call the encoding function. Replace "/path/to/your/image.png" with the path to your local image file. Otherwise, the code will not run.
    image = encode_file("/path/to/your/image.png")
  • Example value: data:image/jpeg;base64,GDU7MtCZz... (This example is truncated for demonstration purposes.)

For complete code samples, see Python SDK or Java SDK.

text string (Required)

The image editing instruction, also known as the positive prompt. It describes the elements and visual features you want in the generated image.

When you edit multiple images, you must use descriptions such as "Image 1", "Image 2", and "Image 3" in the editing instruction to refer to the corresponding images. Otherwise, the editing results may not meet your expectations.

This parameter supports Chinese and English. The maximum length is 800 characters. Each Chinese character or letter is counted as one character. Content that exceeds the limit is automatically truncated.

Example: Make the girl from Image 1 wear the black dress from Image 2 and sit in the pose from Image 3. Keep her clothing, hairstyle, and expression unchanged, and ensure the action is natural and smooth.
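Because content beyond the 800-character limit is truncated server-side, a caller may prefer to clip prompts before sending so the cut point is under its own control. A minimal sketch (`clip_prompt` is an illustrative name, not part of the API):

```python
MAX_PROMPT_CHARS = 800  # documented limit; longer prompts are truncated server-side

def clip_prompt(text: str) -> str:
    # Illustrative client-side guard mirroring the documented truncation.
    return text[:MAX_PROMPT_CHARS]

print(len(clip_prompt("a" * 1000)))  # 800
```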

parameters object (Optional)

Additional parameters to control image generation.

Properties

n integer (Optional)

The number of images to generate. The default value is 1.

For qwen-image-edit-plus and qwen-image-edit-plus-2025-10-30, you can generate 1 to 6 images.

For qwen-image-edit, only one image can be generated.

negative_prompt string (Optional)

The negative prompt. It describes the content you do not want to see in the image and can be used to constrain the output.

This parameter supports Chinese and English. The maximum length is 500 characters. Each Chinese character or letter is counted as one character. Content that exceeds the limit is automatically truncated.

Example: low resolution, error, worst quality, low quality, disfigured, extra fingers, bad proportions.

watermark boolean (Optional)

Specifies whether to add a "Qwen-Image" watermark to the bottom-right corner of the image. The default value is false.

seed integer (Optional)

The random number seed. The value must be an integer in the range of [0, 2147483647].

Using the same seed parameter value helps ensure the stability of the generated content. If you do not specify this parameter, the algorithm uses a random number seed.

Note: The model generation process is probabilistic. Even if you use the same seed, the results are not guaranteed to be identical for each request.
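A hedged client-side range check mirroring the documented seed bounds (`valid_seed` is an illustrative name, not part of the API):

```python
# Documented seed range: [0, 2147483647].
MAX_SEED = 2_147_483_647

def valid_seed(seed) -> bool:
    # Illustrative guard: reject values the service would not accept.
    return isinstance(seed, int) and 0 <= seed <= MAX_SEED

print(valid_seed(12345), valid_seed(-1))  # True False
```

Persisting the seed you send alongside the prompt makes it possible to replay a request with similar (though, per the note above, not guaranteed identical) output.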

Response parameters

Successful task execution

Task data, such as the task status and image URLs, is retained for only 24 hours and is automatically purged after this period. You must save the generated images promptly.

{
    "output": {
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": [
                        {
                            "image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
                        },
                        {
                            "image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
                        }
                    ]
                }
            }
        ]
    },
    "usage": {
        "width": 1248,
        "image_count": 2,
        "height": 832
    },
    "request_id": "bf37ca26-0abe-98e4-8065-xxxxxx"
}
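A response shaped like the example above can be walked as follows. The dict literal is a trimmed stand-in for a real response, with placeholder URLs:

```python
# Trimmed stand-in for a real success response.
response = {
    "output": {"choices": [{
        "finish_reason": "stop",
        "message": {"role": "assistant", "content": [
            {"image": "https://example.com/a.png"},
            {"image": "https://example.com/b.png"},
        ]},
    }]},
    "usage": {"width": 1248, "image_count": 2, "height": 832},
}

# Collect every generated image URL from the first (and only) choice.
urls = [item["image"]
        for item in response["output"]["choices"][0]["message"]["content"]
        if "image" in item]
print(urls)
```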

Abnormal task execution

If a task fails, relevant information is returned. You can identify the cause of the failure from the `code` and `message` fields. For more information about how to resolve errors, see Error codes.

{
    "request_id": "31f808fd-8eef-9004-xxxxx",
    "code": "InvalidApiKey",
    "message": "Invalid API-key provided."
}
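Since a failed response carries `code` and `message` while a successful one carries `output` and `usage`, a caller can branch on the presence of a non-empty `code` field. A sketch (`summarize_result` is an illustrative name):

```python
def summarize_result(payload: dict) -> str:
    # Per the examples above, failures carry "code" and "message";
    # successes carry "output" and "usage" instead.
    if payload.get("code"):
        return f"failed: {payload['code']}: {payload['message']}"
    return "succeeded"

error = {"request_id": "31f808fd-8eef-9004-xxxxx",
         "code": "InvalidApiKey",
         "message": "Invalid API-key provided."}
print(summarize_result(error))  # failed: InvalidApiKey: Invalid API-key provided.
```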

output object

The results generated by the model.

Properties

choices array

A list of generated results.

Properties

finish_reason string

The reason the generation task stopped. A value of stop indicates a natural stop.

message object

The message returned by the model.

Properties

role string

The role of the message sender. The value is assistant.

content array

The content of the message, which contains information about the generated image.

Properties

image string

The URL of the generated image. The link is valid for 24 hours. Download and save the image promptly.

The generated image is in PNG format. It maintains the aspect ratio of the original image, and its total pixel count is approximately 1024 × 1024 (about one megapixel).

usage object

The resource usage for this request. This parameter is returned only when the request is successful.

Properties

image_count integer

The number of generated images. This value is the same as the `n` parameter in the request.

width integer

The width of the generated image in pixels.

height integer

The height of the generated image in pixels.

request_id string

The unique request ID. You can use this ID to trace and troubleshoot issues.

code string

The error code for a failed request. This parameter is not returned if the request is successful. For more information, see Error messages.

message string

The detailed information about a failed request. This parameter is not returned if the request is successful. For more information, see Error messages.

DashScope SDK call

SDK parameter names are largely consistent with those of the HTTP API, with the parameter structure adapted to each programming language's conventions. For a complete list of parameters, see the Qwen API Reference.

Python SDK call

Note
  • We recommend that you install the latest version of the DashScope Python SDK. Otherwise, runtime errors may occur. For more information, see Install or upgrade the SDK.

  • Asynchronous APIs are not supported.

Request examples

This example uses the qwen-image-edit-plus model to generate two images.

Pass an image using a public URL

import json
import os
import dashscope
from dashscope import MultiModalConversation

# The following is the URL for the Singapore region. If you use a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

# The model supports one to three input images.
messages = [
    {
        "role": "user",
        "content": [
            {"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"},
            {"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"},
            {"image": "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"},
            {"text": "Make the girl from Image 1 wear the black dress from Image 2 and sit in the pose from Image 3."}
        ]
    }
]

# The API Keys for the Singapore and Beijing regions are different. Get an API Key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
# If you have not configured the environment variable, replace the following line with your Model Studio API Key: api_key="sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY")

# The model supports only single-turn conversations and reuses the multi-turn conversation API.
# qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
response = MultiModalConversation.call(
    api_key=api_key,
    model="qwen-image-edit-plus",
    messages=messages,
    stream=False,
    n=2,
    watermark=False,
    negative_prompt=" "
)

if response.status_code == 200:
    # To view the full response, uncomment the following line.
    # print(json.dumps(response, ensure_ascii=False))
    for i, content in enumerate(response.output.choices[0].message.content):
        print(f"URL of output image {i+1}: {content['image']}")
else:
    print(f"HTTP status code: {response.status_code}")
    print(f"Error code: {response.code}")
    print(f"Error message: {response.message}")
    print("For more information, see the documentation: https://www.alibabacloud.com/help/en/model-studio/error-code")

Pass an image using Base64 encoding

import json
import os
import dashscope
from dashscope import MultiModalConversation
import base64
import mimetypes

# The following is the URL for the Singapore region. If you use a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'


# --- For Base64 encoding ---
# Format: data:{mime_type};base64,{base64_data}
def encode_file(file_path):
    mime_type, _ = mimetypes.guess_type(file_path)
    if not mime_type or not mime_type.startswith("image/"):
        raise ValueError("Unsupported or unrecognized image format")

    try:
        with open(file_path, "rb") as image_file:
            encoded_string = base64.b64encode(
                image_file.read()).decode('utf-8')
        return f"data:{mime_type};base64,{encoded_string}"
    except IOError as e:
        raise IOError(f"Error reading file: {file_path}, Error: {str(e)}")


# Get the Base64 encoding of the image.
# Call the encoding function. Replace "/path/to/your/image.png" with the path to your local image file. Otherwise, the code will not run.
image = encode_file("/path/to/your/image.png")

messages = [
    {
        "role": "user",
        "content": [
            {"image": image},
            {"text": "Generate an image that matches the depth map, following this description: A dilapidated red bicycle is parked on a muddy path with a dense primeval forest in the background."}
        ]
    }
]

# The API Keys for the Singapore and Beijing regions are different. Get an API Key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
# If you have not configured the environment variable, replace the following line with your Model Studio API Key: api_key="sk-xxx"
api_key = os.getenv("DASHSCOPE_API_KEY")


# The model supports only single-turn conversations and reuses the multi-turn conversation API.
# qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
response = MultiModalConversation.call(
    api_key=api_key,
    model="qwen-image-edit-plus",
    messages=messages,
    stream=False,
    n=2,
    watermark=False,
    negative_prompt=" "
)

if response.status_code == 200:
    # To view the full response, uncomment the following line.
    # print(json.dumps(response, ensure_ascii=False))
    for i, content in enumerate(response.output.choices[0].message.content):
        print(f"URL of output image {i+1}: {content['image']}")
else:
    print(f"HTTP status code: {response.status_code}")
    print(f"Error code: {response.code}")
    print(f"Error message: {response.message}")
    print("For more information, see the documentation: https://www.alibabacloud.com/help/en/model-studio/error-code")

Download an image from a URL

# You need to install requests to download the image: pip install requests
import requests


def download_image(image_url, save_path='output.png'):
    try:
        response = requests.get(image_url, stream=True, timeout=300)  # Set timeout
        response.raise_for_status()  # Raise an exception if the HTTP status code is not 200.
        with open(save_path, 'wb') as f:
            for chunk in response.iter_content(chunk_size=8192):
                f.write(chunk)
        print(f"Image downloaded successfully to: {save_path}")

    except requests.exceptions.RequestException as e:
        print(f"Image download failed: {e}")


image_url = "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
download_image(image_url, save_path='output.png')

Response example

The image link is valid for 24 hours. Download the image promptly.

Note: input_tokens and output_tokens are compatibility fields. Their values are currently fixed at 0.

{
    "status_code": 200,
    "request_id": "121d8c7c-360b-4d22-a976-6dbb8bxxxxxx",
    "code": "",
    "message": "",
    "output": {
        "text": null,
        "finish_reason": null,
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": [
                        {
                            "image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
                        },
                        {
                            "image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
                        }
                    ]
                }
            }
        ]
    },
    "usage": {
        "input_tokens": 0,
        "output_tokens": 0,
        "height": 1248,
        "image_count": 2,
        "width": 832
    }
}

Java SDK call

Note

Install the latest version of the DashScope Java SDK. Otherwise, a runtime error may occur. For more information, see Install or upgrade the SDK.

Request examples

The following example shows how to use the qwen-image-edit-plus model to generate two images.

Pass an image using a public URL

import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.utils.Constants;

import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.List;

public class QwenImageEdit {

    static {
        // The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1.
        Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
    }
    
    // The API keys for the Singapore and Beijing regions are different. To obtain an API key, visit https://www.alibabacloud.com/help/en/model-studio/get-api-key.
    // If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey="sk-xxx"
    static String apiKey = System.getenv("DASHSCOPE_API_KEY");

    public static void call() throws ApiException, NoApiKeyException, UploadFileException, IOException {

        MultiModalConversation conv = new MultiModalConversation();

        // The model supports one to three input images.
        MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
                .content(Arrays.asList(
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/thtclx/input1.png"),
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/iclsnx/input2.png"),
                        Collections.singletonMap("image", "https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20250925/gborgw/input3.png"),
                        Collections.singletonMap("text", "The girl in image 1 is wearing the black dress from image 2 and sitting in the pose from image 3.")
                )).build();
        // qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
        Map<String, Object> parameters = new HashMap<>();
        parameters.put("watermark", false);
        parameters.put("negative_prompt", " ");
        parameters.put("n", 2);

        MultiModalConversationParam param = MultiModalConversationParam.builder()
                .apiKey(apiKey)
                .model("qwen-image-edit-plus")
                .messages(Collections.singletonList(userMessage))
                .parameters(parameters)
                .build();

        MultiModalConversationResult result = conv.call(param);
        // To view the complete response, uncomment the following line.
        // System.out.println(JsonUtils.toJson(result));
        List<Map<String, Object>> contentList = result.getOutput().getChoices().get(0).getMessage().getContent();
        int imageIndex = 1;
        for (Map<String, Object> content : contentList) {
            if (content.containsKey("image")) {
                System.out.println("URL of output image " + imageIndex + ": " + content.get("image"));
                imageIndex++;
            }
        }
    }

    public static void main(String[] args) {
        try {
            call();
        } catch (ApiException | NoApiKeyException | UploadFileException | IOException e) {
            System.out.println(e.getMessage());
        }
    }
}

Pass an image using Base64 encoding

import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.utils.Constants;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Base64;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.List;

public class QwenImageEdit {

    static {
        // The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1.
        Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
    }
    
    // The API keys for the Singapore and Beijing regions are different. To obtain an API key, visit https://www.alibabacloud.com/help/en/model-studio/get-api-key.
    // If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey="sk-xxx"
    static String apiKey = System.getenv("DASHSCOPE_API_KEY");

    public static void call() throws ApiException, NoApiKeyException, UploadFileException, IOException {

        // Replace "/path/to/your/image.png" with the path to your local image file. Otherwise, the code will not run.
        String image = encodeFile("/path/to/your/image.png");

        MultiModalConversation conv = new MultiModalConversation();

        MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
                .content(Arrays.asList(
                        Collections.singletonMap("image", image),
                        Collections.singletonMap("text", "Generate an image that matches the depth map and follows this description: A dilapidated red bicycle is parked on a muddy path, with a dense primeval forest in the background.")
                )).build();
        // qwen-image-edit-plus supports outputting 1 to 6 images. This example shows how to output 2 images.
        Map<String, Object> parameters = new HashMap<>();
        parameters.put("watermark", false);
        parameters.put("negative_prompt", " ");
        parameters.put("n", 2);

        MultiModalConversationParam param = MultiModalConversationParam.builder()
                .apiKey(apiKey)
                .model("qwen-image-edit-plus")
                .messages(Collections.singletonList(userMessage))
                .parameters(parameters)
                .build();

        MultiModalConversationResult result = conv.call(param);
        // To view the complete response, uncomment the following line.
        // System.out.println(JsonUtils.toJson(result));
        List<Map<String, Object>> contentList = result.getOutput().getChoices().get(0).getMessage().getContent();
        int imageIndex = 1;
        for (Map<String, Object> content : contentList) {
            if (content.containsKey("image")) {
                System.out.println("URL of output image " + imageIndex + ": " + content.get("image"));
                imageIndex++;
            }
        }
    }

    /**
     * Encodes a file into a Base64 string.
     * @param filePath The file path.
     * @return A Base64 string in the format: data:{mime_type};base64,{base64_data}
     */
    public static String encodeFile(String filePath) {
        Path path = Paths.get(filePath);
        if (!Files.exists(path)) {
            throw new IllegalArgumentException("File does not exist: " + filePath);
        }
        // Detect the MIME type.
        String mimeType = null;
        try {
            mimeType = Files.probeContentType(path);
        } catch (IOException e) {
            throw new IllegalArgumentException("Cannot detect the file type: " + filePath);
        }
        if (mimeType == null || !mimeType.startsWith("image/")) {
            throw new IllegalArgumentException("Unsupported or unrecognized image format.");
        }
        // Read the file content and encode it.
        byte[] fileBytes = null;
        try{
            fileBytes = Files.readAllBytes(path);
        } catch (IOException e) {
            throw new IllegalArgumentException("Cannot read the file content: " + filePath);
        }

        String encodedString = Base64.getEncoder().encodeToString(fileBytes);
        return "data:" + mimeType + ";base64," + encodedString;
    }

    public static void main(String[] args) {
        try {
            call();
        } catch (ApiException | NoApiKeyException | UploadFileException | IOException e) {
            System.out.println(e.getMessage());
        }
    }
}

Download an image from a URL

import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
 
public class ImageDownloader {
    public static void downloadImage(String imageUrl, String savePath) {
        try {
            URL url = new URL(imageUrl);
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setConnectTimeout(5000);
            connection.setReadTimeout(300000);
            connection.setRequestMethod("GET");
            InputStream inputStream = connection.getInputStream();
            FileOutputStream outputStream = new FileOutputStream(savePath);
            byte[] buffer = new byte[8192];
            int bytesRead;
            while ((bytesRead = inputStream.read(buffer)) != -1) {
                outputStream.write(buffer, 0, bytesRead);
            }
            inputStream.close();
            outputStream.close();
 
            System.out.println("Image downloaded successfully to: " + savePath);
        } catch (Exception e) {
            System.err.println("Image download failed: " + e.getMessage());
        }
    }
 
    public static void main(String[] args) {
        String imageUrl = "http://dashscope-result-bj.oss-cn-beijing.aliyuncs.com/xxx?Expires=xxx";
        String savePath = "output.png";
        downloadImage(imageUrl, savePath);
    }
}

Response example

The image link is valid for 24 hours. Download the image promptly.

{
    "requestId": "46281da9-9e02-941c-ac78-be88b8xxxxxx",
    "usage": {
        "image_count": 2,
        "width": 1216,
        "height": 864
    },
    "output": {
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": [
                        {
                            "image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
                        },
                        {
                            "image": "https://dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com/xxx.png?Expires=xxx"
                        }
                    ]
                }
            }
        ]
    }
}

Billing and rate limiting

Singapore region

Rate limits are shared by the Alibaba Cloud account and its RAM users.

Model name                        Unit price     RPS limit   Concurrent tasks         Free quota
qwen-image-edit-plus              $0.03/image    2           No limit for sync APIs   100 images
qwen-image-edit-plus-2025-10-30   $0.03/image    2           No limit for sync APIs   100 images
qwen-image-edit                   $0.045/image   2           No limit for sync APIs   100 images

Beijing region

Rate limits are shared by the Alibaba Cloud account and its RAM users. No free quota is available in the Beijing region.

Model name                        Unit price        RPS limit   Concurrent tasks
qwen-image-edit-plus              $0.028671/image   2           No limit for sync APIs
qwen-image-edit-plus-2025-10-30   $0.028671/image   2           No limit for sync APIs
qwen-image-edit                   $0.043/image      2           No limit for sync APIs

Billing description:

  • You are charged based on the number of images that are successfully generated. For example, if a single request returns n images, the charge for that request is n × the unit price. A charge is applied only when the API response for a query returns a task_status of SUCCEEDED and the images are successfully generated.

  • Failed model calls or processing errors do not incur fees or consume the free quota.

  • You can enable the "Free quota only" feature to avoid additional charges after your free quota is exhausted. For more information, see Free Quota.
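The per-request charge described above is simple arithmetic: images successfully generated multiplied by the unit price. An illustrative calculation using the Singapore-region price for qwen-image-edit-plus:

```python
# Illustrative cost arithmetic: charge = successfully generated images x unit price.
UNIT_PRICE_USD = 0.03  # qwen-image-edit-plus, Singapore region
n = 2                  # images requested and successfully generated
print(f"${n * UNIT_PRICE_USD:.2f}")  # $0.06
```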

Configure image access permission

Images generated by the model are stored in Alibaba Cloud Object Storage Service (OSS). Each image is assigned a publicly accessible OSS link, such as https://dashscope-result-xx.oss-cn-xxxx.aliyuncs.com/xxx.png. You can use this link to view or download the image. The link is valid for only 24 hours.

If your business has high security requirements and cannot access public OSS links, you can configure an access whitelist. Add the following domain names to your whitelist to ensure that you can access the image links.

dashscope-result-bj.oss-cn-beijing.aliyuncs.com
dashscope-result-hz.oss-cn-hangzhou.aliyuncs.com
dashscope-result-sh.oss-cn-shanghai.aliyuncs.com
dashscope-result-wlcb.oss-cn-wulanchabu.aliyuncs.com
dashscope-result-zjk.oss-cn-zhangjiakou.aliyuncs.com
dashscope-result-sz.oss-cn-shenzhen.aliyuncs.com
dashscope-result-hy.oss-cn-heyuan.aliyuncs.com
dashscope-result-cd.oss-cn-chengdu.aliyuncs.com
dashscope-result-gz.oss-cn-guangzhou.aliyuncs.com
dashscope-result-wlcb-acdr-1.oss-cn-wulanchabu-acdr-1.aliyuncs.com

Error codes

If a call fails, see Error messages for troubleshooting.

FAQ

Q: Does qwen-image-edit support multi-turn conversational editing?

A: No, it does not. The qwen-image-edit model is designed for single-turn execution. Each call is an independent, stateless editing task, and the model does not store your editing history. To perform continuous edits, you can use the output image from a previous edit as the input image for a new request.
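Because each call is stateless, continuous editing means carrying the state yourself: feed each output URL back in as the next input. The sketch below is conceptual; `edit_image` is a hypothetical placeholder for the `MultiModalConversation.call` flow shown in the SDK examples above, not a real API.

```python
# Conceptual sketch of chained editing against a stateless API.
def edit_image(image_url: str, instruction: str) -> str:
    # A real implementation would call MultiModalConversation.call(...)
    # and return the generated image URL from the response. This placeholder
    # just records the chain of instructions for illustration.
    return f"{image_url}#edited:{instruction}"

current = "https://example.com/original.png"
for step in ["Change the background of the image to Antarctica",
             "Replace the dotted shirt with a light blue shirt"]:
    current = edit_image(current, step)
print(current)
```

Remember that output URLs expire after 24 hours, so download each intermediate image (or re-upload it as Base64) before chaining the next edit.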

Q: How do I view model usage?

A: Model call data is delayed by about one hour. One hour after a model call, you can go to the Model Observation (Singapore or Beijing) page to view metrics such as call usage, number of calls, and success rate. For more information, see How to view model call records.