
Alibaba Cloud Model Studio: Qwen-MT API reference

Last Updated: Jan 30, 2026

This topic describes the input and output parameters for calling Qwen-MT through the OpenAI compatible interface or the DashScope API.

References: Machine translation (Qwen-MT)

OpenAI compatible

Singapore region

base_url for SDK: https://dashscope-intl.aliyuncs.com/compatible-mode/v1

HTTP endpoint: POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions

Virginia region

base_url for SDK: https://dashscope-us.aliyuncs.com/compatible-mode/v1

HTTP endpoint: POST https://dashscope-us.aliyuncs.com/compatible-mode/v1/chat/completions

Beijing region

base_url for SDK: https://dashscope.aliyuncs.com/compatible-mode/v1

HTTP endpoint: POST https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions

You must first create an API key and configure the API key as an environment variable. If you use the OpenAI SDK to make calls, you need to install the SDK.

Request body

Basic usage

Python

import os
from openai import OpenAI

client = OpenAI(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    # The following is the base_url for the Singapore region.
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "No me reí después de ver este video"
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English"
}

completion = client.chat.completions.create(
    model="qwen-mt-plus",
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)

Node.js

// Node.js v18 or later is required. Run the code in an ES Module environment.
import OpenAI from "openai";

const openai = new OpenAI(
    {
        // If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey: "sk-xxx",
        apiKey: process.env.DASHSCOPE_API_KEY,
        // The following is the base_url for the Singapore region.
        baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
    }
);
const completion = await openai.chat.completions.create({
    model: "qwen-mt-plus", 
    messages: [
        { role: "user", content: "No me reí después de ver este video" }
    ],
    translation_options: {
        source_lang: "auto",
        target_lang: "English"
    }
});
console.log(JSON.stringify(completion));

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
    "model": "qwen-mt-plus",
    "messages": [{"role": "user", "content": "No me reí después de ver este video"}],
    "translation_options": {
      "source_lang": "auto",
      "target_lang": "English"
    }
}'

Term intervention

Python

import os
from openai import OpenAI

client = OpenAI(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    # The following is the base_url for the Singapore region.
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "Este conjunto de biosensores utiliza grafeno, un material novedoso. Su objetivo son los elementos químicos. Su agudo «sentido del olfato» le permite reflejar el estado de salud del cuerpo de forma más profunda y precisa."
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    "terms": [
        {
            "source": "biosensor",
            "target": "biological sensor"
        },
        {
            "source": "grafeno",
            "target": "graphene"
        },
        {
            "source": "elementos químicos",
            "target": "chemical elements"
        },
        {
            "source": "estado de salud del cuerpo",
            "target": "health status of the body"
        }
    ]
}

completion = client.chat.completions.create(
    model="qwen-mt-plus",  # This example uses qwen-mt-plus. You can replace the model name as needed.
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)

Node.js

// Node.js v18 or later is required. Run the code in an ES Module environment.
import OpenAI from "openai";

const openai = new OpenAI(
    {
        // If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey: "sk-xxx",
        apiKey: process.env.DASHSCOPE_API_KEY,
        // The following is the base_url for the Singapore region.
        baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
    }
);
const completion = await openai.chat.completions.create({
    model: "qwen-mt-plus",
    messages: [
        { role: "user", content: "Este conjunto de biosensores utiliza grafeno, un material novedoso. Su objetivo son los elementos químicos. Su agudo «sentido del olfato» le permite reflejar el estado de salud del cuerpo de forma más profunda y precisa." }
    ],
    translation_options: {
        source_lang: "auto",
        target_lang: "English",
        terms: [
            {
                "source": "biosensor",
                "target": "biological sensor"
            },
            {
                "source": "grafeno",
                "target": "graphene"
            },
            {
                "source": "elementos químicos",
                "target": "chemical elements"
            },
            {
                "source": "estado de salud del cuerpo",
                "target": "health status of the body"
            }
        ]
    }
});
console.log(JSON.stringify(completion));

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
  "model": "qwen-mt-plus",
  "messages": [
    {
      "role": "user",
      "content": "Este conjunto de biosensores utiliza grafeno, un material novedoso. Su objetivo son los elementos químicos. Su agudo «sentido del olfato» le permite reflejar el estado de salud del cuerpo de forma más profunda y precisa."
    }
  ],
  "translation_options": {
    "source_lang": "auto",
    "target_lang": "English",
    "terms": [
      {
        "source": "biosensor",
        "target": "biological sensor"
      },
      {
        "source": "grafeno",
        "target": "graphene"
      },
      {
        "source": "elementos químicos",
        "target": "chemical elements"
      },
      {
        "source": "estado de salud del cuerpo",
        "target": "health status of the body"
      }
    ]
  }
}'

Translation memory

Python

import os
from openai import OpenAI

client = OpenAI(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    # The following is the base_url for the Singapore region.
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "El siguiente comando muestra la información de la versión de Thrift instalada."
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    "tm_list": [
        {
            "source": "Puede utilizar uno de los siguientes métodos para consultar la versión del motor de un clúster:",
            "target": "You can use one of the following methods to query the engine version of a cluster:"
        },
        {
            "source": "La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0. Por lo tanto, recomendamos que la versión del cliente también sea la 0.9.0. Puede descargar Thrift 0.9.0 desde aquí. El paquete de código fuente descargado se utilizará posteriormente. Primero debe instalar el entorno de compilación de Thrift. Para la instalación desde el código fuente, puede consultar el sitio web oficial de Thrift.",
            "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."
        },
        {
            "source": "Puede instalar el SDK a través de PyPI. El comando de instalación es el siguiente:",
            "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"
        }
    ]
}

completion = client.chat.completions.create(
    model="qwen-mt-plus",  # This example uses qwen-mt-plus. You can replace the model name as needed.
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)

Node.js

// Node.js v18 or later is required. Run the code in an ES Module environment.
import OpenAI from "openai";

const openai = new OpenAI(
    {
        // If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey: "sk-xxx",
        apiKey: process.env.DASHSCOPE_API_KEY,
        // The following is the base_url for the Singapore region.
        baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
    }
);
const completion = await openai.chat.completions.create({
    model: "qwen-mt-plus",
    messages: [
        { role: "user", content: "El siguiente comando muestra la información de la versión de Thrift instalada." }
    ],
    translation_options: {
        source_lang: "auto",
        target_lang: "English",
        tm_list: [
            {
                "source": "Puede utilizar uno de los siguientes métodos para consultar la versión del motor de un clúster:",
                "target": "You can use one of the following methods to query the engine version of a cluster:"
            },
            {
                "source": "La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0. Por lo tanto, recomendamos que la versión del cliente también sea la 0.9.0. Puede descargar Thrift 0.9.0 desde aquí. El paquete de código fuente descargado se utilizará posteriormente. Primero debe instalar el entorno de compilación de Thrift. Para la instalación desde el código fuente, puede consultar el sitio web oficial de Thrift.",
                "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."
            },
            {
                "source": "Puede instalar el SDK a través de PyPI. El comando de instalación es el siguiente:",
                "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"
            }
        ]
    }
});
console.log(JSON.stringify(completion));

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
  "model": "qwen-mt-plus",
  "messages": [
    {
      "role": "user",
      "content": "El siguiente comando muestra la información de la versión de Thrift instalada."
    }
  ],
  "translation_options": {
    "source_lang": "auto",
    "target_lang": "English",
    "tm_list":[
          {"source": "Puede utilizar uno de los siguientes métodos para consultar la versión del motor de un clúster:", "target": "You can use one of the following methods to query the engine version of a cluster:"},
          {"source": "La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0. Por lo tanto, recomendamos que la versión del cliente también sea la 0.9.0. Puede descargar Thrift 0.9.0 desde aquí. El paquete de código fuente descargado se utilizará posteriormente. Primero debe instalar el entorno de compilación de Thrift. Para la instalación desde el código fuente, puede consultar el sitio web oficial de Thrift.", "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."},
          {"source": "Puede instalar el SDK a través de PyPI. El comando de instalación es el siguiente:", "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"}
    ]
  }
}'

Domain prompting

Python

import os
from openai import OpenAI

client = OpenAI(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    # The following is the base_url for the Singapore region.
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT."
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    "domains": "The sentence is from Ali Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
}

completion = client.chat.completions.create(
    model="qwen-mt-plus",  
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)

Node.js

// Node.js v18 or later is required. Run the code in an ES Module environment.
import OpenAI from "openai";

const openai = new OpenAI(
    {
        // If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey: "sk-xxx",
        apiKey: process.env.DASHSCOPE_API_KEY,
        // The following is the base_url for the Singapore region.
        baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
    }
);
const completion = await openai.chat.completions.create({
    model: "qwen-mt-plus",
    messages: [
        { role: "user", content: "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT." }
    ],
    translation_options: {
        source_lang: "auto",
        target_lang: "English",
        domains: "The sentence is from Ali Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
    }
});
console.log(JSON.stringify(completion));

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
  "model": "qwen-mt-plus",
  "messages": [
    {
      "role": "user",
      "content": "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT."
    }
  ],
  "translation_options": {
    "source_lang": "auto",
    "target_lang": "English",
    "domains": "The sentence is from Ali Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
  }
}'

model string (Required)

The model name. Supported models: qwen-mt-plus, qwen-mt-flash, qwen-mt-lite, and qwen-mt-turbo.

messages array (Required)

An array of messages that provides context to the model. Only user messages are supported.

Message type

User Message object (Required)

A user message that contains the sentence to be translated.

Properties

content string (Required)

The sentence to be translated.

role string (Required)

The role of the user message. This must be set to user.

stream boolean (Optional) Defaults to false.

Specifies whether to return the response in streaming output mode.

Valid values:

  • false: The entire response is returned after it is fully generated.

  • true: The response is returned in data chunks as it is generated. The client must read the chunks to reconstruct the complete response.

Note

Currently, only the qwen-mt-flash and qwen-mt-lite models support returning data incrementally. Each returned data chunk contains only the newly generated content. The qwen-mt-plus and qwen-mt-turbo models return data non-incrementally. Each returned data chunk contains the entire sequence generated so far. This behavior cannot be changed. For example:

I

I didn

I didn't

I didn't laugh

I didn't laugh after

...

stream_options object (Optional)

The configuration items for streaming output. This parameter takes effect only when stream is set to true.

Properties

include_usage boolean (Optional) Defaults to false.

Specifies whether to include token consumption information in the last data chunk.

Valid values:

  • true

  • false
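For reference, the following is a minimal streaming sketch with the OpenAI Python SDK that combines stream and stream_options. It assumes the Singapore base_url, the DASHSCOPE_API_KEY environment variable, and qwen-mt-flash so that the chunks are incremental.

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
stream = client.chat.completions.create(
    model="qwen-mt-flash",  # qwen-mt-flash and qwen-mt-lite stream incrementally
    messages=[{"role": "user", "content": "No me reí después de ver este video"}],
    stream=True,
    stream_options={"include_usage": True},
    extra_body={"translation_options": {"source_lang": "auto", "target_lang": "English"}},
)
for chunk in stream:
    if chunk.choices:
        # For qwen-mt-flash, each chunk carries only the newly generated text.
        print(chunk.choices[0].delta.content or "", end="", flush=True)
    elif chunk.usage:
        # With include_usage set to true, the last chunk has an empty choices array and token counts.
        print("\nTotal tokens:", chunk.usage.total_tokens)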

max_tokens integer (Optional)

The maximum number of tokens to generate. If the generated content exceeds this value, the response is truncated.

The default and maximum values are the maximum output length of the model. For more information, see Model selection.

seed integer (Optional)

The random number seed, which helps make results reproducible. If you use the same seed and other parameters with the same input, the model returns the same result as consistently as possible.

Value range: [0, 2³¹-1].

temperature float (Optional) Defaults to 0.65.

The sampling temperature, which controls the diversity of the generated text.

A higher temperature value results in more diverse text. A lower temperature value results in more deterministic text.

Value range: [0, 2)

Both temperature and top_p control the diversity of the generated text. Set only one of them.

top_p float (Optional) Defaults to 0.8.

The probability threshold for nucleus sampling, which controls the diversity of the generated text.

A higher top_p value results in more diverse text. A lower top_p value results in more deterministic text.

Value range: (0, 1.0]

Both temperature and top_p control the diversity of the generated text. Set only one of them.

top_k integer (Optional) Defaults to 1.

The size of the candidate set for sampling during generation. For example, if you set this parameter to 50, only the 50 tokens with the highest scores in a single generation are used to form the candidate set for random sampling. A larger value increases randomness. A smaller value increases determinism. If the value is None or greater than 100, the top_k policy is disabled and only the top_p policy takes effect.

The value must be greater than or equal to 0.

This parameter is not a standard OpenAI parameter. When you use the Python SDK, place this parameter in the extra_body object. For example: extra_body={"top_k": xxx}. When you use the Node.js SDK or make an HTTP call, pass it as a top-level parameter.

repetition_penalty float (Optional) Defaults to 1.0.

The penalty for repetition in consecutive sequences during model generation. A higher repetition_penalty value reduces repetition. A value of 1.0 indicates no penalty. The value must be greater than 0, but there is no strict value range.

This parameter is not a standard OpenAI parameter. When you use the Python SDK, place this parameter in the extra_body object. For example: extra_body={"repetition_penalty": xxx}. When you use the Node.js SDK or make an HTTP call, pass it as a top-level parameter.
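To illustrate the extra_body placement described above, the following is a minimal Python sketch that passes top_k and repetition_penalty together with translation_options. The parameter values are illustrative only.

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
completion = client.chat.completions.create(
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "No me reí después de ver este video"}],
    # Non-standard parameters go into extra_body in the Python SDK;
    # in the Node.js SDK or an HTTP call, pass them as top-level parameters instead.
    extra_body={
        "translation_options": {"source_lang": "auto", "target_lang": "English"},
        "top_k": 20,                # illustrative value
        "repetition_penalty": 1.05  # illustrative value
    },
)
print(completion.choices[0].message.content)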

translation_options object (Required)

The translation parameters to configure.

Properties

source_lang string (Required)

The full English name of the source language. For more information, see Supported languages. If you set this to auto, the model automatically detects the input language.

target_lang string (Required)

The full English name of the target language. For more information, see Supported languages.

terms array (Optional)

The array of terms to set when you use the Term intervention feature.

Properties

source string (Required)

The term in the source language.

target string (Required)

The term in the target language.

tm_list array (Optional)

The array of translation memories to set when you use the Translation memory feature.

Properties

source string (Required)

The statement in the source language.

target string (Required)

The statement in the target language.

domains string (Optional)

The domain prompt to set when you use the Domain prompting feature.

Domain prompts must be in English.

The translation_options parameter is not a standard OpenAI parameter. When you use the Python SDK, place it in the extra_body object. For example: extra_body={"translation_options": xxx}. When you use the Node.js SDK or make an HTTP call, pass it as a top-level parameter.
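The examples earlier in this topic set terms, tm_list, and domains separately. The following Python sketch assumes these sub-options can be combined in a single translation_options object passed through extra_body; the term and domain values are illustrative.

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    "terms": [{"source": "grafeno", "target": "graphene"}],
    "domains": "The sentence is from a materials-science context. Use precise technical terminology."
}
completion = client.chat.completions.create(
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "Este conjunto de biosensores utiliza grafeno, un material novedoso."}],
    extra_body={"translation_options": translation_options},
)
print(completion.choices[0].message.content)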

Chat response object (non-streaming output)

{
  "id": "chatcmpl-999a5d8a-f646-4039-968a-167743ae0f22",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "I didn't laugh after watching this video.",
        "refusal": null,
        "role": "assistant",
        "annotations": null,
        "audio": null,
        "function_call": null,
        "tool_calls": null
      }
    }
  ],
  "created": 1762346157,
  "model": "qwen-mt-plus",
  "object": "chat.completion",
  "service_tier": null,
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 9,
    "prompt_tokens": 53,
    "total_tokens": 62,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  }
}

id string

The unique ID of the request.

choices array

An array of content generated by the model.

Properties

finish_reason string

The reason why the model stopped generating content.

The following two scenarios apply:

  • stop: The output is complete.

  • length: The generation stopped because the output length limit was reached.

index integer

The index of the current object in the choices array.

message object

The message output by the model.

Properties

content string

The translation result from the model.

refusal string

This parameter is currently fixed to null.

role string

The role of the message. This is fixed to assistant.

audio object

This parameter is currently fixed to null.

function_call object

This parameter is currently fixed to null.

tool_calls array

This parameter is currently fixed to null.

created integer

The UNIX timestamp when the request was created.

model string

The model used for the request.

object string

This is always chat.completion.

service_tier string

This parameter is currently fixed to null.

system_fingerprint string

This parameter is currently fixed to null.

usage object

The token consumption information for the request.

Properties

completion_tokens integer

The number of tokens in the model output.

prompt_tokens integer

The number of tokens in the input.

total_tokens integer

The total number of tokens consumed. This is the sum of prompt_tokens and completion_tokens.

completion_tokens_details object

This parameter is currently fixed to null.

prompt_tokens_details object

This parameter is currently fixed to null.
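As a brief usage note, the following Python lines show how the fields above map to the response object returned by the OpenAI Python SDK. Here, completion is the object returned by any of the non-streaming Python examples in this topic.

print(completion.id)                                  # unique request ID
print(completion.choices[0].finish_reason)            # "stop" or "length"
print(completion.choices[0].message.content)          # the translation result
print(completion.usage.prompt_tokens,
      completion.usage.completion_tokens,
      completion.usage.total_tokens)                  # token consumption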

Chat response chunk object (streaming output)

Incremental output

{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": "", "function_call": null, "refusal": null, "role": "assistant", "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": "I", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": " didn", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": "'t", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": " laugh", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": " after", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": " watching", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": " this", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": " video", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": ".", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": null, "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": "", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": "stop", "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [{"delta": {"content": "", "function_call": null, "refusal": null, "role": null, "tool_calls": null}, "finish_reason": "stop", "index": 0, "logprobs": null}], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": null}
{"id": "chatcmpl-d8aa6596-b366-4ed0-9f6d-2e89247f554e", "choices": [], "created": 1762504029, "model": "qwen-mt-flash", "object": "chat.completion.chunk", "service_tier": null, "system_fingerprint": null, "usage": {"completion_tokens": 9, "prompt_tokens": 56, "total_tokens": 65, "completion_tokens_details": null, "prompt_tokens_details": null}}

Non-incremental output

{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"","function_call":null,"refusal":null,"role":"assistant","tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh after","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh after watching","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh after watching this","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh after watching this video","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh after watching this video.","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":null,"index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh after watching this video.","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[{"delta":{"content":"I didn’t laugh after watching this video.","function_call":null,"refusal":null,"role":null,"tool_calls":null},"finish_reason":"stop","index":0,"logprobs":null}],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-478e183e-cbdc-4ea0-aeae-4c2ba1d03e4d","choices":[],"created":1762346453,"model":"qwen-mt-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":{"completion_tokens":9,"prompt_tokens":56,"total_tokens":65,"completion_tokens_details":null,"prompt_tokens_details":null}}

id string

The unique ID of the call. Each chunk object has the same ID.

choices array

An array of content generated by the model. If include_usage is set to true, this array is empty in the last chunk.

Properties

delta object

The output content returned in streaming mode.

Properties

content string

The translation result. qwen-mt-flash and qwen-mt-lite provide incremental updates. qwen-mt-plus and qwen-mt-turbo provide non-incremental updates.

function_call object

This parameter is currently fixed to null.

refusal object

This parameter is currently fixed to null.

role string

The role of the message object. This has a value only in the first chunk.

finish_reason string

The model stops generating for one of the following three reasons:

  • null: Generation is still in progress.

  • stop: The output is complete.

  • length: The generation stopped because the output length limit was reached.

index integer

The index of the current response in the choices array.

created integer

The UNIX timestamp when the request was created. Each chunk has the same timestamp.

model string

The model used for the request.

object string

This is always chat.completion.chunk.

service_tier string

This parameter is currently fixed to null.

system_fingerprint string

This parameter is currently fixed to null.

usage object

The tokens consumed by the request. This is returned in the last chunk only when include_usage is true.

Properties

completion_tokens integer

The number of tokens in the model output.

prompt_tokens integer

The number of input tokens.

total_tokens integer

The total number of tokens. This is the sum of prompt_tokens and completion_tokens.

completion_tokens_details object

This parameter is currently fixed to null.

prompt_tokens_details object

This parameter is currently fixed to null.
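The following Python sketch shows one way to reassemble the translation from the chunk objects above. It assumes stream is the object returned by a stream=True call with include_usage enabled, and that you know whether the model streams incrementally (qwen-mt-flash, qwen-mt-lite) or non-incrementally (qwen-mt-plus, qwen-mt-turbo).

incremental = True  # True for qwen-mt-flash/qwen-mt-lite, False for qwen-mt-plus/qwen-mt-turbo
text = ""
for chunk in stream:
    if not chunk.choices:
        # Last chunk when include_usage is true: choices is empty, usage is populated.
        if chunk.usage:
            print("Total tokens:", chunk.usage.total_tokens)
        continue
    delta = chunk.choices[0].delta.content or ""
    if incremental:
        text += delta   # each chunk adds only the newly generated text
    elif delta:
        text = delta    # each chunk carries the entire sequence generated so far
print(text)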

DashScope

Singapore

HTTP endpoint: POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation

Set base_url to:

Python code

dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

Java code

  • Method 1:

    import com.alibaba.dashscope.protocol.Protocol;
    Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
  • Method 2:

    import com.alibaba.dashscope.utils.Constants;
    Constants.baseHttpApiUrl="https://dashscope-intl.aliyuncs.com/api/v1";

Virginia

HTTP endpoint: POST https://dashscope-us.aliyuncs.com/api/v1/services/aigc/text-generation/generation

Set base_url to:

Python code

dashscope.base_http_api_url = 'https://dashscope-us.aliyuncs.com/api/v1'

Java code

  • Method 1:

    import com.alibaba.dashscope.protocol.Protocol;
    Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-us.aliyuncs.com/api/v1");
  • Method 2:

    import com.alibaba.dashscope.utils.Constants;
    Constants.baseHttpApiUrl="https://dashscope-us.aliyuncs.com/api/v1";

Beijing

HTTP endpoint: POST https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation

You do not need to configure base_url for SDK calls. The default value is https://dashscope.aliyuncs.com/api/v1.

You must create an API key and configure it as an environment variable. If you use the DashScope SDK, you must also install it.

Request body

Basic usage

Python

import os
import dashscope

# The following is the base_url for the Singapore region.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
    {
        "role": "user",
        "content": "No me reí después de ver este video"
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
}
response = dashscope.Generation.call(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",  # This example uses qwen-mt-plus. You can replace the model name as needed.
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)

Java

// DashScope SDK 2.20.6 or later is required.
import java.lang.System;
import java.util.Collections;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.aigc.generation.TranslationOptions;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.protocol.Protocol;

public class Main {
    public static GenerationResult callWithMessage() throws ApiException, NoApiKeyException, InputRequiredException {
        // The following is the base_url for the Singapore region.
        Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
        Message userMsg = Message.builder()
                .role(Role.USER.getValue())
                .content("No me reí después de ver este video")
                .build();
        TranslationOptions options = TranslationOptions.builder()
                .sourceLang("auto")
                .targetLang("English")
                .build();
        GenerationParam param = GenerationParam.builder()
                // If you have not configured the environment variable, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                .model("qwen-mt-plus")
                .messages(Collections.singletonList(userMsg))
                .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                .translationOptions(options)
                .build();
        return gen.call(param);
    }
    public static void main(String[] args) {
        try {
            GenerationResult result = callWithMessage();
            System.out.println(result.getOutput().getChoices().get(0).getMessage().getContent());
        } catch (ApiException | NoApiKeyException | InputRequiredException e) {
            System.err.println("Error message: "+e.getMessage());
            e.printStackTrace();
        } finally {
            System.exit(0);
        }
    }
}

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
  "model": "qwen-mt-plus",
  "input": {
    "messages": [
      {
        "content": "No me reí después de ver este video",
        "role": "user"
      }
    ]
  },
  "parameters": {
    "translation_options": {
      "source_lang": "auto",
      "target_lang": "English"
    }
  }
}'

Term intervention

Python

import os
import dashscope

# The following is the base_url for the Singapore region.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
    {
        "role": "user",
        "content": "Este conjunto de biosensores utiliza grafeno, un material novedoso. Su objetivo son los elementos químicos. Su agudo «sentido del olfato» le permite reflejar el estado de salud del cuerpo de forma más profunda y precisa."
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    "terms": [
        {
            "source": "biosensor",
            "target": "biological sensor"
        },
        {
            "source": "grafeno",
            "target": "graphene"
        },
        {
            "source": "elementos químicos",
            "target": "chemical elements"
        },
        {
            "source": "estado de salud del cuerpo",
            "target": "health status of the body"
        }
    ]
}
response = dashscope.Generation.call(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",  # This example uses qwen-mt-plus. You can replace the model name as needed.
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)

Java

// DashScope SDK 2.20.6 or later is required.
import java.lang.System;
import java.util.Collections;
import java.util.Arrays;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.aigc.generation.TranslationOptions;
import com.alibaba.dashscope.aigc.generation.TranslationOptions.Term;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.protocol.Protocol;

public class Main {
    public static GenerationResult callWithMessage() throws ApiException, NoApiKeyException, InputRequiredException {
        // The following is the base_url for the Singapore region.
        Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
        Message userMsg = Message.builder()
                .role(Role.USER.getValue())
                .content("Este conjunto de biosensores utiliza grafeno, un material novedoso. Su objetivo son los elementos químicos. Su agudo «sentido del olfato» le permite reflejar el estado de salud del cuerpo de forma más profunda y precisa.")
                .build();
        Term term1 = Term.builder()
                .source("biosensor")
                .target("biological sensor")
                .build();
        Term term2 = Term.builder()
                .source("estado de salud del cuerpo")
                .target("health status of the body")
                .build();
        TranslationOptions options = TranslationOptions.builder()
                .sourceLang("auto")
                .targetLang("English")
                .terms(Arrays.asList(term1, term2))
                .build();
        GenerationParam param = GenerationParam.builder()
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                .model("qwen-mt-plus")
                .messages(Collections.singletonList(userMsg))
                .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                .translationOptions(options)
                .build();
        return gen.call(param);
    }
    public static void main(String[] args) {
        try {
            GenerationResult result = callWithMessage();
            System.out.println(result.getOutput().getChoices().get(0).getMessage().getContent());
        } catch (ApiException | NoApiKeyException | InputRequiredException e) {
            System.err.println("Error message: "+e.getMessage());
        }
        System.exit(0);
    }
}

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
  "model": "qwen-mt-plus",
  "input": {
    "messages": [
      {
        "content": "Este conjunto de biosensores utiliza grafeno, un material novedoso. Su objetivo son los elementos químicos. Su agudo «sentido del olfato» le permite reflejar el estado de salud del cuerpo de forma más profunda y precisa.",
        "role": "user"
      }
    ]
  },
  "parameters": {
    "translation_options": {
      "source_lang": "auto",
      "target_lang": "English",
      "terms": [
        {
          "source": "biosensor",
          "target": "biological sensor"
        },
        {
          "source": "estado de salud del cuerpo",
          "target": "health status of the body"
        }
      ]
    }
  }
}'

Translation memory

Python

import os
import dashscope

# The following is the base_url for the Singapore region.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
    {
        "role": "user",
        "content": "El siguiente comando muestra la información de la versión de Thrift instalada."
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    "tm_list": [
        {
            "source": "Puede utilizar uno de los siguientes métodos para consultar la versión del motor de un clúster:",
            "target": "You can use one of the following methods to query the engine version of a cluster:"
        },
        {
            "source": "La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0. Por lo tanto, recomendamos que la versión del cliente también sea la 0.9.0. Puede descargar Thrift 0.9.0 desde aquí. El paquete de código fuente descargado se utilizará posteriormente. Primero debe instalar el entorno de compilación de Thrift. Para la instalación desde el código fuente, puede consultar el sitio web oficial de Thrift.",
            "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."
        },
        {
            "source": "Puede instalar el SDK a través de PyPI. El comando de instalación es el siguiente:",
            "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"
        }
    ]}
response = dashscope.Generation.call(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",  # This example uses qwen-mt-plus. You can replace the model name as needed.
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)

Java

// DashScope SDK 2.20.6 or later is required.
import java.lang.System;
import java.util.Collections;
import java.util.Arrays;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.aigc.generation.TranslationOptions;
import com.alibaba.dashscope.aigc.generation.TranslationOptions.Tm;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.protocol.Protocol;

public class Main {
    public static GenerationResult callWithMessage() throws ApiException, NoApiKeyException, InputRequiredException {
        // The following is the base_url for the Singapore region.
        Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
        Message userMsg = Message.builder()
                .role(Role.USER.getValue())
                .content("El siguiente comando muestra la información de la versión de Thrift instalada.")
                .build();
        Tm tm1 = Tm.builder()
                .source("Puede utilizar uno de los siguientes métodos para consultar la versión del motor de un clúster:")
                .target("You can use one of the following methods to query the engine version of a cluster:")
                .build();
        Tm tm2 = Tm.builder()
                .source("La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0. Por lo tanto, recomendamos que la versión del cliente también sea la 0.9.0. Puede descargar Thrift 0.9.0 desde aquí. El paquete de código fuente descargado se utilizará posteriormente. Primero debe instalar el entorno de compilación de Thrift. Para la instalación desde el código fuente, puede consultar el sitio web oficial de Thrift.")
                .target("The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website.")
                .build();
        Tm tm3 = Tm.builder()
                .source("Puede instalar el SDK a través de PyPI. El comando de instalación es el siguiente:")
                .target("You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:")
                .build();
        TranslationOptions options = TranslationOptions.builder()
                .sourceLang("auto")
                .targetLang("English")
                .tmList(Arrays.asList(tm1, tm2, tm3))
                .build();
        GenerationParam param = GenerationParam.builder()
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                .model("qwen-mt-plus")
                .messages(Collections.singletonList(userMsg))
                .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                .translationOptions(options)
                .build();
        return gen.call(param);
    }
    public static void main(String[] args) {
        try {
            GenerationResult result = callWithMessage();
            System.out.println(result.getOutput().getChoices().get(0).getMessage().getContent());
        } catch (ApiException | NoApiKeyException | InputRequiredException e) {
            System.err.println("Error message: "+e.getMessage());
        }
        System.exit(0);
    }
}

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
  "model": "qwen-mt-plus",
  "input": {
    "messages": [
      {
        "content": "El siguiente comando muestra la información de la versión de Thrift instalada.",
        "role": "user"
      }
    ]
  },
  "parameters": {
    "translation_options": {
      "source_lang": "auto",
      "target_lang": "English",
      "tm_list":[
          {"source": "Puede utilizar uno de los siguientes métodos para consultar la versión del motor de un clúster:", "target": "You can use one of the following methods to query the engine version of a cluster:"},
          {"source": "La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0. Por lo tanto, recomendamos que la versión del cliente también sea la 0.9.0. Puede descargar Thrift 0.9.0 desde aquí. El paquete de código fuente descargado se utilizará posteriormente. Primero debe instalar el entorno de compilación de Thrift. Para la instalación desde el código fuente, puede consultar el sitio web oficial de Thrift.", "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."},
          {"source": "Puede instalar el SDK a través de PyPI. El comando de instalación es el siguiente:", "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"}
      ]
    }
  }
}'

Domain prompting

Python

import os
import dashscope

# The following is the base_url for the Singapore region.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
    {
        "role": "user",
        "content": "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT."
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    "domains": "The sentence is from Ali Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
}
response = dashscope.Generation.call(
    # If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",  
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)

Java

// DashScope SDK 2.20.6 or later is required.
import java.lang.System;
import java.util.Collections;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.aigc.generation.TranslationOptions;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.protocol.Protocol;

public class Main {
    public static GenerationResult callWithMessage() throws ApiException, NoApiKeyException, InputRequiredException {
        // The following is the base_url for the Singapore region.
        Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
        Message userMsg = Message.builder()
                .role(Role.USER.getValue())
                .content("La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT.")
                .build();
        TranslationOptions options = TranslationOptions.builder()
                .sourceLang("auto")
                .targetLang("English")
                .domains("The sentence is from Ali Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style.")
                .build();
        GenerationParam param = GenerationParam.builder()
                // If you have not configured the environment variable, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
                .apiKey(System.getenv("DASHSCOPE_API_KEY"))
                .model("qwen-mt-plus")
                .messages(Collections.singletonList(userMsg))
                .resultFormat(GenerationParam.ResultFormat.MESSAGE)
                .translationOptions(options)
                .build();
        return gen.call(param);
    }
    public static void main(String[] args) {
        try {
            GenerationResult result = callWithMessage();
            System.out.println(result.getOutput().getChoices().get(0).getMessage().getContent());
        } catch (ApiException | NoApiKeyException | InputRequiredException e) {
            System.err.println("Error message: "+e.getMessage());
        }
        System.exit(0);
    }
}

curl

Each region has a different request endpoint and API key. The following is the request endpoint for the Singapore region.

curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
  "model": "qwen-mt-plus",
  "input": {
    "messages": [
      {
        "content": "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT.",
        "role": "user"
      }
    ]
  },
  "parameters": {
    "translation_options": {
      "source_lang": "auto",
      "target_lang": "English",
      "domains": "The sentence is from Ali Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."}
  }
}'

model string (Required)

The model name. Supported models: qwen-mt-plus, qwen-mt-flash, qwen-mt-lite, and qwen-mt-turbo.

messages array (Required)

An array of messages that provides context to the model. Only user messages are supported.

Message type

User Message object (Required)

A user message that contains the sentence to be translated.

Properties

content string (Required)

The sentence to be translated.

role string (Required)

The role of the user message. This must be set to user.

max_tokens integer (Optional)

The maximum number of tokens to generate. If the generated content exceeds this value, the response is truncated.

Both the default value and the maximum value are the model's maximum output length. For more information, see Model selection.

In the Java SDK, the parameter is maxTokens. For HTTP calls, place max_tokens in the parameters object.
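
For reference, a minimal Python sketch (DashScope Python SDK, with the Singapore base_url used in the earlier examples) that passes max_tokens with the call; the limit of 100 tokens is illustrative only.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "El siguiente comando muestra la información de la versión de Thrift instalada."}],
    result_format='message',
    # Illustrative limit: the translation is truncated if it exceeds 100 tokens.
    max_tokens=100,
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
print(response.output.choices[0].message.content)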

seed integer (Optional)

The random number seed, which makes generation reproducible. If you pass the same seed and keep the other input and parameters unchanged, the model returns the same result as consistently as possible.

Value range: [0, 2^31-1].

When you make an HTTP call, place seed in the parameters object.
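
A minimal Python sketch that fixes the seed; 1234 is an arbitrary illustrative value.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT."}],
    result_format='message',
    seed=1234,  # Arbitrary fixed seed so repeated calls return the same result as consistently as possible.
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
print(response.output.choices[0].message.content)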

temperature float (Optional) Defaults to 0.65.

The sampling temperature, which controls the diversity of the generated text.

A higher temperature value results in more diverse text. A lower temperature value results in more deterministic text.

Value range: [0, 2)

Both temperature and top_p control the diversity of the generated text. Set only one of them.

When you make an HTTP call, place temperature in the parameters object.

top_p float (Optional) Defaults to 0.8.

The probability threshold for nucleus sampling, which controls the diversity of the generated text.

A higher top_p value results in more diverse text. A lower top_p value results in more deterministic text.

Value range: (0, 1.0]

Both temperature and top_p control the diversity of the generated text. Set only one of them.

In the Java SDK, the parameter is topP. When you make an HTTP call, place top_p in the parameters object.
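
A minimal Python sketch that lowers temperature for a more deterministic translation and leaves top_p at its default, following the guidance to set only one of the two; 0.2 is an illustrative value.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "El siguiente comando muestra la información de la versión de Thrift instalada."}],
    result_format='message',
    temperature=0.2,  # Illustrative value: lower temperature, more deterministic output.
    # top_p=0.5,      # Alternative: adjust top_p instead of temperature, not both.
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
print(response.output.choices[0].message.content)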

repetition_penalty float (Optional) Defaults to 1.0.

The penalty for repetition in consecutive sequences during model generation. A higher repetition_penalty value reduces repetition. A value of 1.0 indicates no penalty. The value must be greater than 0, but there is no strict value range.

In the Java SDK, the parameter is repetitionPenalty. For HTTP calls, add repetition_penalty to the parameters object.
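
A minimal Python sketch with an illustrative repetition_penalty of 1.1 to discourage repeated phrases in the output.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "La segunda instrucción SELECT devuelve un número que indica la cantidad de filas que habría devuelto la primera instrucción SELECT si no se hubiera utilizado la cláusula LIMIT."}],
    result_format='message',
    repetition_penalty=1.1,  # Illustrative value greater than 1.0 to reduce repetition.
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
print(response.output.choices[0].message.content)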

top_k integer (Optional) Defaults to 1.

The size of the candidate set for sampling during generation. For example, if you set this parameter to 50, only the 50 tokens with the highest scores in a single generation are used to form the candidate set for random sampling. A larger value increases randomness. A smaller value increases determinism. If the value is None or greater than 100, the top_k policy is disabled and only the top_p policy takes effect.

The value must be greater than or equal to 0.

In the Java SDK, the parameter is topK. When you make an HTTP call, set top_k in the parameters object.
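
A minimal Python sketch with an illustrative top_k of 20; a value of None or greater than 100 disables the top_k policy.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "El siguiente comando muestra la información de la versión de Thrift instalada."}],
    result_format='message',
    top_k=20,  # Illustrative value: sample from the 20 highest-scoring tokens.
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
print(response.output.choices[0].message.content)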

stream boolean (Optional)

Specifies whether to return the response in streaming output mode.

Valid values:

  • false: The entire response is returned after it is fully generated.

  • true: The response is returned in data chunks as it is generated. The client must read the chunks to reconstruct the complete response.

Note

Currently, only the qwen-mt-flash and qwen-mt-lite models support returning data incrementally. Each returned data chunk contains only the newly generated content. The qwen-mt-plus and qwen-mt-turbo models return data non-incrementally. Each returned data chunk contains the entire sequence generated so far. This behavior cannot be changed. For example:

I

I didn

I didn't

I didn't laugh

I didn't laugh after

...

This parameter is supported only by the Python SDK. To implement streaming output with the Java SDK, call the streamCall interface. To implement streaming output with an HTTP call, set X-DashScope-SSE to enable in the header.
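
A minimal Python streaming sketch. With qwen-mt-plus, each chunk carries the full sequence generated so far, so the last chunk already contains the complete translation.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
responses = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "El siguiente comando muestra la información de la versión de Thrift instalada."}],
    result_format='message',
    stream=True,  # Returns a generator of partial responses.
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
content = ""
for response in responses:
    # qwen-mt-plus returns non-incremental chunks: each contains the full sequence so far.
    content = response.output.choices[0].message.content
    print(content)
print("Final translation:", content)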

translation_options object (Required)

The translation parameters to configure.

Properties

source_lang string (Required)

The full English name of the source language. For more information, see Supported languages. If you set this to auto, the model automatically detects the input language.

target_lang string (Required)

The full English name of the target language. For more information, see Supported languages.

terms array (Optional)

The array of terms to set when you use the Term intervention feature.

Properties

source string (Required)

The term in the source language.

target string (Required)

The term in the target language.

tm_list array (Optional)

The array of translation memories to set when you use the Translation memory feature.

Properties

source string (Required)

The statement in the source language.

target string (Required)

The statement in the target language.

domains string (Optional)

The domain prompt to set when you use the Domain prompting feature.

Domain prompts must be in English.

In the Java SDK, the parameter is translationOptions. When you make an HTTP call, place translation_options in the parameters object.
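
A minimal Python sketch of term intervention through translation_options; the term pair shown is illustrative.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
translation_options = {
    "source_lang": "auto",
    "target_lang": "English",
    # Illustrative term pair: force a fixed translation for a product name.
    "terms": [
        {"source": "HBase en la nube", "target": "ApsaraDB for HBase"}
    ]
}
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "La versión de Thrift utilizada por nuestro HBase en la nube es la 0.9.0."}],
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)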

Chat response object (same for streaming and non-streaming output)

{
  "status_code": 200,
  "request_id": "9b4ec3b2-6d29-40a6-a08b-7e3c9a51c289",
  "code": "",
  "message": "",
  "output": {
    "text": null,
    "finish_reason": "stop",
    "choices": [
      {
        "finish_reason": "stop",
        "message": {
          "role": "assistant",
          "content": "I didn't laugh after watching this video."
        }
      }
    ],
    "model_name": "qwen-mt-plus"
  },
  "usage": {
    "input_tokens": 53,
    "output_tokens": 9,
    "total_tokens": 62
  }
}

status_code string

The status code of the request. A value of 200 indicates that the request was successful. Any other value indicates that the request failed.

The Java SDK does not return this parameter. If the call fails, an exception is thrown. The exception message contains the content of status_code and message.

request_id string

The unique ID of the call.

In the Java SDK, the returned parameter is requestId.

code string

The error code. This is empty if the call is successful.

Only the Python SDK returns this parameter.

output object

The information about the call result.

Properties

text string

This parameter is currently fixed to null.

finish_reason string

The reason why the model stopped generating content. Valid values:

  • The value is null during generation.

  • stop: The model stopped generating content naturally.

  • length: The generation stopped because the output length limit was reached.

choices array

The output information from the model.

Properties

finish_reason string

The reason why the model stopped generating content. Valid values:

  • The value is null during generation.

  • stop: The model stopped generating content naturally.

  • length: The generation stopped because the output length limit was reached.

message object

The message object output by the model.

Properties

role string

The role of the output message. This is fixed to assistant.

content string

The translation result.

model_name string

The name of the model used for this request.

usage object

The token usage information for the request.

Properties

input_tokens integer

The number of input tokens.

output_tokens integer

The number of output tokens.

total_tokens integer

The total number of tokens. This value is the sum of input_tokens and output_tokens.
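
A minimal Python sketch showing how these fields can be read from a successful response; the attribute paths mirror the JSON structure shown above.

import os
import dashscope

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "El siguiente comando muestra la información de la versión de Thrift instalada."}],
    result_format='message',
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
print("Translation:", response.output.choices[0].message.content)
print("Finish reason:", response.output.choices[0].finish_reason)
print("Input tokens:", response.usage.input_tokens)
print("Output tokens:", response.usage.output_tokens)
print("Total tokens:", response.usage.total_tokens)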

Error codes

If the model call fails and an error message is returned, see Error messages to resolve the issue.
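
A minimal Python error-handling sketch: check status_code before reading the output, and log request_id, code, and message on failure.

import os
import dashscope
from http import HTTPStatus

# Assumes the Singapore base_url used in the earlier examples.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-plus",
    messages=[{"role": "user", "content": "El siguiente comando muestra la información de la versión de Thrift instalada."}],
    result_format='message',
    translation_options={"source_lang": "auto", "target_lang": "English"}
)
if response.status_code == HTTPStatus.OK:
    print(response.output.choices[0].message.content)
else:
    # code and message identify the error; see Error messages for the resolution.
    print(f"Request failed. request_id: {response.request_id}, "
          f"code: {response.code}, message: {response.message}")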