Qwen-MT is a large language model for machine translation built on Qwen. It specializes in Chinese-English translation and in multilingual translation between Chinese or English and 24 other languages. Qwen-MT also provides capabilities such as terminology intervention, domain prompting, and translation memory to improve translation quality in complex scenarios.
If you need to use this model, submit a ticket and make sure to mention "request to access the Qwen-MT model" in the ticket.
Supported models
For higher translation quality, we recommend qwen-mt-plus. For faster speed or lower cost, we recommend qwen-mt-turbo.
Name | Context (tokens) | Maximum input (tokens) | Maximum output (tokens) | Input price (per million tokens) | Output price (per million tokens) | Free quota |
qwen-mt-plus | 2,048 | 1,024 | 1,024 | $2.46 | $7.37 | 500,000 tokens, valid for 180 days after activation |
qwen-mt-turbo | 2,048 | 1,024 | 1,024 | $0.16 | $0.49 | 500,000 tokens, valid for 180 days after activation |
Usage notes
Because it is specialized for translation, Qwen-MT differs from general-purpose text generation models in the following ways:
You need to pass parameters such as the source language (source_lang) and target language (target_lang) through translation_options. If you are not sure of the source language, or the source text contains multiple languages, set source_lang to "auto" and the model will detect the source language automatically. For specific usage, see the code in the following sections.
System messages and multi-round conversations are not supported. The messages array must contain exactly one user message, which holds the source text.
Currently, DashScope SDK for Java does not support Qwen-MT.
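Taken together, these notes determine the minimal shape of a request body for the OpenAI-compatible endpoint (a sketch; the model name and source text are illustrative, and the DashScope endpoint nests translation_options under parameters instead):

```python
# Minimal request payload shape for Qwen-MT (OpenAI-compatible endpoint).
# messages must hold exactly one user message: the source text.
payload = {
    "model": "qwen-mt-turbo",
    "messages": [
        {"role": "user", "content": "我看到这个视频后没有笑"}  # single user message only
    ],
    "translation_options": {
        "source_lang": "auto",   # or an explicit language such as "Chinese"
        "target_lang": "English",
    },
}

# The single-message constraint can be checked before sending.
assert len(payload["messages"]) == 1
assert payload["messages"][0]["role"] == "user"
```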
Prerequisites
You must first obtain an API key and set it as an environment variable. If you use the OpenAI SDK or DashScope SDK, you must also install the SDK.
Simple example
OpenAI compatible
Here is a simple example of translating from Chinese to English. By referring to Supported languages, you can set source_lang to "Chinese" and target_lang to "English", then pass the source text "我看到这个视频后没有笑" in the user message. The sample below sets source_lang to "auto" instead, letting the model detect the source language.
Sample request
import os
from openai import OpenAI

client = OpenAI(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "我看到这个视频后没有笑"
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English"
}
completion = client.chat.completions.create(
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-mt-turbo",
"messages": [{"role": "user", "content": "看完这个视频我没有笑"}],
"translation_options": {
"source_lang": "auto",
"target_lang": "English"
}
}'
Sample response
I didn't laugh after watching this video.
DashScope
Here is a simple example of translating from Chinese to English. By referring to Supported languages, you can set source_lang to "Chinese" and target_lang to "English", then pass the source text "我看到这个视频后没有笑" in the user message. The sample below sets source_lang to "auto" instead, letting the model detect the source language.
Sample request
import os
import dashscope

dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [
    {
        "role": "user",
        "content": "我看到这个视频后没有笑"
    }
]
translation_options = {
    "source_lang": "auto",
    "target_lang": "English"
}
response = dashscope.Generation.call(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
    "model": "qwen-mt-turbo",
    "input": {
        "messages": [
            {
                "content": "我看到这个视频后没有笑",
                "role": "user"
            }
        ]
    },
    "parameters": {
        "translation_options": {
            "source_lang": "auto",
            "target_lang": "English"
        }
    }
}'
Sample response
I didn't laugh after watching this video.
Streaming output
In streaming output mode, the model does not return the final response all at once. Instead, it emits intermediate results that you can read as they are generated, which shortens the wait for the model's response.
OpenAI compatible
Sample request
import os
from openai import OpenAI

client = OpenAI(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "我看到这个视频后没有笑"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English"
}
completion = client.chat.completions.create(
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    stream=True,
    extra_body={
        "translation_options": translation_options
    }
)
for chunk in completion:
    print(chunk.choices[0].delta.content)
curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-mt-turbo",
"messages": [{"role": "user", "content": "看完这个视频我没有笑"}],
"stream": true,
"translation_options": {
"source_lang": "Chinese",
"target_lang": "English"
}
}'
Sample response
I
I didn
I didn't
I didn't laugh after watching this video
I didn't laugh after watching this video.
The Qwen-MT model currently does not support incremental streaming output.
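Because incremental streaming is not supported, each chunk carries the full translation generated so far, so keep the latest chunk rather than concatenating deltas. A minimal sketch (the chunk list simulates the sample response above):

```python
# Simulated streamed chunks: each one is the full translation so far,
# not an incremental delta, so later chunks supersede earlier ones.
chunks = ["I", "I didn", "I didn't", "I didn't laugh after watching this video."]

final = ""
for text in chunks:
    final = text  # overwrite instead of appending

print(final)  # the last chunk is the complete translation
```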
DashScope
Sample request
import os
import dashscope

dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [
    {
        "role": "user",
        "content": "我看到这个视频后没有笑"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English"
}
response = dashscope.Generation.call(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    result_format='message',
    stream=True,
    translation_options=translation_options
)
for chunk in response:
    print(chunk.output.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-H "X-DashScope-SSE: enable" \
-d '{
    "model": "qwen-mt-turbo",
    "input": {
        "messages": [
            {
                "content": "我看到这个视频后没有笑",
                "role": "user"
            }
        ]
    },
    "parameters": {
        "translation_options": {
            "source_lang": "Chinese",
            "target_lang": "English"
        }
    }
}'
Sample response
I
I didn
I didn't
I didn't laugh after watching this video
I didn't laugh after watching this video.
The Qwen-MT model currently does not support incremental streaming output.
Terminology intervention
If your source text contains many terms, the simple example code may not provide accurate translations. You can translate these terms in advance and provide them to the model as a reference:
1. Define the term array
Create a JSON array terms that contains your terms. Each element is a JSON object containing a term and its translation: { "source": "Term", "target": "Pre-translated term" }
2. Pass the array
Pass the terms array defined in step 1 through translation_options.
After defining and passing the terms, refer to the following code to translate with terminology intervention.
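The two steps above can be sketched as follows (the glossary dict is illustrative; only the terms structure is defined by the API):

```python
# Step 1: build the terms array from a glossary (the dict is illustrative).
glossary = {
    "生物传感器": "biological sensor",
    "石墨烯": "graphene",
}
terms = [{"source": src, "target": tgt} for src, tgt in glossary.items()]

# Step 2: include the array in translation_options.
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "terms": terms,
}
```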
OpenAI compatible
Sample request
import os
from openai import OpenAI

client = OpenAI(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "而这套生物传感器运用了石墨烯这种新型材料,它的目标物是化学元素,敏锐的“嗅觉”让它能更深度、准确地体现身体健康状况。"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "terms": [
        {
            "source": "生物传感器",
            "target": "biological sensor"
        },
        {
            "source": "石墨烯",
            "target": "graphene"
        },
        {
            "source": "化学元素",
            "target": "chemical elements"
        },
        {
            "source": "身体健康状况",
            "target": "health status of the body"
        }
    ]
}
completion = client.chat.completions.create(
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-mt-turbo",
"messages": [
{
"role": "user",
"content": "而这套生物传感器运用了石墨烯这种新型材料,它的目标物是化学元素,敏锐的“嗅觉”让它能更深度、准确地体现身体健康状况。"
}
],
"translation_options": {
"source_lang": "Chinese",
"target_lang": "English",
"terms": [
{
"source": "生物传感器",
"target": "biological sensor"
},
{
"source": "石墨烯",
"target": "graphene"
},
{
"source": "化学元素",
"target": "chemical elements"
},
{
"source": "身体健康状况",
"target": "health status of the body"
}
]
}
}'
Sample response
This biological sensor uses graphene, a new material, and its target is chemical elements. Its sensitive "nose" can more deeply and accurately reflect the health status of the body.
DashScope
Sample request
import os
import dashscope

dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [
    {
        "role": "user",
        "content": "而这套生物传感器运用了石墨烯这种新型材料,它的目标物是化学元素,敏锐的“嗅觉”让它能更深度、准确地体现身体健康状况。"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "terms": [
        {
            "source": "生物传感器",
            "target": "biological sensor"
        },
        {
            "source": "石墨烯",
            "target": "graphene"
        },
        {
            "source": "化学元素",
            "target": "chemical elements"
        },
        {
            "source": "身体健康状况",
            "target": "health status of the body"
        }
    ]
}
response = dashscope.Generation.call(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
    "model": "qwen-mt-turbo",
    "input": {
        "messages": [
            {
                "content": "而这套生物传感器运用了石墨烯这种新型材料,它的目标物是化学元素,敏锐的“嗅觉”让它能更深度、准确地体现身体健康状况。",
                "role": "user"
            }
        ]
    },
    "parameters": {
        "translation_options": {
            "source_lang": "Chinese",
            "target_lang": "English",
            "terms": [
                {
                    "source": "生物传感器",
                    "target": "biological sensor"
                },
                {
                    "source": "石墨烯",
                    "target": "graphene"
                },
                {
                    "source": "化学元素",
                    "target": "chemical elements"
                },
                {
                    "source": "身体健康状况",
                    "target": "health status of the body"
                }
            ]
        }
    }
}'
Sample response
This biological sensor uses graphene, a new material, and its target is chemical elements. Its sensitive "nose" can more deeply and accurately reflect the health status of the human body.
Translation memory
If you have standard bilingual pairs and need the model to reference these standard translations, you can use the translation memory feature:
1. Define the translation memory array
Create a JSON array tm_list. Each element is a JSON object containing a source sentence and its standard translation in the following format: { "source": "Source sentence", "target": "Translated sentence" }
2. Pass the array
Pass the tm_list array defined in step 1 through translation_options.
After defining and passing translation memory, refer to the following code to translate with translation memory.
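As with terminology intervention, the tm_list array can be built from existing bilingual sentence pairs (the pair list here is illustrative; only the tm_list structure is defined by the API):

```python
# Build tm_list from existing (source, target) sentence pairs.
pairs = [
    ("您可以通过PyPI来安装SDK,安装命令如下:",
     "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"),
]
tm_list = [{"source": src, "target": tgt} for src, tgt in pairs]

translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "tm_list": tm_list,
}
```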
OpenAI compatible
Sample request
import os
from openai import OpenAI

client = OpenAI(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "通过如下命令可以看出安装thrift的版本信息;"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "tm_list": [
        {"source": "您可以通过如下方式查看集群的内核版本信息:", "target": "You can use one of the following methods to query the engine version of a cluster:"},
        {"source": "我们云HBase的thrift环境是0.9.0,所以建议客户端的版本也为 0.9.0,可以从这里下载thrift的0.9.0 版本,下载的源码包我们后面会用到,这里需要先安装thrift编译环境,对于源码安装可以参考thrift官网;", "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."},
        {"source": "您可以通过PyPI来安装SDK,安装命令如下:", "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"}
    ]
}
completion = client.chat.completions.create(
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-mt-turbo",
"messages": [
{
"role": "user",
"content": "通过如下命令可以看出安装thrift的版本信息;"
}
],
"translation_options": {
"source_lang": "Chinese",
"target_lang": "English",
"tm_list":[
{"source": "您可以通过如下方式查看集群的内核版本信息:", "target": "You can use one of the following methods to query the engine version of a cluster:"},
{"source": "我们云HBase的thrift环境是0.9.0,所以建议客户端的版本也为 0.9.0,可以从这里下载thrift的0.9.0 版本,下载的源码包我们后面会用到,这里需要先安装thrift编译环境,对于源码安装可以参考thrift官网;", "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."},
{"source": "您可以通过PyPI来安装SDK,安装命令如下:", "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"}
]
}
}'
Sample response
You can use the following commands to check the version information of thrift installed;
DashScope
Sample request
import os
import dashscope

dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [
    {
        "role": "user",
        "content": "通过如下命令可以看出安装thrift的版本信息;"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "tm_list": [
        {"source": "您可以通过如下方式查看集群的内核版本信息:", "target": "You can use one of the following methods to query the engine version of a cluster:"},
        {"source": "我们云HBase的thrift环境是0.9.0,所以建议客户端的版本也为 0.9.0,可以从这里下载thrift的0.9.0 版本,下载的源码包我们后面会用到,这里需要先安装thrift编译环境,对于源码安装可以参考thrift官网;", "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."},
        {"source": "您可以通过PyPI来安装SDK,安装命令如下:", "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"}
    ]
}
response = dashscope.Generation.call(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
    "model": "qwen-mt-turbo",
    "input": {
        "messages": [
            {
                "content": "通过如下命令可以看出安装thrift的版本信息;",
                "role": "user"
            }
        ]
    },
    "parameters": {
        "translation_options": {
            "source_lang": "Chinese",
            "target_lang": "English",
            "tm_list": [
                {"source": "您可以通过如下方式查看集群的内核版本信息:", "target": "You can use one of the following methods to query the engine version of a cluster:"},
                {"source": "我们云HBase的thrift环境是0.9.0,所以建议客户端的版本也为 0.9.0,可以从这里下载thrift的0.9.0 版本,下载的源码包我们后面会用到,这里需要先安装thrift编译环境,对于源码安装可以参考thrift官网;", "target": "The version of Thrift used by ApsaraDB for HBase is 0.9.0. Therefore, we recommend that you use Thrift 0.9.0 to create a client. Click here to download Thrift 0.9.0. The downloaded source code package will be used later. You must install the Thrift compiling environment first. For more information, see Thrift official website."},
                {"source": "您可以通过PyPI来安装SDK,安装命令如下:", "target": "You can run the following command in Python Package Index (PyPI) to install Elastic Container Instance SDK for Python:"}
            ]
        }
    }
}'
Sample response
You can use the following commands to check the version information of thrift installed;
Domain prompting
If you want the translation to match the style of a specific domain, such as formal legal language or casual language, provide a natural language description of the domain as a prompt.
Currently, the domain prompt must be in English.
Pass the defined domains prompt through translation_options.
OpenAI compatible
Sample request
import os
from openai import OpenAI

client = OpenAI(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)
messages = [
    {
        "role": "user",
        "content": "第二个SELECT语句返回一个数字,表示在没有LIMIT子句的情况下,第一个SELECT语句返回了多少行。"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "domains": "The sentence is from Alibaba Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
}
completion = client.chat.completions.create(
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    extra_body={
        "translation_options": translation_options
    }
)
print(completion.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "qwen-mt-turbo",
"messages": [
{
"role": "user",
"content": "第二个SELECT语句返回一个数字,表示在没有LIMIT子句的情况下,第一个SELECT语句返回了多少行。"
}
],
"translation_options": {
"source_lang": "Chinese",
"target_lang": "English",
"domains": "The sentence is from Alibaba Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
}
}'
Sample response
The second SELECT statement returns a number that indicates how many rows were returned by the first SELECT statement without LIMIT clause.
DashScope
Sample request
import os
import dashscope

dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [
    {
        "role": "user",
        "content": "第二个SELECT语句返回一个数字,表示在没有LIMIT子句的情况下,第一个SELECT语句返回了多少行。"
    }
]
translation_options = {
    "source_lang": "Chinese",
    "target_lang": "English",
    "domains": "The sentence is from Alibaba Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."
}
response = dashscope.Generation.call(
    # If environment variables are not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-turbo",  # Using qwen-mt-turbo as an example, you can change the model name as needed
    messages=messages,
    result_format='message',
    translation_options=translation_options
)
print(response.output.choices[0].message.content)
curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"model": "qwen-mt-turbo",
"input": {
"messages": [
{
"content": "第二个SELECT语句返回一个数字,表示在没有LIMIT子句的情况下,第一个SELECT语句返回了多少行。",
"role": "user"
}
]
},
"parameters": {
"translation_options": {
"source_lang": "Chinese",
"target_lang": "English",
"domains": "The sentence is from Alibaba Cloud IT domain. It mainly involves computer-related software development and usage methods, including many terms related to computer software and hardware. Pay attention to professional troubleshooting terminologies and sentence patterns when translating. Translate into this IT domain style."}
}
}'
Sample response
The second SELECT statement returns a number that indicates how many rows were returned by the first SELECT statement without a LIMIT clause.
Supported languages
Here are the supported languages and their full names. Use these full names to specify languages in requests.
Language | Full name |
Chinese | Chinese |
English | English |
Japanese | Japanese |
Korean | Korean |
Thai | Thai |
French | French |
German | German |
Spanish | Spanish |
Arabic | Arabic |
Indonesian | Indonesian |
Vietnamese | Vietnamese |
Brazilian Portuguese | Portuguese |
Italian | Italian |
Dutch | Dutch |
Russian | Russian |
Khmer | Khmer |
Cebuano | Cebuano |
Filipino | Filipino |
Czech | Czech |
Polish | Polish |
Persian | Persian |
Hebrew | Hebrew |
Turkish | Turkish |
Hindi | Hindi |
Bengali | Bengali |
Urdu | Urdu |
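Since only these full names are accepted, a small guard like the following (illustrative, not part of the SDK) can catch unsupported language names before a request is sent:

```python
# Full names accepted by translation_options, per the table above.
SUPPORTED = {
    "Chinese", "English", "Japanese", "Korean", "Thai", "French", "German",
    "Spanish", "Arabic", "Indonesian", "Vietnamese", "Portuguese", "Italian",
    "Dutch", "Russian", "Khmer", "Cebuano", "Filipino", "Czech", "Polish",
    "Persian", "Hebrew", "Turkish", "Hindi", "Bengali", "Urdu",
}

def check_lang(name: str, allow_auto: bool = False) -> str:
    """Validate a language name before building translation_options.

    allow_auto permits "auto", which is valid only for source_lang.
    """
    if allow_auto and name == "auto":
        return name
    if name not in SUPPORTED:
        raise ValueError(f"Unsupported language: {name!r}")
    return name
```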
API reference
For the input and output parameters of Qwen-MT, see Qwen.
Error code
If a call fails and an error message is returned, see Error messages.