Request body
Text input
This example uses a single-turn conversation. You can also have a multi-turn conversation.
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
# The preceding URL is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
messages = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Who are you?'}
]
response = dashscope.Generation.call(
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
# The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
api_key=os.getenv('DASHSCOPE_API_KEY'),
model="qwen-plus", # This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
messages=messages,
result_format='message'
)
print(response)
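The note above mentions multi-turn conversations. A minimal sketch, assuming the single-turn call above has already run: append the assistant's reply from the response to messages, add the next user turn, and call the model again.
# Multi-turn sketch: reuse `messages` and `response` from the example above.
messages.append(response.output.choices[0].message)  # the assistant's reply
messages.append({'role': 'user', 'content': 'What can you do?'})  # next turn
follow_up = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-plus",
    messages=messages,
    result_format='message'
)
print(follow_up)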
Java
// We recommend using DashScope SDK v2.12.0 or later.
import java.util.Arrays;
import java.lang.System;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.protocol.Protocol;
public class Main {
public static GenerationResult callWithMessage() throws ApiException, NoApiKeyException, InputRequiredException {
Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
// The preceding URL is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
Message systemMsg = Message.builder()
.role(Role.SYSTEM.getValue())
.content("You are a helpful assistant.")
.build();
Message userMsg = Message.builder()
.role(Role.USER.getValue())
.content("Who are you?")
.build();
GenerationParam param = GenerationParam.builder()
// If you have not configured the environment variable, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
// This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
.model("qwen-plus")
.messages(Arrays.asList(systemMsg, userMsg))
.resultFormat(GenerationParam.ResultFormat.MESSAGE)
.build();
return gen.call(param);
}
public static void main(String[] args) {
try {
GenerationResult result = callWithMessage();
System.out.println(JsonUtils.toJson(result));
} catch (ApiException | NoApiKeyException | InputRequiredException e) {
// Use a logging framework to record the exception information.
System.err.println("An error occurred while calling the generation service: " + e.getMessage());
}
System.exit(0);
}
}
PHP (HTTP)
<?php
// The following is the URL for the Singapore region. If you use a model in the Beijing region, change the URL to: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
$url = "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation";
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
$apiKey = getenv('DASHSCOPE_API_KEY');
$data = [
// This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
"model" => "qwen-plus",
"input" => [
"messages" => [
[
"role" => "system",
"content" => "You are a helpful assistant."
],
[
"role" => "user",
"content" => "Who are you?"
]
]
],
"parameters" => [
"result_format" => "message"
]
];
$jsonData = json_encode($data);
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $jsonData);
curl_setopt($ch, CURLOPT_HTTPHEADER, [
"Authorization: Bearer $apiKey",
"Content-Type: application/json"
]);
$response = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
if ($httpCode == 200) {
echo "Response: " . $response;
} else {
echo "Error: " . $httpCode . " - " . $response;
}
curl_close($ch);
?>
Node.js (HTTP)
DashScope does not provide an SDK for the Node.js environment. To make calls using the OpenAI Node.js SDK, see the OpenAI section in this topic.
import fetch from 'node-fetch';
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
const apiKey = process.env.DASHSCOPE_API_KEY;
const data = {
model: "qwen-plus", // This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
input: {
messages: [
{
role: "system",
content: "You are a helpful assistant."
},
{
role: "user",
content: "Who are you?"
}
]
},
parameters: {
result_format: "message"
}
};
fetch('https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation', {
// The preceding URL is for the Singapore region. If you use a model in the Beijing region, change the URL to: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
method: 'POST',
headers: {
'Authorization': `Bearer ${apiKey}`,
'Content-Type': 'application/json'
},
body: JSON.stringify(data)
})
.then(response => response.json())
.then(data => {
console.log(JSON.stringify(data));
})
.catch(error => {
console.error('Error:', error);
});
C# (HTTP)
using System.Net.Http.Headers;
using System.Text;
class Program
{
private static readonly HttpClient httpClient = new HttpClient();
static async Task Main(string[] args)
{
// If you have not configured the environment variable, replace the following line with your Model Studio API key: string? apiKey = "sk-xxx";
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
string? apiKey = Environment.GetEnvironmentVariable("DASHSCOPE_API_KEY");
if (string.IsNullOrEmpty(apiKey))
{
Console.WriteLine("API key not set. Make sure the 'DASHSCOPE_API_KEY' environment variable is set.");
return;
}
// Set the request URL and content.
// The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
string url = "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation";
// This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
string jsonContent = @"{
""model"": ""qwen-plus"",
""input"": {
""messages"": [
{
""role"": ""system"",
""content"": ""You are a helpful assistant.""
},
{
""role"": ""user"",
""content"": ""Who are you?""
}
]
},
""parameters"": {
""result_format"": ""message""
}
}";
// Send the request and get the response.
string result = await SendPostRequestAsync(url, jsonContent, apiKey);
// Print the result.
Console.WriteLine(result);
}
private static async Task<string> SendPostRequestAsync(string url, string jsonContent, string apiKey)
{
using (var content = new StringContent(jsonContent, Encoding.UTF8, "application/json"))
{
// Set the request headers.
httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", apiKey);
httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
// Send the request and get the response.
HttpResponseMessage response = await httpClient.PostAsync(url, content);
// Process the response.
if (response.IsSuccessStatusCode)
{
return await response.Content.ReadAsStringAsync();
}
else
{
return $"Request failed: {response.StatusCode}";
}
}
}
}
Go (HTTP)
DashScope does not provide an SDK for Go. To make calls using the OpenAI Go SDK, see the OpenAI-Go section in this topic.
package main
import (
"bytes"
"encoding/json"
"fmt"
"io"
"log"
"net/http"
"os"
)
type Message struct {
Role string `json:"role"`
Content string `json:"content"`
}
type Input struct {
Messages []Message `json:"messages"`
}
type Parameters struct {
ResultFormat string `json:"result_format"`
}
type RequestBody struct {
Model string `json:"model"`
Input Input `json:"input"`
Parameters Parameters `json:"parameters"`
}
func main() {
// Create an HTTP client.
client := &http.Client{}
// Build the request body.
requestBody := RequestBody{
// This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
Model: "qwen-plus",
Input: Input{
Messages: []Message{
{
Role: "system",
Content: "You are a helpful assistant.",
},
{
Role: "user",
Content: "Who are you?",
},
},
},
Parameters: Parameters{
ResultFormat: "message",
},
}
jsonData, err := json.Marshal(requestBody)
if err != nil {
log.Fatal(err)
}
// Create a POST request.
// The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
req, err := http.NewRequest("POST", "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation", bytes.NewBuffer(jsonData))
if err != nil {
log.Fatal(err)
}
// Set the request headers.
// If you have not configured the environment variable, replace the following line with your Model Studio API key: apiKey := "sk-xxx"
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
apiKey := os.Getenv("DASHSCOPE_API_KEY")
req.Header.Set("Authorization", "Bearer "+apiKey)
req.Header.Set("Content-Type", "application/json")
// Send the request.
resp, err := client.Do(req)
if err != nil {
log.Fatal(err)
}
defer resp.Body.Close()
// Read the response body.
bodyText, err := io.ReadAll(resp.Body)
if err != nil {
log.Fatal(err)
}
// Print the response content.
fmt.Printf("%s\n", bodyText)
}
curl
The API keys for the Singapore and Beijing regions are different. For more information, see Preparations: Obtain and configure an API key.
The following is the URL for the Singapore region. If you use a model in the Beijing region, change the URL to: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
curl --location "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation" \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-plus",
"input":{
"messages":[
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Who are you?"
}
]
},
"parameters": {
"result_format": "message"
}
}'
Streaming output
For more information, see Streaming output.
Python
import os
import dashscope
# The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
{'role': 'user', 'content': 'Who are you?'}
]
responses = dashscope.Generation.call(
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
# The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
api_key=os.getenv('DASHSCOPE_API_KEY'),
# This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
model="qwen-plus",
messages=messages,
result_format='message',
stream=True,
incremental_output=True
)
for response in responses:
print(response)
Java
import java.util.Arrays;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.utils.JsonUtils;
import io.reactivex.Flowable;
import java.lang.System;
import com.alibaba.dashscope.protocol.Protocol;
public class Main {
private static final Logger logger = LoggerFactory.getLogger(Main.class);
private static void handleGenerationResult(GenerationResult message) {
System.out.println(JsonUtils.toJson(message));
}
public static void streamCallWithMessage(Generation gen, Message userMsg)
throws NoApiKeyException, ApiException, InputRequiredException {
GenerationParam param = buildGenerationParam(userMsg);
Flowable<GenerationResult> result = gen.streamCall(param);
result.blockingForEach(message -> handleGenerationResult(message));
}
private static GenerationParam buildGenerationParam(Message userMsg) {
return GenerationParam.builder()
// If you have not configured the environment variable, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
// This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
.model("qwen-plus")
.messages(Arrays.asList(userMsg))
.resultFormat(GenerationParam.ResultFormat.MESSAGE)
.incrementalOutput(true)
.build();
}
public static void main(String[] args) {
try {
// The following is the URL for the Singapore region. If you use a model in the Beijing region, change the URL to: https://dashscope.aliyuncs.com/api/v1
Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
Message userMsg = Message.builder().role(Role.USER.getValue()).content("Who are you?").build();
streamCallWithMessage(gen, userMsg);
} catch (ApiException | NoApiKeyException | InputRequiredException e) {
logger.error("An exception occurred: {}", e.getMessage());
}
System.exit(0);
}
}
curl
# ======= Important =======
# The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
# === Delete this comment before execution ===
curl --location "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation" \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--header "X-DashScope-SSE: enable" \
--data '{
"model": "qwen-plus",
"input":{
"messages":[
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Who are you?"
}
]
},
"parameters": {
"result_format": "message",
"incremental_output":true
}
}'
Image input
For more information about using models to analyze images, see Visual Understanding.
Python
import os
import dashscope
# The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
{
"role": "user",
"content": [
{"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"},
{"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/tiger.png"},
{"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/rabbit.png"},
{"text": "What are these?"}
]
}
]
response = dashscope.MultiModalConversation.call(
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
# The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
api_key=os.getenv('DASHSCOPE_API_KEY'),
# This example uses qwen-vl-max. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
model='qwen-vl-max',
messages=messages
)
print(response)
Java
// Copyright (c) Alibaba, Inc. and its affiliates.
import java.util.Arrays;
import java.util.Collections;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.utils.Constants;
public class Main {
static {
// The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
Constants.baseHttpApiUrl="https://dashscope-intl.aliyuncs.com/api/v1";
}
public static void simpleMultiModalConversationCall()
throws ApiException, NoApiKeyException, UploadFileException {
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
.content(Arrays.asList(
Collections.singletonMap("image", "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"),
Collections.singletonMap("image", "https://dashscope.oss-cn-beijing.aliyuncs.com/images/tiger.png"),
Collections.singletonMap("image", "https://dashscope.oss-cn-beijing.aliyuncs.com/images/rabbit.png"),
Collections.singletonMap("text", "What are these?"))).build();
MultiModalConversationParam param = MultiModalConversationParam.builder()
// If you have not configured the environment variable, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
// This example uses qwen-vl-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
.model("qwen-vl-plus")
.message(userMessage)
.build();
MultiModalConversationResult result = conv.call(param);
System.out.println(JsonUtils.toJson(result));
}
public static void main(String[] args) {
try {
simpleMultiModalConversationCall();
} catch (ApiException | NoApiKeyException | UploadFileException e) {
System.out.println(e.getMessage());
}
System.exit(0);
}
}
curl
The API keys for the Singapore and Beijing regions are different. For more information, see Preparations: Obtain and configure an API key.
The following is the URL for the Singapore region. If you use a model in the Beijing region, change the URL to: https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation
curl --location 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
"model": "qwen-vl-plus",
"input":{
"messages":[
{
"role": "user",
"content": [
{"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/dog_and_girl.jpeg"},
{"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/tiger.png"},
{"image": "https://dashscope.oss-cn-beijing.aliyuncs.com/images/rabbit.png"},
{"text": "What are these?"}
]
}
]
}
}'
Video input
The following code provides an example of how to pass in video frames. For more information about usage, such as how to pass in a video file, see Visual Understanding.
Python
import os
# DashScope SDK v1.20.10 or later is required.
import dashscope
# The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [{"role": "user",
"content": [
# If the model is from the Qwen2.5-VL series and an image list is passed, you can set the fps parameter to indicate that the image list is extracted from the original video every 1/fps seconds.
{"video":["https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"],
"fps":2},
{"text": "Describe the specific process in this video"}]}]
response = dashscope.MultiModalConversation.call(
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
# The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
api_key=os.getenv("DASHSCOPE_API_KEY"),
model='qwen2.5-vl-72b-instruct', # This example uses qwen2.5-vl-72b-instruct. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/models
messages=messages
)
print(response["output"]["choices"][0]["message"].content[0]["text"])
Java
// DashScope SDK v2.18.3 or later is required.
import java.util.Arrays;
import java.util.Collections;
import java.util.Map;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.Constants;
public class Main {
static {
// The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
Constants.baseHttpApiUrl="https://dashscope-intl.aliyuncs.com/api/v1";
}
private static final String MODEL_NAME = "qwen2.5-vl-72b-instruct"; // This example uses qwen2.5-vl-72b-instruct. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/models
public static void videoImageListSample() throws ApiException, NoApiKeyException, UploadFileException {
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage systemMessage = MultiModalMessage.builder()
.role(Role.SYSTEM.getValue())
.content(Arrays.asList(Collections.singletonMap("text", "You are a helpful assistant.")))
.build();
// If the model is from the Qwen2.5-VL series and an image list is passed, you can set the fps parameter to indicate that the image list is extracted from the original video every 1/fps seconds.
Map<String, Object> params = Map.of(
"video", Arrays.asList("https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"),
"fps",2);
MultiModalMessage userMessage = MultiModalMessage.builder()
.role(Role.USER.getValue())
.content(Arrays.asList(
params,
Collections.singletonMap("text", "Describe the specific process in this video")))
.build();
MultiModalConversationParam param = MultiModalConversationParam.builder()
// If you have not configured the environment variable, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
.model(MODEL_NAME)
.messages(Arrays.asList(systemMessage, userMessage)).build();
MultiModalConversationResult result = conv.call(param);
System.out.print(result.getOutput().getChoices().get(0).getMessage().getContent().get(0).get("text"));
}
public static void main(String[] args) {
try {
videoImageListSample();
} catch (ApiException | NoApiKeyException | UploadFileException e) {
System.out.println(e.getMessage());
}
System.exit(0);
}
}
curl
The API keys for the Singapore and Beijing regions are different. For more information, see Preparations: Obtain and configure an API key.
The following is the URL for the Singapore region. If you use a model in the Beijing region, change the URL to: https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation
curl -X POST https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-H 'Content-Type: application/json' \
-d '{
"model": "qwen2.5-vl-72b-instruct",
"input": {
"messages": [
{
"role": "user",
"content": [
{
"video": [
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/xzsgiz/football1.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/tdescd/football2.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/zefdja/football3.jpg",
"https://help-static-aliyun-doc.aliyuncs.com/file-manage-files/zh-CN/20241108/aedbqh/football4.jpg"
],
"fps":2
},
{
"text": "Describe the specific process in this video"
}
]
}
]
}
}'
Audio input
For more information, see Audio file recognition - Qwen.
Python
import os
import dashscope
# The following is the URL for the Singapore region. If you are using a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
{
"role": "system",
"content": [
# Configure the context for customized recognition here.
{"text": ""},
]
},
{
"role": "user",
"content": [
{"audio": "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3"},
]
}
]
response = dashscope.MultiModalConversation.call(
# The API keys for the Singapore and Beijing regions are different. To obtain an API key, see https://www.alibabacloud.com/help/en/model-studio/get-api-key
# If the environment variable is not configured, replace the following line with your Model Studio API key: api_key = "sk-xxx"
api_key=os.getenv("DASHSCOPE_API_KEY"),
model="qwen3-asr-flash",
messages=messages,
result_format="message",
asr_options={
#"language": "zh", # Optional. If the audio language is known, you can specify it with this parameter to improve recognition accuracy.
"enable_itn":True
}
)
print(response)
The complete result is printed to the console in JSON format. The result includes the status code, a unique request ID, the recognized content, and the token information for this call.
{
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"annotations": [
{
"language": "zh",
"type": "audio_info",
"emotion": "neutral"
}
],
"content": [
{
"text": "Welcome to Alibaba Cloud."
}
],
"role": "assistant"
}
}
]
},
"usage": {
"input_tokens_details": {
"text_tokens": 0
},
"output_tokens_details": {
"text_tokens": 6
},
"seconds": 1
},
"request_id": "568e2bf0-d6f2-97f8-9f15-a57b11dc6977"
}
Java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.alibaba.dashscope.utils.Constants;
import com.alibaba.dashscope.utils.JsonUtils;
public class Main {
public static void simpleMultiModalConversationCall()
throws ApiException, NoApiKeyException, UploadFileException {
MultiModalConversation conv = new MultiModalConversation();
MultiModalMessage userMessage = MultiModalMessage.builder()
.role(Role.USER.getValue())
.content(Arrays.asList(
Collections.singletonMap("audio", "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3")))
.build();
MultiModalMessage sysMessage = MultiModalMessage.builder().role(Role.SYSTEM.getValue())
// Configure the context for customized recognition here.
.content(Arrays.asList(Collections.singletonMap("text", "")))
.build();
Map<String, Object> asrOptions = new HashMap<>();
asrOptions.put("enable_itn", true);
// asrOptions.put("language", "zh"); // Optional. If the audio language is known, you can specify it with this parameter to improve recognition accuracy.
MultiModalConversationParam param = MultiModalConversationParam.builder()
// The API keys for the Singapore and Beijing regions are different. To obtain an API key, see https://www.alibabacloud.com/help/en/model-studio/get-api-key
// If the environment variable is not configured, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
.model("qwen3-asr-flash")
.message(sysMessage)
.message(userMessage)
.parameter("asr_options", asrOptions)
.build();
MultiModalConversationResult result = conv.call(param);
System.out.println(JsonUtils.toJson(result));
}
public static void main(String[] args) {
try {
// The following is the URL for the Singapore region. If you are using a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1
Constants.baseHttpApiUrl = "https://dashscope-intl.aliyuncs.com/api/v1";
simpleMultiModalConversationCall();
} catch (ApiException | NoApiKeyException | UploadFileException e) {
System.out.println(e.getMessage());
}
System.exit(0);
}
}
The complete result is printed to the console in JSON format. The result includes the status code, a unique request ID, the recognized content, and the token information for this call.
{
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"annotations": [
{
"language": "zh",
"type": "audio_info",
"emotion": "neutral"
}
],
"content": [
{
"text": "Welcome to Alibaba Cloud."
}
],
"role": "assistant"
}
}
]
},
"usage": {
"input_tokens_details": {
"text_tokens": 0
},
"output_tokens_details": {
"text_tokens": 6
},
"seconds": 1
},
"request_id": "568e2bf0-d6f2-97f8-9f15-a57b11dc6977"
}
curl
You can configure context for customized recognition using the text parameter of the System Message.
# ======= Important =======
# The following is the URL for the Singapore region. If you are using a model in the Beijing region, replace the URL with: https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation
# The API keys for the Singapore and Beijing regions are different. To obtain an API key, see https://www.alibabacloud.com/help/en/model-studio/get-api-key
# === Delete this comment before execution ===
curl --location --request POST 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '{
"model": "qwen3-asr-flash",
"input": {
"messages": [
{
"content": [
{
"text": ""
}
],
"role": "system"
},
{
"content": [
{
"audio": "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3"
}
],
"role": "user"
}
]
},
"parameters": {
"asr_options": {
"enable_itn": true
}
}
}'
The complete result is printed to the console in JSON format. The result includes the status code, a unique request ID, the recognized content, and the token information for this call.
{
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"annotations": [
{
"language": "zh",
"type": "audio_info",
"emotion": "neutral"
}
],
"content": [
{
"text": "Welcome to Alibaba Cloud."
}
],
"role": "assistant"
}
}
]
},
"usage": {
"input_tokens_details": {
"text_tokens": 0
},
"output_tokens_details": {
"text_tokens": 6
},
"seconds": 1
},
"request_id": "568e2bf0-d6f2-97f8-9f15-a57b11dc6977"
}
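The context mentioned above is supplied through the text field of the System Message. A minimal Python sketch, assuming the same qwen3-asr-flash setup as the Python example in this section; the vocabulary below is hypothetical and should be replaced with terms from your own audio:
# Hypothetical recognition context: background terms that may appear in the audio.
messages = [
    {
        "role": "system",
        "content": [
            {"text": "Terms that may appear: Alibaba Cloud, Model Studio, DashScope."},
        ]
    },
    {
        "role": "user",
        "content": [
            {"audio": "https://dashscope.oss-cn-beijing.aliyuncs.com/audios/welcome.mp3"},
        ]
    }
]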
Tool calling
For the complete Function calling flow code, see Text generation model overview.
Python
import os
import dashscope
# The following is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
tools = [
{
"type": "function",
"function": {
"name": "get_current_time",
"description": "Useful when you want to know the current time.",
"parameters": {}
}
},
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Useful when you want to check the weather in a specific city.",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "A city or district, such as Beijing, Hangzhou, or Yuhang District."
}
},
"required": [
"location"
]
}
}
}
]
messages = [{"role": "user", "content": "What's the weather like in Hangzhou?"}]
response = dashscope.Generation.call(
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
# The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
api_key=os.getenv('DASHSCOPE_API_KEY'),
# This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
model='qwen-plus',
messages=messages,
tools=tools,
result_format='message'
)
print(response)
Java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.alibaba.dashscope.aigc.conversation.ConversationParam.ResultFormat;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.tools.FunctionDefinition;
import com.alibaba.dashscope.tools.ToolFunction;
import com.alibaba.dashscope.utils.JsonUtils;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.github.victools.jsonschema.generator.Option;
import com.github.victools.jsonschema.generator.OptionPreset;
import com.github.victools.jsonschema.generator.SchemaGenerator;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfig;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfigBuilder;
import com.github.victools.jsonschema.generator.SchemaVersion;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import com.alibaba.dashscope.protocol.Protocol;
public class Main {
public class GetWeatherTool {
private String location;
public GetWeatherTool(String location) {
this.location = location;
}
public String call() {
return "It is sunny in " + location + " today";
}
}
public class GetTimeTool {
public GetTimeTool() {
}
public String call() {
LocalDateTime now = LocalDateTime.now();
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
String currentTime = "Current time: " + now.format(formatter) + ".";
return currentTime;
}
}
public static void SelectTool()
throws NoApiKeyException, ApiException, InputRequiredException {
SchemaGeneratorConfigBuilder configBuilder =
new SchemaGeneratorConfigBuilder(SchemaVersion.DRAFT_2020_12, OptionPreset.PLAIN_JSON);
SchemaGeneratorConfig config = configBuilder.with(Option.EXTRA_OPEN_API_FORMAT_VALUES)
.without(Option.FLATTENED_ENUMS_FROM_TOSTRING).build();
SchemaGenerator generator = new SchemaGenerator(config);
ObjectNode jsonSchema_weather = generator.generateSchema(GetWeatherTool.class);
ObjectNode jsonSchema_time = generator.generateSchema(GetTimeTool.class);
FunctionDefinition fdWeather = FunctionDefinition.builder().name("get_current_weather").description("Get the weather for a specified area")
.parameters(JsonUtils.parseString(jsonSchema_weather.toString()).getAsJsonObject()).build();
FunctionDefinition fdTime = FunctionDefinition.builder().name("get_current_time").description("Get the current time")
.parameters(JsonUtils.parseString(jsonSchema_time.toString()).getAsJsonObject()).build();
Message systemMsg = Message.builder().role(Role.SYSTEM.getValue())
.content("You are a helpful assistant. When asked a question, use tools wherever possible.")
.build();
Message userMsg = Message.builder().role(Role.USER.getValue()).content("Weather in Hangzhou").build();
List<Message> messages = new ArrayList<>();
messages.addAll(Arrays.asList(systemMsg, userMsg));
GenerationParam param = GenerationParam.builder()
// The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
// This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
.model("qwen-plus")
.messages(messages)
.resultFormat(ResultFormat.MESSAGE)
.tools(Arrays.asList(
ToolFunction.builder().function(fdWeather).build(),
ToolFunction.builder().function(fdTime).build()))
.build();
Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
// The preceding URL is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
GenerationResult result = gen.call(param);
System.out.println(JsonUtils.toJson(result));
}
public static void main(String[] args) {
try {
SelectTool();
} catch (ApiException | NoApiKeyException | InputRequiredException e) {
System.out.println(String.format("Exception %s", e.getMessage()));
}
System.exit(0);
}
}
curl
The API keys for the Singapore and Beijing regions are different. For more information, see Preparations: Obtain and configure an API key.
The following is the URL for the Singapore region. If you use a model in the Beijing region, change the URL to: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
curl --location "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation" \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-plus",
"input": {
"messages": [{
"role": "user",
"content": "What is the weather like in Hangzhou?"
}]
},
"parameters": {
"result_format": "message",
"tools": [{
"type": "function",
"function": {
"name": "get_current_time",
"description": "Useful when you want to know the current time.",
"parameters": {}
}
},{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Useful when you want to check the weather in a specific city.",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "A city or district, such as Beijing, Hangzhou, or Yuhang District."
}
},
"required": ["location"]
}
}
}]
}
}'
Asynchronous invocation
Python
# Your DashScope Python SDK must be v1.19.0 or later.
import asyncio
import platform
import os
import dashscope
from dashscope.aigc.generation import AioGeneration
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
# The preceding URL is the base_url for the Singapore region. If you use a model in the Beijing region, change the base_url to: https://dashscope.aliyuncs.com/api/v1
async def main():
response = await AioGeneration.call(
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
# The API keys for the Singapore and Beijing regions are different. Get an API key: https://www.alibabacloud.com/help/en/model-studio/get-api-key
api_key=os.getenv('DASHSCOPE_API_KEY'),
# This example uses qwen-plus. You can change the model name as needed. For a list of models, see https://www.alibabacloud.com/help/en/model-studio/getting-started/models
model="qwen-plus",
messages=[{"role": "user", "content": "Who are you?"}],
result_format="message",
)
print(response)
if platform.system() == "Windows":
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
asyncio.run(main())
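Because AioGeneration.call is a coroutine, independent requests can also be issued concurrently with asyncio.gather. A minimal sketch, assuming the same imports and base_url as above:
async def ask(question):
    return await AioGeneration.call(
        api_key=os.getenv('DASHSCOPE_API_KEY'),
        model="qwen-plus",
        messages=[{"role": "user", "content": question}],
        result_format="message",
    )

async def run_batch():
    # Send two independent requests concurrently and wait for both.
    results = await asyncio.gather(ask("Who are you?"), ask("What can you do?"))
    for r in results:
        print(r)

asyncio.run(run_batch())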
Text extraction
For more information about text extraction with the Qwen-OCR model, see Text extraction.
Python
# Use [pip install -U dashscope] to update the SDK.
import os
import dashscope
# The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1.
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
messages = [
{
"role":"user",
"content":[
{
"image":"https://prism-test-data.oss-cn-hangzhou.aliyuncs.com/image/car_invoice/car-invoice-img00040.jpg",
"min_pixels": 3136,
"max_pixels": 6422528,
}
]
}
]
params = {
"ocr_options":{
# Set the built-in task for information extraction. You do not need to enter a prompt. The model uses the built-in prompt for the task.
"task": "key_information_extraction",
"task_config": {
"result_schema": {
"seller_name": "",
"buyer_name": "",
"price_before_tax": "",
"organization_code": "",
"invoice_code": ""
}
}
}
}
response = dashscope.MultiModalConversation.call(model='qwen-vl-ocr',
messages=messages,
**params,
# The API keys for the Singapore and Beijing regions are different. To obtain an API key, visit https://www.alibabacloud.com/help/en/model-studio/get-api-key.
api_key=os.getenv('DASHSCOPE_API_KEY'))
print(response.output.choices[0].message.content[0]["ocr_result"])
Java
import java.util.Arrays;
import java.util.Collections;
import java.util.Map;
import java.util.HashMap;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversation;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationParam;
import com.alibaba.dashscope.aigc.multimodalconversation.MultiModalConversationResult;
import com.alibaba.dashscope.aigc.multimodalconversation.OcrOptions;
import com.alibaba.dashscope.common.MultiModalMessage;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.exception.UploadFileException;
import com.google.gson.JsonObject;
import com.alibaba.dashscope.utils.Constants;
public class Main {
static {
// The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1.
Constants.baseHttpApiUrl="https://dashscope-intl.aliyuncs.com/api/v1";
}
public static void simpleMultiModalConversationCall()
throws ApiException, NoApiKeyException, UploadFileException {
MultiModalConversation conv = new MultiModalConversation();
Map<String, Object> map = new HashMap<>();
map.put("image", "https://prism-test-data.oss-cn-hangzhou.aliyuncs.com/image/car_invoice/car-invoice-img00040.jpg");
// The maximum pixel threshold for the input image. If the image exceeds this value, it is scaled down proportionally until the total number of pixels is less than max_pixels.
map.put("max_pixels", "6422528");
// The minimum pixel threshold for the input image. If the image is smaller than this value, it is scaled up proportionally until the total number of pixels is greater than min_pixels.
map.put("min_pixels", "3136");
// Enable the automatic image rotation feature.
map.put("enable_rotate", true);
MultiModalMessage userMessage = MultiModalMessage.builder().role(Role.USER.getValue())
.content(Arrays.asList(
map
)).build();
// Create the main JSON object.
JsonObject resultSchema = new JsonObject();
resultSchema.addProperty("seller_name", "");
resultSchema.addProperty("buyer_name", "");
resultSchema.addProperty("price_before_tax", "");
resultSchema.addProperty("organization_code", "");
resultSchema.addProperty("invoice_code", "");
// Set the built-in task for information extraction. You do not need to enter a prompt. The model uses the built-in prompt for the task.
OcrOptions ocrOptions = OcrOptions.builder()
.task(OcrOptions.Task.KEY_INFORMATION_EXTRACTION)
.taskConfig(OcrOptions.TaskConfig.builder()
.resultSchema(resultSchema)
.build())
.build();
MultiModalConversationParam param = MultiModalConversationParam.builder()
// The API keys for the Singapore and Beijing regions are different. To obtain an API key, visit https://www.alibabacloud.com/help/en/model-studio/get-api-key.
// If you have not configured the environment variable, replace the following line with .apiKey("sk-xxx") and use your Model Studio API key.
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
.model("qwen-vl-ocr")
.message(userMessage)
.ocrOptions(ocrOptions)
.build();
MultiModalConversationResult result = conv.call(param);
System.out.println(result.getOutput().getChoices().get(0).getMessage().getContent().get(0).get("ocr_result"));
}
public static void main(String[] args) {
try {
simpleMultiModalConversationCall();
} catch (ApiException | NoApiKeyException | UploadFileException e) {
System.out.println(e.getMessage());
}
System.exit(0);
}
}
curl
The API keys for the Singapore and Beijing regions are different. For more information, see Create an API key.
The following URL is for the Singapore region. If you use a model in the Beijing region, replace the URL with https://dashscope.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation.
curl --location 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/multimodal-generation/generation' \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header 'Content-Type: application/json' \
--data '
{
"model": "qwen-vl-ocr",
"input": {
"messages": [
{
"role": "user",
"content": [
{
"image": "https://prism-test-data.oss-cn-hangzhou.aliyuncs.com/image/car_invoice/car-invoice-img00040.jpg",
"min_pixels": 3136,
"max_pixels": 6422528,
"enable_rotate": true
}
]
}
]
},
"parameters": {
"ocr_options": {
"task": "key_information_extraction",
"task_config": {
"result_schema": {
"seller_name": "",
"buyer_name": "",
"price_before_tax": "",
"organization_code": "",
"invoice_code": ""
}
}
}
}
}
'
Document understanding
Python
import os
import dashscope
# Currently, only the Beijing region supports calling the qwen-long-latest model.
dashscope.base_http_api_url = 'https://dashscope.aliyuncs.com/api/v1'
messages = [
{'role': 'system', 'content': 'You are a helpful assistant.'},
# Replace '{FILE_ID}' with the file ID used in your actual conversation scenario.
{'role':'system','content':f'fileid://{FILE_ID}'},
{'role': 'user', 'content': 'What is this article about?'}]
response = dashscope.Generation.call(
# If you have not configured the environment variable, replace the following line with your Model Studio API key: api_key="sk-xxx",
api_key=os.getenv('DASHSCOPE_API_KEY'),
model="qwen-long-latest",
messages=messages,
result_format='message'
)
print(response)
Java
// Currently, only the Beijing region supports calling the qwen-long-latest model.
// A Java version of the call above, following the DashScope Java SDK patterns used in this topic.
import java.util.Arrays;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.protocol.Protocol;
import com.alibaba.dashscope.utils.JsonUtils;
public class Main {
public static void main(String[] args) {
try {
Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope.aliyuncs.com/api/v1");
Message systemMsg = Message.builder().role(Role.SYSTEM.getValue()).content("You are a helpful assistant.").build();
// Replace {FILE_ID} with the file ID used in your actual conversation scenario.
Message fileMsg = Message.builder().role(Role.SYSTEM.getValue()).content("fileid://{FILE_ID}").build();
Message userMsg = Message.builder().role(Role.USER.getValue()).content("What is this article about?").build();
GenerationParam param = GenerationParam.builder()
// If you have not configured the environment variable, replace the following line with your Model Studio API key: .apiKey("sk-xxx")
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
.model("qwen-long-latest")
.messages(Arrays.asList(systemMsg, fileMsg, userMsg))
.resultFormat(GenerationParam.ResultFormat.MESSAGE)
.build();
GenerationResult result = gen.call(param);
System.out.println(JsonUtils.toJson(result));
} catch (Exception e) {
System.err.println(e.getMessage());
}
System.exit(0);
}
}
curl
Currently, only the Beijing region supports calling the document understanding model. Replace {FILE_ID} with the file ID used in your actual conversation scenario.
curl --location "https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation" \
--header "Authorization: Bearer $DASHSCOPE_API_KEY" \
--header "Content-Type: application/json" \
--data '{
"model": "qwen-long-latest",
"input":{
"messages":[
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "system",
"content": "fileid://{FILE_ID}"
},
{
"role": "user",
"content": "What is this article about?"
}
]
},
"parameters": {
"result_format": "message"
}
}'
model string (required)
The model name. Supported models: Qwen large language models (commercial and open source), code models, Qwen-VL.
For specific model names and billing details, see Model list.
messages array (required)
A list of messages that make up the conversation history. When making an HTTP call, place messages in the input object.
Message types
System Message object (optional)
The goal or role of the model. If you set a system message, place it at the beginning of the messages list.
Properties
content string or array (required)
Message content. This is an array only when you call the Recording File Recognition - Qwen feature. Otherwise, it is a string.
Properties
text string
Specifies the context. Qwen3 ASR lets you provide reference information, such as background text and entity vocabularies (Context), during speech recognition to obtain customized recognition results. Length limit: 10,000 tokens. For more information, see Context enhancement.
role string (required)
Fixed as system.
We do not recommend setting a System Message for QwQ models. Setting a System Message for QVQ models will not take effect.
User Message object (required)
The message sent by the user to the model.
Properties
content string or array (required)
The content of the user's message. This is a string if your input is only text. It is an array if your input includes multimodal data, such as images.
Properties
text string
The text information to pass.
video array or string
When you use the Qwen-VL model or QVQ model for video understanding, the input type is array for an image list and string for a video file. To pass in local files, see Local file (Qwen-VL) or Local file (QVQ). Examples:
Image list: {"video":["https://xx1.jpg",...,"https://xxn.jpg"]}
Video file: {"video":"https://xxx.mp4"}
For Qwen-VL models, only some can directly accept video files as input (for more information, see Video Understanding (Qwen-VL)). However, you can directly pass in video files for QVQ models.
fps float (optional)
This setting controls the frame extraction frequency for video files passed to the Qwen-VL model or QVQ model, specifying that one frame is extracted from the video file every 1/fps seconds. When you pass an image list to the Qwen2.5-VL, Qwen3-VL, or QVQ model, this setting specifies that the image list is extracted from the original video every 1/fps seconds.
In addition, the fps parameter allows Qwen2.5-VL, Qwen3-VL, and QVQ models to perceive the time interval between frames. Compared to other Qwen-VL models, this adds temporal understanding capabilities, such as locating specific events or summarizing key points from different time periods.
Qwen2.5-VL series models:
qwen-vl-max series: qwen-vl-max-latest, qwen-vl-max-2025-04-08 and later models.
qwen-vl-plus series: qwen-vl-plus-latest, qwen-vl-plus-2025-01-25 and later models.
Open source series: qwen2.5-vl models.
Used with the video parameter. The value range is (0.1, 10). Defaults to 2.0. Example values are as follows:
Passing an image list: {"video":["https://xx1.jpg",...,"https://xxn.jpg"], "fps":2}
Passing a video file: {"video": "https://xx1.mp4", "fps":2}
A larger fps is suitable for high-speed motion scenarios, such as sports events or action movies. A smaller fps is suitable for long videos or scenarios with static content. When using the OpenAI SDK, video files are sampled every 0.5 seconds by default, and image lists are assumed to be extracted from a video every 0.5 seconds. This cannot be modified.
audio string
This parameter is required when the model is a speech recognition model, such as qwen3-asr-flash. The audio file to pass in for the audio understanding or Speech-to-text with Qwen feature. Example: {"audio":"https://xxx.mp3"}
min_pixels integer (optional)
Supported by Qwen-OCR and Qwen-VL models. This parameter sets the minimum pixel threshold for the input image. When the input image has fewer pixels than min_pixels, the image is scaled up proportionally until its total pixels exceed min_pixels.
min_pixels value range:
Qwen-OCR, qwen-vl-max-0813 and earlier, qwen-vl-plus-0815 and earlier updated models: The default and minimum value is 3,136.
qwen-vl-max-0813 and later, qwen-vl-plus-0815 and later updated models: The default and minimum value is 4,096.
Qwen3-VL: Defaults to 65,536. Minimum value: 4,096.
max_pixels integer (optional)
Supported by Qwen-OCR and Qwen-VL. It sets the maximum pixel threshold for the input image. When the input image's pixel count is within the [min_pixels, max_pixels] range, the model recognizes the original image. When the input image's pixel count is greater than max_pixels, the image is scaled down proportionally until its total pixel count is less than max_pixels.
max_pixels value range:
For Qwen-OCR models: Defaults to 6,422,528. Maximum value: 23,520,000.
For Qwen-VL models, there are two cases:
cache_control object (optional)
Supported only by models that allow explicit caching, and is used to enable explicit caching.
Properties
type string (required)
Fixed as ephemeral.
role string (required)
The role of the user message, fixed as user.
Assistant Message object (optional)
The model's response to the user's message.
Properties
content string (optional)
The message content. This parameter is optional only when the tool_calls parameter is specified in the assistant message.
role string (required)
Fixed as assistant.
partial boolean (optional)
Specifies whether to enable partial mode. See Partial mode.
Supported models:
qwen-max series: qwen-max-2025-01-25 and later models
qwen-plus series (non-thinking mode): qwen-plus-2025-01-25 and later models
qwen-flash series (non-thinking mode): qwen-flash-2025-07-28 and later models
qwen3-coder series: qwen3-coder-plus, qwen3-coder-flash, qwen3-coder-480b-a35b-instruct, qwen3-coder-30b-a3b-instruct
qwen-turbo series (non-thinking mode): qwen-turbo-2024-11-01 and later models
qwen open source series: qwen2.5 series text models
tool_calls array (optional)
After a function call is initiated, this parameter contains the tool that the model decides to call and the arguments required to call it. It contains one or more objects and is obtained from the tool_calls field of the previous model response.
Properties
id string
The ID of this tool response.
type string
The type of tool. Currently, only function is supported.
function object
The function to be called.
Properties
name string
The name of the function to be called.
arguments string
The parameters to be input into the tool, as a JSON string.
index integer
The index of the tool information in the tool_calls list.
Tool Message object (optional)
The output of the tool.
Properties
content string (required)
The content of the tool message, which is typically the output of the tool function.
role string (required)
The role of the tool message, fixed as tool.
tool_call_id string (optional)
The ID returned after a function call is initiated. You can get it through response.output.choices[0].message.tool_calls[0]["id"] and use it to associate the tool message with the corresponding tool.
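To illustrate how an Assistant Message with tool_calls and a Tool Message fit together, here is a minimal Python sketch of the second round of a function-calling exchange. It assumes the response, messages, and tools variables from the Tool calling section above; the tool output string is hypothetical:
# Append the assistant's tool call, then the tool's (hypothetical) output.
assistant_message = response.output.choices[0].message
messages.append(assistant_message)  # Assistant Message carrying tool_calls
tool_call_id = assistant_message.tool_calls[0]["id"]
messages.append({
    "role": "tool",
    "content": "Hangzhou is sunny today.",  # hypothetical tool output
    "tool_call_id": tool_call_id  # associates the output with the call
})
# Second call: the model now answers using the tool output.
second_response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model='qwen-plus',
    messages=messages,
    tools=tools,
    result_format='message'
)
print(second_response)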
temperature float (optional) The sampling temperature, which controls the diversity of the generated text. A higher temperature value results in more diverse text, while a lower value produces more deterministic text. Value range: [0, 2) When making an HTTP call, place temperature in the parameters object. We do not recommend modifying the default temperature value for QVQ models. |
top_p float (optional) The probability threshold for nucleus sampling, which controls the diversity of the generated text. A higher top_p value results in more diverse text. A lower value produces more deterministic text. Value range: (0, 1.0]. Default top_p values Qwen3 (non-thinking mode), Qwen3-Instruct series, Qwen3-Coder series, qwen-max series, qwen-plus series (non-thinking mode), qwen-flash series (non-thinking mode), qwen-turbo series (non-thinking mode), qwen open source series, qwen-vl-max-2025-08-13, Qwen3-VL (non-thinking mode): 0.8; qwen-vl-plus series, qwen-vl-max, qwen-vl-max-latest, qwen-vl-max-2025-04-08, qwen2.5-vl-3b-instruct, qwen2.5-vl-7b-instruct, qwen2.5-vl-32b-instruct, qwen2.5-vl-72b-instruct: 0.001; QVQ series, qwen-vl-plus-2025-07-10, qwen-vl-plus-2025-08-15: 0.5; Qwen3-Omni-Flash series: 1.0; Qwen3 (thinking mode), Qwen3-VL (thinking mode), Qwen3-Thinking, QwQ Series, Qwen3-Omni-Captioner: 0.95 In the Java SDK, this is topP. When making an HTTP call, place top_p in the parameters object. We do not recommend modifying the default top_p value for QVQ models. |
top_k integer (Optional) The size of the candidate set for sampling during generation. For example, a value of 50 means that only the 50 tokens with the highest scores in a single generation are used as the candidate set for random sampling. A larger value increases randomness, while a smaller value increases determinism. A value of None or a value greater than 100 means that the top_k strategy is not enabled, and only the top_p strategy takes effect. The value must be greater than or equal to 0. Default top_k values QVQ series, qwen-vl-plus-2025-07-10, qwen-vl-plus-2025-08-15: 10; QwQ series: 40; Other qwen-vl-plus series, models before qwen-vl-max-2025-08-13, qwen-vl-ocr, qwen2.5-omni-7b: 1; Qwen3-Omni-Flash series: 50; All other models: 20. In the Java SDK, this is topK. When making an HTTP call, place top_k in the parameters object. We do not recommend modifying the default top_k value for QVQ models. |
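The three sampling parameters can be combined in a single call. A minimal sketch with the DashScope Python SDK (the values are illustrative, not recommendations):
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-plus",
    messages=[{"role": "user", "content": "Suggest a name for a coffee shop."}],
    result_format='message',
    temperature=0.7,  # higher -> more diverse output
    top_p=0.9,        # nucleus sampling threshold
    top_k=40          # candidate-set size; values above 100 disable top_k
)
print(response.output.choices[0].message.content)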
enable_thinking boolean (optional) Specifies whether to enable thinking mode. This applies to Qwen3, Qwen3-VL commercial and open source editions, and the Qwen3-Omni-Flash model. The default value for the Qwen3 open source edition is true. The default value for Qwen3 commercial models is false. In the Java SDK, this is enableThinking. When making an HTTP call, place enable_thinking in the parameters object. |
thinking_budget integer (optional) The maximum length of the thinking process. This parameter takes effect when enable_thinking is set to true and applies to all models in the Qwen3 series and the Qwen3-VL model. For more information, see Limit thinking length. Defaults to the model's maximum chain-of-thought length. |
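A minimal sketch of enabling thinking mode on a Qwen3 commercial model with the Python SDK. Because Qwen3 in thinking mode supports only streaming output (see stream and incremental_output below), the call streams; the thinking_budget value is illustrative.
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

responses = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-plus",
    messages=[{"role": "user", "content": "How many prime numbers are below 50?"}],
    result_format='message',
    enable_thinking=True,    # off by default for Qwen3 commercial models
    thinking_budget=2048,    # illustrative cap on the thinking process length
    stream=True,
    incremental_output=True  # thinking mode requires incremental streaming
)
for chunk in responses:
    message = chunk.output.choices[0].message
    # reasoning_content carries the thinking process, content the final answer
    print(getattr(message, 'reasoning_content', '') or '', end='')
    print(message.content or '', end='')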
enable_code_interpreter boolean (Optional) Specifies whether to enable the code interpreter. Defaults to false. Only qwen3-max-preview (thinking mode) supports this parameter. When you call the API using the Python SDK, configure this parameter using extra_body. For example: extra_body={"enable_code_interpreter": xxx}. |
repetition_penalty float (Optional) Controls repetition in the generated text. A higher value reduces repetition. A value of 1.0 means no penalty. The value must be greater than 0. In the Java SDK, this parameter is repetitionPenalty. When making an HTTP call, place repetition_penalty in the parameters object. When using the qwen-vl-plus-latest or qwen-vl-plus-2025-01-25 models for text extraction, set repetition_penalty to 1.0. For the Qwen-OCR model, the default value of repetition_penalty is 1.05. This parameter significantly affects model performance, so do not change the default. Do not change the default repetition_penalty value for QVQ models. |
presence_penalty float (Optional) Controls the degree of repetition in the text that the model generates. Value range: [-2.0, 2.0]. A positive value reduces repetition, and a negative value increases it. Scenarios: A higher `presence_penalty` value is suitable for scenarios that require diversity, fun, or creativity, such as creative writing or brainstorming. A lower `presence_penalty` value is suitable for scenarios that require consistency or professional terminology, such as technical documents or other formal documents. Default presence_penalty values Qwen3 (non-thinking mode), Qwen3-Instruct series, qwen3-0.6b/1.7b/4b (thinking mode), QVQ series, qwen-max, qwen-max-latest, qwen2.5-vl series, qwen-vl-max series, qwen-vl-plus, Qwen3-VL (non-thinking): 1.5; qwen-vl-plus-latest, qwen-vl-plus-2025-08-15: 1.2; qwen-vl-plus-2025-01-25: 1.0; qwen3-8b/14b/32b/30b-a3b/235b-a22b (thinking mode), qwen-plus/qwen-plus-latest/qwen-plus-2025-04-28 (thinking mode), qwen-turbo/qwen-turbo-latest/qwen-turbo-2025-04-28 (thinking mode): 0.5; All other models: 0.0. How it works If the parameter value is positive, the model applies a penalty to tokens that already exist in the text. The penalty is independent of the number of occurrences. This reduces the chance of these tokens reappearing, which reduces content repetition and increases word diversity. Example Prompt: Translate this sentence into English: "Esta película es buena. La trama es buena, la actuación es buena, la música es buena, y en general, toda la película es simplemente buena. Es realmente buena, de hecho. La trama es tan buena, y la actuación es tan buena, y la música es tan buena." Parameter value 2.0: This movie is very good. The plot is fantastic, the acting is great, and the music is beautiful. Overall, the film is incredibly good. It is truly excellent. The plot is brilliant, the acting is outstanding, and the music is so moving. Parameter value 0.0: This movie is very good. The plot is good, the acting is good, and the music is also good. Overall, the whole movie is very good. In fact, it is really great. The plot is very good, the acting is also outstanding, and the music is also excellent. Parameter value -2.0: This movie is very good. The plot is very good, the acting is very good, and the music is also very good. Overall, the whole movie is very good. In fact, it is really great. The plot is very good, the acting is also very good, and the music is also very good. When you use the qwen-vl-plus-2025-01-25 model for text extraction, we recommend that you set `presence_penalty` to 1.5. Do not modify the default presence_penalty value for QVQ models. The Java SDK does not support this parameter. When you make an HTTP call, place presence_penalty in the parameters object. |
vl_high_resolution_images boolean (Optional) Defaults to false. Specifies whether to increase the default token limit for input images. This parameter applies to Qwen-VL and QVQ models. `false` (default): Processes images using the default token limit. Qwen3-VL commercial and open-source versions, qwen-vl-max-0813 and later, qwen-vl-plus-0815 and later updated models: The default token limit is 2,560.
QVQ and other Qwen-VL models: The default token limit is 1,280.
`true`: The token limit for input images is increased to 16,384.
The parameter in the Java SDK is vlHighResolutionImages. The minimum required Java SDK version is 2.20.8. For HTTP calls, place vl_high_resolution_images in the parameters object. |
vl_enable_image_hw_output boolean (Optional) Defaults to false. Specifies whether to return the dimensions of the image after scaling. The model scales the input image. If this parameter is set to true, the model returns the height and width of the scaled image. When streaming output is enabled, this information is returned in the last chunk. Supported by the Qwen-VL model. In the Java SDK, this parameter is vlEnableImageHwOutput. The minimum required Java SDK version is 2.20.8. For HTTP calls, place vl_enable_image_hw_output in the parameters object. |
ocr_options object (Optional) Parameters for using built-in tasks with the Qwen-OCR model. Properties task string (Required) The name of the built-in task. Valid values are: "text_recognition": General text recognition "key_information_extraction": Key information extraction "document_parsing": Document parsing "table_parsing": Table parsing "formula_recognition": Formula recognition "multi_lan": Multilingual recognition "advanced_recognition": Advanced recognition
task_config array (Optional) This parameter is used when the task is key_information_extraction. Properties result_schema object (Required) Defines the fields for the model to extract. Use any JSON structure with up to three nested layers. Provide only the JSON object keys and leave the values empty. Example: "result_schema" : {
"Recipient Information" : {
"Recipient Name" : "",
"Recipient Phone Number" : "",
"Recipient Address":""
}
}
This parameter corresponds to OcrOptions in the Java SDK. The minimum required versions are DashScope Python SDK 1.22.2 and Java SDK 2.18.4. When making an HTTP call, place ocr_options in the parameters object. |
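A minimal sketch of the key_information_extraction task with the Python SDK, reusing the result_schema example above; the image URL is a placeholder.
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [{
    "role": "user",
    "content": [{"image": "https://example.com/waybill.png"}]  # placeholder URL
}]
response = dashscope.MultiModalConversation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-vl-ocr",
    messages=messages,
    ocr_options={
        "task": "key_information_extraction",
        "task_config": {
            # Keys only; values stay empty, as described above.
            "result_schema": {
                "Recipient Information": {
                    "Recipient Name": "",
                    "Recipient Phone Number": "",
                    "Recipient Address": ""
                }
            }
        }
    }
)
print(response)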
max_input_tokens integer (Optional) The maximum number of tokens in the input. This parameter is currently supported only by the qwen-plus-2025-07-28 and qwen-plus-latest models. The default value for qwen-plus-latest is 129,024. The default value may be changed to 1,000,000 in the future. The default value for qwen-plus-2025-07-28 is 1,000,000.
This parameter is not currently supported by the Java SDK. |
max_tokens integer (Optional) The maximum number of tokens to return for the request. The max_tokens parameter does not affect the model's generation process. If the number of generated tokens exceeds the specified max_tokens value, the request returns truncated content.
The default and maximum values are the model's maximum output length. For the maximum output length of each model, see Models and pricing. The `max_tokens` parameter is suitable for scenarios that require a limited word count, such as generating summaries or keywords, controlling costs, or reducing response time. For qwen-vl-ocr models, the max_tokens parameter (maximum output length) defaults to 4,096. To increase this value to a number in the range of 4,097 to 8,192, send an email to modelstudio@service.aliyun.com with the following information: your Alibaba Cloud account ID, image type (such as document, e-commerce, or contract), model name, estimated QPS and total daily requests, and the percentage of requests where the model output length exceeds 4,096. For QwQ, QVQ, and Qwen3 models with thinking mode enabled, max_tokens limits the length of the response content, not the length of the deep thinking content. In the Java SDK, this parameter is maxTokens. For Qwen-VL and OCR models, the parameter is maxLength, but versions 2.18.4 and later also support maxTokens. When you make an HTTP call, place max_tokens in the parameters object. |
seed integer (Optional) Setting the `seed` parameter makes the text generation process more deterministic. It is typically used to ensure that the model's results are consistent across runs. By passing the same seed value in each model call and keeping other parameters unchanged, the model returns the same result whenever possible. Value range: 0 to 2³¹ − 1. When making an HTTP call, put seed in the parameters object. |
stream boolean (Optional) Specifies whether to stream the response. Valid values: false (default): The model returns the complete result at once after all content is generated. true: Streams the output as it is generated. A chunk is sent as soon as a part of the content is generated.
This parameter is supported only by the Python SDK. To implement streaming output with the Java SDK, call the streamCall interface. To implement streaming output over HTTP, set X-DashScope-SSE to enable in the header. Qwen3 Commercial Edition (thinking mode), Qwen3 Open Source Edition, QwQ, and QVQ support only streaming output. |
incremental_output boolean (Optional) Defaults to false. Defaults to true for the Qwen3-Max, Qwen3-VL, Qwen3 open-source edition, QwQ, and QVQ models. Set this parameter to true to enable incremental output in streaming output mode. Parameter values: false: Each output is the entire sequence generated so far, and the final output is the complete result. For example: "I", "I like", "I like apple", "I like apple.". true: Enables incremental output. Subsequent outputs do not include previously output content, so you must read the segments in real time to assemble the complete result. For example: "I", "like", "apple", ".". In the Java SDK, this parameter is incrementalOutput. When making an HTTP call, put incremental_output in the parameters object. The QwQ model and the Qwen3 model in thinking mode support only the value true. Because the default value for the Qwen3 commercial model is false, you must manually set this parameter to true when using thinking mode. The Qwen3 open-source edition model does not support the value false. |
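A minimal streaming sketch with the Python SDK: with incremental_output set to true, each chunk carries only the newly generated segment, so the segments are concatenated to rebuild the full reply.
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

responses = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-plus",
    messages=[{"role": "user", "content": "Tell me a short story."}],
    result_format='message',
    stream=True,
    incremental_output=True
)
full_reply = ""
for chunk in responses:
    segment = chunk.output.choices[0].message.content
    full_reply += segment  # each chunk holds only the new segment
    print(segment, end="", flush=True)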
response_format object (Optional) Defaults to {"type": "text"}. The format of the response content. Valid values: {"type": "text"} or {"type": "json_object"}. If you set this parameter to {"type": "json_object"}, the model outputs a standard JSON string. For more information, see structured output. If you set this parameter to {"type": "json_object"}, instruct the model to output the JSON format in the System Message or User Message. For example: "Output the result in JSON format." In the Java SDK, this parameter is responseFormat. When making an HTTP call, place response_format in the parameters object. |
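A minimal sketch of structured output with the Python SDK; note that the prompt itself also asks for JSON, as required above.
Python
import os, json
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-plus",
    messages=[
        # The instruction to output JSON must appear in the prompt itself.
        {"role": "system", "content": "Output the result in JSON format."},
        {"role": "user", "content": "Give me the name and capital of one country."}
    ],
    result_format='message',
    response_format={"type": "json_object"}
)
data = json.loads(response.output.choices[0].message.content)
print(data)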
output_format string (Optional) Defaults to "model_detailed_report". Specifies the format of the output. This parameter is valid only when you call the qwen-deep-research model. Valid values: "model_detailed_report": A detailed research report of about 6,000 words. "model_summary_report": A summary research report of about 1,500 to 2,000 words. |
result_format string (Optional) Defaults to "text". For the Qwen3-Max, Qwen3-VL, QwQ, and Qwen3 open-source models (except qwen3-next-80b-a3b-instruct), defaults to "message". The format of the returned data. Set this parameter to "message" to more easily conduct multi-turn conversations. The platform will change the default value to "message" in a future release. In the Java SDK, the parameter is resultFormat. For HTTP calls, set result_format in the parameters object. If you use the Qwen-VL, QVQ, or OCR models, setting this parameter to "text" has no effect. For the Qwen3-Max, Qwen3-VL, and Qwen3 models in thinking mode, this parameter can only be set to "message". Because the default value for Qwen3 commercial models is "text", you must set the value to "message". If you use the Java SDK to call Qwen3 open-source models and pass "text", the response is still returned in the "message" format. |
logprobs boolean (Optional) Defaults to false. Specifies whether to return the log probabilities of output tokens. Valid values: true and false. Log probabilities are not returned for content generated during the reasoning phase (reasoning_content). Supported for snapshot models of the qwen-plus and qwen-turbo series (excluding mainline models), and for Qwen3 open-source models. |
top_logprobs integer (Optional) Default: 0. Specifies the number of candidate tokens with the highest log probabilities to return at each generation step. Valid values: [0, 5]. This parameter takes effect only when logprobs is true. |
n integer (Optional) Defaults to 1. The number of responses to generate. The value range is [1, 4]. For scenarios that require multiple responses, such as creative writing and ad copy generation, set a larger value for n. This parameter is supported only by the qwen-plus and Qwen3 (non-thinking mode) models. The value is fixed to 1 when the tools parameter is passed. A larger value for n does not increase input token consumption, but it does increase output token consumption. |
stop string or array (Optional) When the `stop` parameter is used, the model automatically stops generating when the generated text is about to include the specified string or `token_id`. You can pass sensitive words in the `stop` parameter to control the model's output. When `stop` is an array, you cannot input both a `token_id` and a string as elements. For example, you cannot specify `stop` as ["Hello",104307]. |
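A minimal sketch of the stop parameter with the Python SDK; generation halts before the specified string would be produced (the prompt and stop word are illustrative).
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-plus",
    messages=[{"role": "user", "content": "Name some colors, one per line."}],
    result_format='message',
    stop=["blue"]  # an array of strings; do not mix strings and token_ids
)
print(response.output.choices[0].message.content)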
tools array (Optional) An array of one or more tool objects that the model can call. In a single function call flow, the model selects one tool from this array. If you enable the parallel_tool_calls parameter, the model may select multiple tools. When you use `tools`, set the result_format parameter to "message". Use the `tools` parameter to initiate a function call or to submit the execution result of a tool function to the model. Currently, this feature is not supported for Qwen-VL models. Properties type string (Required) The type of the tool. Currently, only `function` is supported. function object (Required) Properties name string (Required) The name of the tool function. The name must consist of letters and numbers, and can include underscores and hyphens. The maximum length is 64 characters. description string (Required) A description of the tool function. The model uses this description to decide when and how to call the function. parameters object (Required) The parameters of the tool, described as a valid JSON Schema object. For more information, see the JSON Schema documentation. If the parameters object is empty, the function has no input parameters. When you make an HTTP call, place the tools parameter inside the parameters JSON object. |
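A minimal sketch of a tools definition with the Python SDK; the get_current_weather function is a hypothetical example, and result_format must be "message".
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",  # hypothetical tool
        "description": "Get the current weather for a given city.",
        "parameters": {  # a JSON Schema object describing the inputs
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "The city name."}
            },
            "required": ["city"]
        }
    }
}]
response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-plus",
    messages=[{"role": "user", "content": "What is the weather in Singapore?"}],
    result_format='message',  # required when using tools
    tools=tools
)
# If the model decides to call the tool, the reply carries tool_calls.
print(response.output.choices[0].message)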
tool_choice string or object (Optional) Controls which tool the model calls when the tools parameter is used. It has three possible values: "none" indicates that no tool is called. "none" is also the default value if the `tools` parameter is empty.
"auto" means that the model decides whether to call a tool. When the tools parameter is not empty, defaults to "auto".
An object structure that specifies a tool for the model to call. For example, tool_choice={"type": "function", "function": {"name": "user_function"}}. This value is not supported if the model outputs its thought process.
In the Java SDK, this parameter is toolChoice. When making an HTTP call, place tool_choice in the parameters object. |
parallel_tool_calls boolean (Optional) Defaults to false. Specifies whether to enable parallel tool calling: true enables it, and false disables it. For more information, see Parallel tool calling. |
translation_options object (Optional) The translation parameters to configure when you use a translation model. Properties source_lang string (Required) The full English name of the source language. For more information, see Supported languages. You can set source_lang to "auto", and the model automatically determines the language of the input text. target_lang string (Required) The full English name of the target language. For more information, see Supported languages. terms array (Optional) The term array to set when you use the term intervention feature. Properties source string (Required) The term in the source language. target string (Required) The term in the target language. tm_list array (Optional) The translation memory array to set when you use the translation memory feature. Properties source string (Required) The statement in the source language. target string (Required) The statement in the target language. domains string (Optional) The domain prompt statement to set when you use the domain prompting feature. Domain prompt statements are currently supported only in English. In the Java SDK, this parameter is translationOptions. When making an HTTP call, put translation_options in the parameters object. |
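A minimal sketch with a translation model (qwen-mt-turbo here) and the term intervention feature; the term pair is illustrative.
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

response = dashscope.Generation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen-mt-turbo",
    messages=[{"role": "user", "content": "我喜欢通义千问"}],
    result_format='message',
    translation_options={
        "source_lang": "auto",    # let the model detect the source language
        "target_lang": "English",
        "terms": [                # illustrative term pair
            {"source": "通义千问", "target": "Qwen"}
        ]
    }
)
print(response.output.choices[0].message.content)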
asr_options object (Optional) This parameter takes effect only when you invoke the Qwen audio file recognition feature. It applies only to the Qwen3 ASR model. Use this parameter to specify whether to enable certain features. For more information, see QuickStart. When you make a call using HTTP or the Java SDK, put the asr_options parameter into the parameters object. Properties language string (Optional) Default: None. If the language of the audio is known, use this parameter to specify the language to detect. This improves detection accuracy. Specify only one language. If the audio language is uncertain or contains multiple languages, such as a mix of Chinese, English, Japanese, and Korean, do not specify this parameter. Valid values: zh (Chinese), en (English), ja (Japanese), de (German), ko (Korean), ru (Russian), fr (French), pt (Portuguese), ar (Arabic), it (Italian), es (Spanish)
enable_itn boolean (Optional) Default: false. Specifies whether to enable Inverse Text Normalization (ITN). This feature applies only to Chinese and English audio. ITN is a post-processing step in speech recognition. It transforms the detected results from spoken words into a standard, conventional written format. Valid values: true and false. |
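A minimal sketch of audio recognition with qwen3-asr-flash and asr_options; the audio URL is a placeholder.
Python
import os
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'

messages = [{
    "role": "user",
    "content": [{"audio": "https://example.com/sample.mp3"}]  # placeholder URL
}]
response = dashscope.MultiModalConversation.call(
    api_key=os.getenv('DASHSCOPE_API_KEY'),
    model="qwen3-asr-flash",
    messages=messages,
    result_format='message',
    asr_options={
        "language": "en",    # set only when the audio language is known
        "enable_itn": True   # inverse text normalization (zh/en audio only)
    }
)
print(response)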