Large language models (LLMs) may perform poorly on tasks that involve real-time information or mathematical calculations. You can use the function calling feature to equip LLMs with tools that interact with the outside world.
Supported models
The following models are supported:
QwQ supports function calling. However, its usage differs from the models above; see QwQ function calling instead.
Currently, multimodal Qwen models are not supported.
We recommend Qwen-Plus, which is balanced in terms of performance, speed, and cost.
If you have high requirements for response speed and cost, we recommend the commercial Qwen-Turbo and the open-source Qwen2.5.
If you have high requirements for response accuracy, we recommend the commercial Qwen-Max and the open-source qwen2.5-72b-instruct.
Overview
If you ask Qwen "What is the latest news from Alibaba Cloud", it cannot provide the accurate answer:
I cannot provide real-time information because my data is only updated until 2021.
A human can help the LLM through these steps:
Select a tool
To get information about real-time news, open a browser.
Extract parameters
Based on the query "What is the latest news from Alibaba Cloud", input "Alibaba Cloud news" in the browser's input box.
Run the tool
The browser returns various web pages, including "Alibaba Cloud Named a Leader in Gartner® Magic Quadrant™ for Cloud Database Management Systems for Fifth Consecutive Year."
Give the tool's output to the LLM
Feed the web page content into the prompt for Qwen: "Here is the information: Alibaba Cloud Named a Leader in Gartner® Magic Quadrant™ for Cloud Database Management Systems for Fifth Consecutive Year ... Please summarize and answer: What is the latest news from Alibaba Cloud". With adequate reference information, the model can provide a relevant response:
The latest news from Alibaba Cloud is that it has been named a Leader in the Gartner® Magic Quadrant™ for Cloud Database Management Systems for the fifth consecutive year.
...
This recognition highlights Alibaba Cloud's continued excellence and leadership in providing robust and innovative cloud database solutions.
At this point, the model has managed to answer questions regarding real-time news. However, this process necessitates manual intervention, such as tool selection, parameter extraction, and tool execution.
The function calling feature can automate this process. After the model receives a question, it automatically selects a tool, extracts parameters, runs the tool, and summarizes the output. Performance showcase:
This showcase is for reference only; no requests are actually sent.
The following chart shows how function calling works:
Prerequisites
You must first obtain an API key and set the API key as an environment variable. If you need to use OpenAI SDK or DashScope SDK, you must install the SDK. If you are using a sub-workspace, ensure the Super Admin has authorized the model for the sub-workspace.
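Before running the samples, you can quickly verify that the environment variable is configured. This is a minimal check, assuming the variable name DASHSCOPE_API_KEY used by the code samples in this topic (set it in your shell first, for example: export DASHSCOPE_API_KEY="sk-xxx"):

```python
import os

# Check whether the API key environment variable used by the samples below is set.
api_key = os.getenv("DASHSCOPE_API_KEY")
if api_key is None:
    print("DASHSCOPE_API_KEY is not set; configure it before running the samples.")
else:
    print("DASHSCOPE_API_KEY is configured.")
```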
How to use
This section details the steps for function calling using the OpenAI SDK, with weather query and time query as examples.
If you are using the DashScope SDK or you want to see the full code, click the link in the following table.
DashScope
OpenAI
1. Define tools
Tools serve as the interface between the LLM and the external world. To implement function calling, you must first define your tools.
1.1. Define tool functions
Start by defining two tool functions: the weather query tool and the time query tool.
Weather query tool
The weather query tool accepts the arguments parameter in the format {"location": "The location to query"} and outputs a string in the format "It is {weather} today in {location}." In this topic, the weather query tool is a mock function that randomly returns sunny, cloudy, or rainy. In practice, you can replace it with an actual weather service.
Time query tool
The time query tool requires no input parameters and outputs a string in the format "Current time: {queried time}."
If you are using Node.js, use the following command to install the tool package date-fns first:
npm install date-fns
## Step 1: Define tool functions
# Import the random module
import random
from datetime import datetime

# Simulate the weather query tool. Sample return: "It is rainy today in Singapore."
def get_current_weather(arguments):
    # Define a list of alternative weather conditions
    weather_conditions = ["sunny", "cloudy", "rainy"]
    # Randomly select a weather condition
    random_weather = random.choice(weather_conditions)
    # Extract location information from JSON
    location = arguments["location"]
    # Return formatted weather information
    return f"It is {random_weather} today in {location}."

# Tool to query the current time. Sample return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
    # Get the current date and time
    current_datetime = datetime.now()
    # Format the current date and time
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # Return the formatted current time
    return f"Current time: {formatted_time}."

# Test tool functions and output results. You can remove the following four lines of test code when running subsequent steps
print("Test tool output:")
print(get_current_weather({"location": "Shanghai"}))
print(get_current_time())
print("\n")
// Step 1: Define tool functions
// Import the time query tool
import { format } from 'date-fns';

function getCurrentWeather(args) {
    // Define a list of alternative weather conditions
    const weatherConditions = ["sunny", "cloudy", "rainy"];
    // Randomly select a weather condition
    const randomWeather = weatherConditions[Math.floor(Math.random() * weatherConditions.length)];
    // Extract location information from JSON
    const location = args.location;
    // Return formatted weather information
    return `It is ${randomWeather} today in ${location}.`;
}

function getCurrentTime() {
    // Get the current date and time
    const currentDatetime = new Date();
    // Format the current date and time
    const formattedTime = format(currentDatetime, 'yyyy-MM-dd HH:mm:ss');
    // Return the formatted current time
    return `Current time: ${formattedTime}.`;
}

// Test tool functions and output results. You can remove the following four lines of test code when running subsequent steps
console.log("Test tool output:");
console.log(getCurrentWeather({location: "Shanghai"}));
console.log(getCurrentTime());
console.log("\n");
Sample return of the tool:
Test tool output:
It is cloudy today in Shanghai.
Current time: 2025-01-08 20:21:45.
1.2. Create tools array
To enable accurate tool selection by the LLM, you need to provide tool information in the JSON format, which includes the tool's purpose, scenario, and input parameters.
The description format for the weather query tool is shown in the tools array below.
Before initiating function calling, you need to pass the tool descriptions through the tools parameter. The tools parameter is a JSON array whose elements are the tool descriptions. Specify the tools parameter when you initiate function calling.
# Paste the following code after the code in Step 1
## Step 2: Create tools array
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Very useful when you want to know the current time.",
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Very useful when you want to query the weather of a specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City or district, such as Singapore, Hangzhou, Yuhang District, etc.",
                    }
                },
                "required": ["location"]
            }
        }
    }
]
tool_name = [tool["function"]["name"] for tool in tools]
print(f"Created {len(tools)} tools: {tool_name}\n")
// Paste the following code after the code in Step 1
// Step 2: Create tools array
const tools = [
    {
        type: "function",
        function: {
            name: "get_current_time",
            description: "Very useful when you want to know the current time.",
        }
    },
    {
        type: "function",
        function: {
            name: "get_current_weather",
            description: "Very useful when you want to query the weather of a specified city.",
            parameters: {
                type: "object",
                properties: {
                    location: {
                        type: "string",
                        description: "City or district, such as Singapore, Hangzhou, Yuhang District, etc.",
                    }
                },
                required: ["location"]
            }
        }
    }
];
const toolNames = tools.map(tool => tool.function.name);
console.log(`Created ${tools.length} tools: ${toolNames.join(', ')}\n`);
2. Create messages array
Just like a normal conversation with Qwen, you need to maintain a messages array to convey instructions and context to the LLM. This array should include both a System Message and a User Message before you initiate function calling.
System Message
Although the purpose and scenarios of the tools are described in Create tools array, highlighting when to activate tools in the System Message can further improve the LLM's accuracy. This example uses the following System Prompt:
You are a very helpful assistant. If the user asks about the weather, please call the 'get_current_weather' function;
If the user asks about the time, please call the 'get_current_time' function.
Please answer questions in a friendly tone.
User Message
The User Message passes the user's query. If the user asks "Shanghai weather," the messages array would be:
# Step 3: Create messages array
# Paste the following code after the code in Step 2
messages = [
    {
        "role": "system",
        "content": """You are a very helpful assistant. If the user asks about the weather, please call the 'get_current_weather' function;
If the user asks about the time, please call the 'get_current_time' function.
Please answer questions in a friendly tone.""",
    },
    {
        "role": "user",
        "content": "Shanghai weather"
    }
]
print("Messages array created\n")
// Step 3: Create messages array
// Paste the following code after the code in Step 2
const messages = [
    {
        role: "system",
        content: "You are a very helpful assistant. If the user asks about the weather, please call the 'get_current_weather' function; If the user asks about the time, please call the 'get_current_time' function. Please answer questions in a friendly tone.",
    },
    {
        role: "user",
        content: "Shanghai weather"
    }
];
console.log("Messages array created\n");
You can also ask about the current time.
3. Initiate function calling
With the tools and messages arrays prepared, use the following code to initiate a function call. The LLM will determine whether to invoke a tool and provide the necessary tool function and input parameters.
# Step 4: Initiate function calling
# Paste the following code after the code in Step 3
from openai import OpenAI
import os

client = OpenAI(
    # If the environment variable is not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",
)

def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",  # qwen-plus is used as an example. You can use other models. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
        messages=messages,
        tools=tools
    )
    print("Return object: ")
    print(completion.choices[0].message.model_dump_json())
    print("\n")
    return completion

print("Initiating function calling...")
completion = function_calling()
// Step 4: Initiate function calling
// Paste the following code after the code in Step 3
import OpenAI from "openai";

const openai = new OpenAI(
    {
        // If the environment variable is not configured, replace the following line with: apiKey: "sk-xxx",
        apiKey: process.env.DASHSCOPE_API_KEY,
        baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
    }
);

async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus", // qwen-plus is used as an example. You can use other models. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
        messages: messages,
        tools: tools
    });
    console.log("Return object: ");
    console.log(JSON.stringify(completion.choices[0].message));
    console.log("\n");
    return completion;
}

const completion = await functionCalling();
The LLM specifies the tool function to use in the tool_calls parameter, such as "get_current_weather", and provides the input parameters: "{\"location\": \"Shanghai\"}".
{
    "content": "",
    "refusal": null,
    "role": "assistant",
    "audio": null,
    "function_call": null,
    "tool_calls": [
        {
            "id": "call_6596dafa2a6a46f7a217da",
            "function": {
                "arguments": "{\"location\": \"Shanghai\"}",
                "name": "get_current_weather"
            },
            "type": "function",
            "index": 0
        }
    ]
}
Note that if the LLM decides no tool is required for the question, the tool_calls parameter will not be included in the response, and the LLM will give its response directly in the content parameter. For example, when the input is "Hello", the tool_calls parameter is null:
{
    "content": "Hello! How can I help you? If you have questions about the weather or time, I am particularly good at answering.",
    "refusal": null,
    "role": "assistant",
    "audio": null,
    "function_call": null,
    "tool_calls": null
}
If the tool_calls parameter is not returned, your program should directly return the content without further steps.
If you want the model to select a specific tool for each call, see Force calling.
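This branching logic can be sketched as follows. This is a minimal illustration: the helper name handle_message is hypothetical, and the mocked dict mirrors the structure of completion.choices[0].message shown above.

```python
# Sketch of the branching logic: if the model returned tool_calls,
# proceed to tool execution; otherwise return the content directly.
def handle_message(message):
    """Return ("answer", text) or ("tool_call", name, arguments_json)."""
    tool_calls = message.get("tool_calls")
    if not tool_calls:
        # No tool needed: the model answered directly in `content`
        return ("answer", message["content"])
    call = tool_calls[0]["function"]
    return ("tool_call", call["name"], call["arguments"])

# Example with a mocked response that requests a tool call
mock = {
    "content": "",
    "tool_calls": [{"id": "call_xxx", "type": "function",
                    "function": {"name": "get_current_weather",
                                 "arguments": '{"location": "Shanghai"}'}}],
}
print(handle_message(mock))
```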
4. Execute tool functions
After you have the tool function name and input parameters, execute the function to obtain its output.
The execution of tool functions occurs within your computing environment, not by the LLM.
Because the LLM outputs only strings, parse the tool function name and input parameters before execution.
Tool function
Create a function_mapper from the tool function name to the tool function entity.
Input parameters
The input parameters are JSON strings. Use a JSON parsing tool to parse them into JSON objects.
# Step 5: Run tool functions
# Paste the following code after the code in Step 4
import json

print("Executing tool functions...")
# Get the function name and input parameters from the returned result
function_name = completion.choices[0].message.tool_calls[0].function.name
arguments_string = completion.choices[0].message.tool_calls[0].function.arguments
# Use the json module to parse the parameter string
arguments = json.loads(arguments_string)
# Create a function mapping table
function_mapper = {
    "get_current_weather": get_current_weather,
    "get_current_time": get_current_time
}
# Get the function entity
function = function_mapper[function_name]
# If the input parameters are empty, call the function directly
if arguments == {}:
    function_output = function()
# Otherwise, call the function with the input parameters
else:
    function_output = function(arguments)
# Print the tool's output
print(f"Tool function output: {function_output}\n")
// Step 5: Run tool functions
// Paste the following code after the code in Step 4
console.log("Executing tool functions...");
// Get the function name and input parameters from the returned result
const function_name = completion.choices[0].message.tool_calls[0].function.name;
const arguments_string = completion.choices[0].message.tool_calls[0].function.arguments;
// Use JSON.parse to parse the parameter string
const args = JSON.parse(arguments_string);
// Create a function mapping table
const functionMapper = {
    "get_current_weather": getCurrentWeather,
    "get_current_time": getCurrentTime
};
// Get the function entity
const func = functionMapper[function_name];
// If the input parameters are empty, call the function directly; otherwise, call it with the input parameters
let functionOutput;
if (Object.keys(args).length === 0) {
    functionOutput = func();
} else {
    functionOutput = func(args);
}
// Print the tool's output
console.log(`Tool function output: ${functionOutput}\n`);
Sample response:
Tool function output: It is cloudy today in Shanghai.
You can use the tool function output as the final response. But if you want a more human-like response, use the LLM to summarize the tool function output.
5. Use LLM to summarize tool function output (optional)
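The full code at the end of this topic implements this step. As a minimal sketch (the helper name build_summary_messages is illustrative; the message dicts mirror the structures shown elsewhere in this topic), the summarization round appends the assistant message that contains tool_calls, plus a tool message carrying the tool output, to messages, and then calls the model again:

```python
# Sketch: build the follow-up messages for the summarization round.
# `assistant_message` mirrors completion.choices[0].message (mocked as a dict here)
# and `tool_output` is the string returned by the tool function.
def build_summary_messages(messages, assistant_message, tool_output):
    tool_call_id = assistant_message["tool_calls"][0]["id"]
    return messages + [
        assistant_message,  # the assistant message containing tool_calls
        # the tool message carries the tool output back to the model
        {"role": "tool", "tool_call_id": tool_call_id, "content": tool_output},
    ]

messages = [{"role": "user", "content": "Shanghai weather"}]
assistant_message = {
    "role": "assistant", "content": "",
    "tool_calls": [{"id": "call_xxx", "type": "function", "index": 0,
                    "function": {"name": "get_current_weather",
                                 "arguments": '{"location": "Shanghai"}'}}],
}
new_messages = build_summary_messages(
    messages, assistant_message, "It is cloudy today in Shanghai.")
# Pass new_messages to client.chat.completions.create(...) to get the final answer.
```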
Advanced usage
Streaming output
To enhance the user experience and minimize waiting time, you can use the streaming output mode to quickly retrieve the name of the needed tool function:
Tool function name: appears only in the first returned object.
Input parameter information: output as a continuous stream.
Streaming output allows for more flexible handling of function calling results. To implement it, replace the code from 3. Initiate function calling with the following:
def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=messages,
        tools=tools,
        stream=True
    )
    for chunk in completion:
        print(chunk.model_dump_json())

function_calling()
async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus",
        messages: messages,
        tools: tools,
        stream: true
    });
    for await (const chunk of completion) {
        console.log(JSON.stringify(chunk));
    }
}

functionCalling();
The tool function name is retrieved from the first returned object, while the input parameter information must be concatenated from all chunks before you can run the tool function.
{"id":"chatcmpl-3f8155c3-e96f-95bc-a2a6-8e48537a0893","choices":[{"delta":{"content":null,"function_call":null,"refusal":null,"role":"assistant","tool_calls":[{"index":0,"id":"call_5507104cabae4f64a0fdd3","function":{"arguments":"{\"location\":","name":"get_current_weather"},"type":"function"}]},"finish_reason":null,"index":0,"logprobs":null}],"created":1736251532,"model":"qwen-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-3f8155c3-e96f-95bc-a2a6-8e48537a0893","choices":[{"delta":{"content":null,"function_call":null,"refusal":null,"role":null,"tool_calls":[{"index":0,"id":"","function":{"arguments":" \"Shanghai\"}","name":""},"type":"function"}]},"finish_reason":null,"index":0,"logprobs":null}],"created":1736251532,"model":"qwen-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
{"id":"chatcmpl-3f8155c3-e96f-95bc-a2a6-8e48537a0893","choices":[{"delta":{"content":null,"function_call":null,"refusal":null,"role":null,"tool_calls":[{"index":0,"id":"","function":{"arguments":null,"name":null},"type":"function"}]},"finish_reason":"tool_calls","index":0,"logprobs":null}],"created":1736251532,"model":"qwen-plus","object":"chat.completion.chunk","service_tier":null,"system_fingerprint":null,"usage":null}
If you need to use the LLM to summarize the tool function output, the Assistant Message you add must use the following format:
{
    "content": "",
    "refusal": None,
    "role": "assistant",
    "audio": None,
    "function_call": None,
    "tool_calls": [
        {
            "id": "call_xxx",
            "function": {
                "arguments": '{"location": "Shanghai"}',
                "name": "get_current_weather",
            },
            "type": "function",
            "index": 0,
        }
    ],
}
The following elements must be replaced:
id: replace the id in tool_calls with choices[0].delta.tool_calls[0].id from the first returned object.
arguments: after concatenating the input parameter fragments, replace arguments in tool_calls.
name: replace the name in tool_calls with choices[0].delta.tool_calls[0].function.name from the first returned object.
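The assembly described above can be sketched as follows. This is an illustration, not the official SDK API: the helper name assemble_tool_call is hypothetical, and the chunk dicts mirror the delta.tool_calls structures in the streamed objects shown earlier.

```python
# Sketch: assemble one tool call from streamed chunks.
# Each element of `chunks` mirrors choices[0].delta in the stream.
def assemble_tool_call(chunks):
    tool_id, name, arguments = "", "", ""
    for delta in chunks:
        call = delta["tool_calls"][0]
        if call.get("id"):
            tool_id = call["id"]          # present only in the first chunk
        fn = call["function"]
        if fn.get("name"):
            name = fn["name"]             # present only in the first chunk
        if fn.get("arguments"):
            arguments += fn["arguments"]  # concatenate streamed fragments
    # Build the Assistant Message in the format shown above
    return {"content": "", "role": "assistant",
            "tool_calls": [{"id": tool_id, "type": "function", "index": 0,
                            "function": {"name": name, "arguments": arguments}}]}

chunks = [
    {"tool_calls": [{"id": "call_5507104cabae4f64a0fdd3",
                     "function": {"name": "get_current_weather",
                                  "arguments": '{"location":'}}]},
    {"tool_calls": [{"id": "", "function": {"name": "", "arguments": ' "Shanghai"}'}}]},
]
msg = assemble_tool_call(chunks)
print(msg["tool_calls"][0]["function"]["arguments"])  # {"location": "Shanghai"}
```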
Specify calling method
Concurrent calling
In the preceding sections, the query "Shanghai weather" requires only a single tool call. However, some queries require multiple calls, such as "How is the weather in Beijing, Tianjin, Shanghai, and Chongqing" or "Weather in Hangzhou and the current time". For such queries, only one call is returned after initiating function calling. Take "How is the weather in Beijing, Tianjin, Shanghai, and Chongqing" as an example:
{
    "content": "",
    "refusal": null,
    "role": "assistant",
    "audio": null,
    "function_call": null,
    "tool_calls": [
        {
            "id": "call_61a2bbd82a8042289f1ff2",
            "function": {
                "arguments": "{\"location\": \"Beijing\"}",
                "name": "get_current_weather"
            },
            "type": "function",
            "index": 0
        }
    ]
}
Only Beijing is returned. To solve this problem, set parallel_tool_calls to true when initiating function calling. The returned object will then contain all required functions and request parameters.
def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",  # qwen-plus is used as an example. You can use other models.
        messages=messages,
        tools=tools,
        # New parameter
        parallel_tool_calls=True
    )
    print("Return object: ")
    print(completion.choices[0].message.model_dump_json())
    print("\n")
    return completion

print("Initiating function calling...")
completion = function_calling()
async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus", // qwen-plus is used as an example. You can use other models.
        messages: messages,
        tools: tools,
        parallel_tool_calls: true
    });
    console.log("Return object: ");
    console.log(JSON.stringify(completion.choices[0].message));
    console.log("\n");
    return completion;
}

const completion = await functionCalling();
The returned tool_calls array contains the request parameters of all four cities:
{
    "content": "",
    "role": "assistant",
    "tool_calls": [
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Beijing\"}"
            },
            "index": 0,
            "id": "call_c2d8a3a24c4d4929b26ae2",
            "type": "function"
        },
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Tianjin\"}"
            },
            "index": 1,
            "id": "call_dc7f2f678f1944da9194cd",
            "type": "function"
        },
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Shanghai\"}"
            },
            "index": 2,
            "id": "call_55c95dd718d94d9789c7c0",
            "type": "function"
        },
        {
            "function": {
                "name": "get_current_weather",
                "arguments": "{\"location\": \"Chongqing\"}"
            },
            "index": 3,
            "id": "call_98a0cc7fded64b3ba88251",
            "type": "function"
        }
    ]
}
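With multiple entries in tool_calls, your program should loop over them, run each tool, and append one tool message per call. Here is a minimal sketch, reusing the get_current_weather mock and the function_mapper pattern from the earlier steps (the tool_calls list is trimmed to two cities for brevity):

```python
import json
import random

# Mock weather tool from Step 1
def get_current_weather(arguments):
    weather = random.choice(["sunny", "cloudy", "rainy"])
    return f"It is {weather} today in {arguments['location']}."

function_mapper = {"get_current_weather": get_current_weather}

# tool_calls mirrors the returned array above (trimmed to two cities)
tool_calls = [
    {"id": "call_1", "function": {"name": "get_current_weather",
                                  "arguments": '{"location": "Beijing"}'}},
    {"id": "call_2", "function": {"name": "get_current_weather",
                                  "arguments": '{"location": "Tianjin"}'}},
]

tool_messages = []
for call in tool_calls:
    func = function_mapper[call["function"]["name"]]
    args = json.loads(call["function"]["arguments"])
    output = func(args) if args else func()
    # One tool message per tool call, linked via tool_call_id
    tool_messages.append({"role": "tool", "tool_call_id": call["id"],
                          "content": output})
print(len(tool_messages))  # 2
```

Appending these tool messages (after the assistant message that contains tool_calls) lets the model summarize all results in one follow-up request.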
Force calling
Content generated by the LLM can be unpredictable, so the LLM may sometimes call inappropriate tools. To ensure that the LLM adheres to a specific strategy for certain questions (such as using a particular tool, using at least one tool, or using no tool), modify the tool_choice parameter.
The default value of tool_choice is "auto", which means the LLM decides which tool to call.
If you need to use the LLM to summarize the tool function output, omit the tool_choice parameter when initiating the summary request. Otherwise, the LLM will continue to return calling details.
Force the use of a specific tool
If you want to force the calling of a specific tool for certain questions, set the tool_choice parameter to {"type": "function", "function": {"name": "the_function_to_call"}}. This way, the LLM does not participate in tool selection and only provides the input parameters. For example, if the scenario is limited to weather-related questions, you can modify function_calling to:
def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=messages,
        tools=tools,
        tool_choice={"type": "function", "function": {"name": "get_current_weather"}}
    )
    print(completion.model_dump_json())

function_calling()
async function functionCalling() {
    const response = await openai.chat.completions.create({
        model: "qwen-plus",
        messages: messages,
        tools: tools,
        tool_choice: {"type": "function", "function": {"name": "get_current_weather"}}
    });
    console.log("Return object: ");
    console.log(JSON.stringify(response.choices[0].message));
    console.log("\n");
    return response;
}

const response = await functionCalling();
No matter what question is asked, the tool function in the return object will always be get_current_weather.
Make sure that the questions are related to the selected tool to avoid unexpected results.
Force the use of at least one tool
If you want to ensure that at least one tool is called no matter what the question is (that is, the tool_calls parameter in the returned object is not empty), you can set the tool_choice parameter to "required". At least one tool and its input parameters will then always be returned. For example, if the scenario always requires tools, you can modify function_calling to:
def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=messages,
        tools=tools,
        tool_choice="required"
    )
    print(completion.model_dump_json())

function_calling()
async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus",
        messages: messages,
        tools: tools,
        tool_choice: "required"
    });
    console.log("Return object: ");
    console.log(JSON.stringify(completion.choices[0].message));
    console.log("\n");
    return completion;
}

const completion = await functionCalling();
No matter what question is asked, the tool_calls parameter in the return object will always be populated.
Make sure that the questions are related to the available tools to avoid unexpected results.
Force no tool
If you want to ensure that no tool is used no matter what the question is (that is, the return object contains content but tool_calls is empty), you can either set the tool_choice parameter to "none" or omit the tools parameter. In either case, the tool_calls parameter in the return will always be empty. For example, if the scenario never requires tools, you can modify function_calling to:
def function_calling():
    completion = client.chat.completions.create(
        model="qwen-plus",
        messages=messages,
        tools=tools,
        tool_choice="none"
    )
    print(completion.model_dump_json())

function_calling()
async function functionCalling() {
    const completion = await openai.chat.completions.create({
        model: "qwen-plus",
        messages: messages,
        tools: tools,
        tool_choice: "none"
    });
    console.log("Return object: ");
    console.log(JSON.stringify(completion.choices[0].message));
    console.log("\n");
    return completion;
}

const completion = await functionCalling();
Billing details
To initiate function calling, you must specify the tools and messages parameters. Billing is based not only on the content of the messages parameter, but also on the input tokens derived from the tool descriptions in the tools parameter.
Full code
OpenAI compatible
You can use the OpenAI SDK or the OpenAI-compatible HTTP method to initiate function calling with Qwen models.
Python
Sample code
from openai import OpenAI
from datetime import datetime
import json
import os
import random
client = OpenAI(
    # If the environment variable is not configured, replace the following line with: api_key="sk-xxx",
    api_key=os.getenv("DASHSCOPE_API_KEY"),
    base_url="https://dashscope-intl.aliyuncs.com/compatible-mode/v1",  # The DashScope compatible-mode base_url
)
# Define the tool list. The model will refer to the tool's name and description when choosing which tool to use
tools = [
    # Tool 1: Get the current time
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Very useful when you want to know the current time.",
            # Since getting the current time does not require input parameters, parameters is an empty dictionary
            "parameters": {}
        }
    },
    # Tool 2: Get the weather of a specified city
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Very useful when you want to query the weather of a specified city.",
            "parameters": {
                "type": "object",
                "properties": {
                    # When querying the weather, you need to provide the location, so the parameter is set to location
                    "location": {
                        "type": "string",
                        "description": "City or district, such as Singapore, Hangzhou, Yuhang District, etc."
                    }
                },
                "required": [
                    "location"
                ]
            }
        }
    }
]
# Simulate the weather query tool. Sample return: "It is rainy today in Singapore."
def get_current_weather(arguments):
    # Define a weather conditions list
    weather_conditions = ["sunny", "cloudy", "rainy"]
    # Pick a weather condition randomly
    random_weather = random.choice(weather_conditions)
    # Extract location from JSON
    location = arguments["location"]
    # Return formatted weather information
    return f"It is {random_weather} today in {location}."

# Tool to query the current time. Sample return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
    # Get the current date and time
    current_datetime = datetime.now()
    # Format the current date and time
    formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
    # Return the formatted current time
    return f"Current time: {formatted_time}."
# Encapsulate the model response function
def get_response(messages):
    completion = client.chat.completions.create(
        model="qwen-plus",  # qwen-plus is used as an example. You can use other models. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
        messages=messages,
        tools=tools
    )
    return completion
def call_with_messages():
    print('\n')
    messages = [
        {
            "content": input('Please enter: '),  # Example questions: "What time is it now?" "What time is it an hour later?" "How is the weather in Singapore?"
            "role": "user"
        }
    ]
    print("-" * 60)
    # The first round of model invocation
    i = 1
    first_response = get_response(messages)
    assistant_output = first_response.choices[0].message
    print(f"\nThe output information of the LLM in round {i}: {first_response}\n")
    if assistant_output.content is None:
        assistant_output.content = ""
    messages.append(assistant_output)
    # If no tool invocation is needed, return the final answer directly
    if assistant_output.tool_calls is None:  # If the model judges that no tool invocation is needed, directly print the assistant's reply without a second round of model invocation
        print(f"No tool invocation is needed, I can reply directly: {assistant_output.content}")
        return
    # If tool invocation is needed, perform multiple rounds of model invocation until the model judges that no tool invocation is needed
    while assistant_output.tool_calls is not None:
        # If the weather query tool needs to be invoked, run the weather query tool
        tool_info = {"content": "", "role": "tool", "tool_call_id": assistant_output.tool_calls[0].id}
        if assistant_output.tool_calls[0].function.name == "get_current_weather":
            # Extract location parameter information
            arguments = json.loads(assistant_output.tool_calls[0].function.arguments)
            tool_info["content"] = get_current_weather(arguments)
        # If the time query tool needs to be invoked, run the time query tool
        elif assistant_output.tool_calls[0].function.name == 'get_current_time':
            tool_info["content"] = get_current_time()
        tool_output = tool_info["content"]
        print(f"Tool output information: {tool_output}\n")
        print("-" * 60)
        messages.append(tool_info)
        assistant_output = get_response(messages).choices[0].message
        if assistant_output.content is None:
            assistant_output.content = ""
        messages.append(assistant_output)
        i += 1
        print(f"The output information of the LLM in round {i}: {assistant_output}\n")
    print(f"Final answer: {assistant_output.content}")

if __name__ == '__main__':
    call_with_messages()
Sample response
Enter "What time is it?" and the program outputs:
Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.
Input: How is the weather in Hangzhou?
{
    'id': 'chatcmpl-e2f045fd-2604-9cdb-bb61-37c805ecd15a',
    'choices': [
        {
            'finish_reason': 'tool_calls',
            'index': 0,
            'logprobs': None,
            'message': {
                'content': '',
                'role': 'assistant',
                'function_call': None,
                'tool_calls': [
                    {
                        'id': 'call_7a33ebc99d5342969f4868',
                        'function': {
                            'arguments': '{"location": "Hangzhou"}',
                            'name': 'get_current_weather'
                        },
                        'type': 'function',
                        'index': 0
                    }
                ]
            }
        }
    ],
    'created': 1726049697,
    'model': 'qwen-max',
    'object': 'chat.completion',
    'service_tier': None,
    'system_fingerprint': None,
    'usage': {
        'completion_tokens': 18,
        'prompt_tokens': 217,
        'total_tokens': 235
    }
}
Input: Hello
{
'id': 'chatcmpl-5d890637-9211-9bda-b184-961acf3be38d',
'choices': [
{
'finish_reason': 'stop',
'index': 0,
'logprobs': None,
'message': {
'content': 'Hello! How can I help you?',
'role': 'assistant',
'function_call': None,
'tool_calls': None
}
}
],
'created': 1726049765,
'model': 'qwen-max',
'object': 'chat.completion',
'service_tier': None,
'system_fingerprint': None,
'usage': {
'completion_tokens': 7,
'prompt_tokens': 216,
'total_tokens': 223
}
}
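As the sample outputs above show, the only structural difference between a tool-call round and a direct reply is whether the message carries `tool_calls`. The sketch below (the helper name is illustrative, not part of any SDK) shows one way to branch on this and parse the arguments string, using the two message shapes from the samples:

```python
import json

def extract_tool_call(message: dict):
    """Return (function_name, arguments_dict) if the message requests a
    tool call, or None if the model answered directly."""
    tool_calls = message.get("tool_calls")
    if not tool_calls:  # key absent or None, as in the "Hello" sample
        return None
    function = tool_calls[0]["function"]
    # arguments arrive as a JSON string and must be parsed before use
    return function["name"], json.loads(function["arguments"])

# Message shapes taken from the two sample responses above
weather_msg = {
    "role": "assistant",
    "content": "",
    "tool_calls": [{
        "id": "call_7a33ebc99d5342969f4868",
        "type": "function",
        "function": {"name": "get_current_weather",
                     "arguments": '{"location": "Hangzhou"}'},
    }],
}
greeting_msg = {"role": "assistant",
                "content": "Hello! How can I help you?",
                "tool_calls": None}

print(extract_tool_call(weather_msg))   # ('get_current_weather', {'location': 'Hangzhou'})
print(extract_tool_call(greeting_msg))  # None
```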
Node.js
Sample code
import OpenAI from "openai";
import { format } from 'date-fns';
import readline from 'readline';
function getCurrentWeather(location) {
return `${location} is rainy today.`;
}
function getCurrentTime() {
// Obtain the current date and time
const currentDatetime = new Date();
// Format the current date and time
const formattedTime = format(currentDatetime, 'yyyy-MM-dd HH:mm:ss');
// Return the formatted current time
return `Current time: ${formattedTime}.`;
}
const openai = new OpenAI(
{
// If the environment variable is not configured, replace the following line with your Model Studio API key: apiKey: "sk-xxx",
apiKey: process.env.DASHSCOPE_API_KEY,
baseURL: "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
}
);
const tools = [
// Tool 1 Obtain the current time
{
"type": "function",
"function": {
"name": "getCurrentTime",
"description": "Useful when you want to know the current time.",
// Because obtaining the current time does not require input parameters, parameters are empty
"parameters": {}
}
},
// Tool 2 Obtain the weather of a specified city
{
"type": "function",
"function": {
"name": "getCurrentWeather",
"description": "Useful when you want to query the weather of a specified city.",
"parameters": {
"type": "object",
"properties": {
// When querying the weather, a location must be provided, so the parameter is set to location
"location": {
"type": "string",
"description": "City or district, such as Singapore, Hangzhou, Yuhang District, etc."
}
},
"required": ["location"]
}
}
}
];
async function getResponse(messages) {
const response = await openai.chat.completions.create({
model: "qwen-plus", // qwen-plus is used as an example. You can use other models. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
messages: messages,
tools: tools,
});
return response;
}
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout
});
rl.question("user: ", async (question) => {
const messages = [{"role": "user","content": question}];
let i = 1;
const firstResponse = await getResponse(messages);
let assistantOutput = firstResponse.choices[0].message;
console.log(`Round ${i} model output information: ${JSON.stringify(assistantOutput)}`);
if (Object.is(assistantOutput.content,null)){
assistantOutput.content = "";
}
messages.push(assistantOutput);
if (!("tool_calls" in assistantOutput)) {
console.log(`No need to invoke tools, I can reply directly: ${assistantOutput.content}`);
rl.close();
} else{
while ("tool_calls" in assistantOutput) {
let toolInfo = {};
if (assistantOutput.tool_calls[0].function.name == "getCurrentWeather" ) {
toolInfo = {"role": "tool"};
let location = JSON.parse(assistantOutput.tool_calls[0].function.arguments)["location"];
toolInfo["content"] = getCurrentWeather(location);
} else if (assistantOutput.tool_calls[0].function.name == "getCurrentTime" ) {
toolInfo = {"role":"tool"};
toolInfo["content"] = getCurrentTime();
}
console.log(`Tool output information: ${JSON.stringify(toolInfo)}`);
console.log("=".repeat(100));
messages.push(toolInfo);
assistantOutput = (await getResponse(messages)).choices[0].message;
if (Object.is(assistantOutput.content,null)){
assistantOutput.content = "";
}
messages.push(assistantOutput);
i += 1;
console.log(`Round ${i} model output information: ${JSON.stringify(assistantOutput)}`)
}
console.log("=".repeat(100));
console.log(`Final model output information: ${JSON.stringify(assistantOutput.content)}`);
rl.close();
}});
Sample response
Enter "How is the weather in Beijing, Tianjin, Shanghai, and Chongqing?" and the program outputs:
Round 1 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Beijing\"}"},"index":0,"id":"call_d2aff21240b24c7291db6d","type":"function"}]}
Tool output information: {"role":"tool","content":"It is rainy in Beijing today."}
====================================================================================================
Round 2 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Tianjin\"}"},"index":0,"id":"call_bdcfa937e69b4eae997b5e","type":"function"}]}
Tool output information: {"role":"tool","content":"It is rainy in Tianjin today."}
====================================================================================================
Round 3 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Shanghai\"}"},"index":0,"id":"call_bbf22d017e8e439e811974","type":"function"}]}
Tool output information: {"role":"tool","content":"It is rainy in Shanghai today."}
====================================================================================================
Round 4 model output information: {"content":"","role":"assistant","tool_calls":[{"function":{"name":"getCurrentWeather","arguments":"{\"location\": \"Chongqing\"}"},"index":0,"id":"call_f4f8e149af01492fb60162","type":"function"}]}
Tool output information: {"role":"tool","content":"It is rainy in Chongqing today."}
====================================================================================================
Round 5 model output information: {"content":"The weather in all four municipalities (Beijing, Tianjin, Shanghai, Chongqing) is rainy today. Don't forget to bring an umbrella!","role":"assistant"}
====================================================================================================
Final model output information: "The weather in all four municipalities (Beijing, Tianjin, Shanghai, Chongqing) is rainy today. Don't forget to bring an umbrella!"
HTTP
Sample code
import requests
import os
from datetime import datetime
import json
# Define the tool list. The model will refer to the tool's name and description when choosing which tool to use
tools = [
# Tool 1 Get the current time
{
"type": "function",
"function": {
"name": "get_current_time",
"description": "Very useful when you want to know the current time.",
"parameters": {} # Since getting the current time does not require input parameters, parameters is an empty dictionary
}
},
# Tool 2 Get the weather of a specified city
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Very useful when you want to query the weather of a specified city.",
"parameters": { # When querying the weather, you need to provide the location, so the parameter is set to location
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City or district, such as Singapore, Hangzhou, Yuhang District, etc."
}
},
"required": ["location"] # "required" belongs inside the parameters schema
}
}
}
]
# Simulate the weather query tool. Sample return: "It is sunny today in Singapore."
def get_current_weather(location):
return f"It is sunny today in {location}."
# Tool to query the current time. Sample return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
# Get the current date and time
current_datetime = datetime.now()
# Format the current date and time
formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
# Return the formatted current time
return f"Current time: {formatted_time}."
def get_response(messages):
# If the environment variable is not configured, replace the following line with: api_key="sk-xxx",
api_key = os.getenv("DASHSCOPE_API_KEY")
url = 'https://dashscope-intl.aliyuncs.com/compatible-mode/v1/chat/completions'
headers = {'Content-Type': 'application/json',
'Authorization':f'Bearer {api_key}'}
body = {
'model': 'qwen-plus',
"messages": messages,
"tools":tools
}
response = requests.post(url, headers=headers, json=body)
return response.json()
def call_with_messages():
messages = [
{
"content": input('Please enter: '), # Example questions: "What time is it now?" "What time will it be in an hour?" "How is the weather in Singapore?"
"role": "user"
}
]
# The first round of model invocation
first_response = get_response(messages)
print(f"\nFirst round result: {first_response}")
assistant_output = first_response['choices'][0]['message']
if assistant_output['content'] is None:
assistant_output['content'] = ""
messages.append(assistant_output)
if 'tool_calls' not in assistant_output: # If the model judges that no tool invocation is needed, directly print the assistant's reply without the second round of model invocation
print(f"Final answer: {assistant_output['content']}")
return
# If the model chooses the get_current_weather tool
elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_weather':
tool_info = {"name": "get_current_weather", "role":"tool"}
location = json.loads(assistant_output['tool_calls'][0]['function']['arguments'])['location']
tool_info['content'] = get_current_weather(location)
# If the model chooses the get_current_time tool
elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_time':
tool_info = {"name": "get_current_time", "role":"tool"}
tool_info['content'] = get_current_time()
print(f"Tool output information: {tool_info['content']}")
messages.append(tool_info)
# The second round of model invocation, summarizing the tool's output
second_response = get_response(messages)
print(f"Second round result: {second_response}")
print(f"Final answer: {second_response['choices'][0]['message']['content']}")
if __name__ == '__main__':
call_with_messages()
Sample response
Enter "How is the weather in Hangzhou?" and the program outputs:
Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.
Input: How is the weather in Hangzhou?
{
'choices': [
{
'message': {
'content': '',
'role': 'assistant',
'tool_calls': [
{
'function': {
'name': 'get_current_weather',
'arguments': '{"location": "Hangzhou"}'
},
'index': 0,
'id': 'call_416cd81b8e7641edb654c4',
'type': 'function'
}
]
},
'finish_reason': 'tool_calls',
'index': 0,
'logprobs': None
}
],
'object': 'chat.completion',
'usage': {
'prompt_tokens': 217,
'completion_tokens': 18,
'total_tokens': 235
},
'created': 1726050222,
'system_fingerprint': None,
'model': 'qwen-max',
'id': 'chatcmpl-61e30855-ee69-93ab-98d5-4194c51a9980'
}
Input: Hello
{
'choices': [
{
'message': {
'content': 'Hello! How can I help you?',
'role': 'assistant'
},
'finish_reason': 'stop',
'index': 0,
'logprobs': None
}
],
'object': 'chat.completion',
'usage': {
'prompt_tokens': 216,
'completion_tokens': 7,
'total_tokens': 223
},
'created': 1726050238,
'system_fingerprint': None,
'model': 'qwen-max',
'id': 'chatcmpl-2f2f86d1-bc4e-9494-baca-aac5b0555091'
}
DashScope
You can use the DashScope SDK or the HTTP method to initiate function calling with Qwen models.
Python
Sample code
import os
from dashscope import Generation
from datetime import datetime
import random
import json
import dashscope
dashscope.base_http_api_url = 'https://dashscope-intl.aliyuncs.com/api/v1'
# Define the tool list. The model will refer to the tool's name and description when selecting which tool to use.
tools = [
# Tool 1: Obtain the current time
{
"type": "function",
"function": {
"name": "get_current_time",
"description": "Useful when you want to know the current time.",
"parameters": {} # Since obtaining the current time requires no input parameters, parameters is an empty dictionary.
}
},
# Tool 2: Obtain the weather of a specified city
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Useful when you want to query the weather of a specified city.",
"parameters": {
# When querying the weather, you need to provide a location, so the parameter is set to location.
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City or district, such as Singapore, Hangzhou, Yuhang District, etc."
}
},
"required": ["location"] # "required" belongs inside the parameters schema
}
}
}
]
# Simulate a weather query tool. Sample return: "It is sunny today in Singapore."
def get_current_weather(location):
return f"It is sunny today in {location}."
# Tool to query the current time. Sample return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
# Obtain the current date and time
current_datetime = datetime.now()
# Format the current date and time
formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
# Return the formatted current time
return f"Current time: {formatted_time}."
# Encapsulate the model response function
def get_response(messages):
response = Generation.call(
# If the environment variable is not configured, replace the line below with: api_key="sk-xxx",
api_key=os.getenv("DASHSCOPE_API_KEY"),
model='qwen-plus', # qwen-plus is used as an example. You can use other models. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
messages=messages,
tools=tools,
seed=random.randint(1, 10000), # Set the random number seed. If not set, the default random number seed is 1234.
result_format='message' # Set the output to message format
)
return response
def call_with_messages():
print('\n')
messages = [
{
"content": input('Please enter: '), # Example questions: "What time is it now?" "What time will it be in an hour?" "How is the weather in Singapore?"
"role": "user"
}
]
# The first round of model invocation
first_response = get_response(messages)
assistant_output = first_response.output.choices[0].message
print(f"\nFirst round output from the model: {first_response}\n")
messages.append(assistant_output)
if 'tool_calls' not in assistant_output: # If the model determines that no tool needs to be called, print the assistant's reply directly without a second round of model invocation.
print(f"Final answer: {assistant_output.content}")
return
# If the tool chosen by the model is get_current_weather
elif assistant_output.tool_calls[0]['function']['name'] == 'get_current_weather':
tool_info = {"name": "get_current_weather", "role":"tool"}
location = json.loads(assistant_output.tool_calls[0]['function']['arguments'])['location']
tool_info['content'] = get_current_weather(location)
# If the tool chosen by the model is get_current_time
elif assistant_output.tool_calls[0]['function']['name'] == 'get_current_time':
tool_info = {"name": "get_current_time", "role":"tool"}
tool_info['content'] = get_current_time()
print(f"Tool output information: {tool_info['content']}\n")
messages.append(tool_info)
# The second round of model invocation, summarizing the tool's output
second_response = get_response(messages)
print(f"Second round output from the model: {second_response}\n")
print(f"Final answer: {second_response.output.choices[0].message['content']}")
if __name__ == '__main__':
call_with_messages()
Sample response
Enter a question and get the response:
Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.
Input: Hangzhou weather
{
"status_code": 200,
"request_id": "33cf0a53-ea38-9f47-8fce-b93b55d86573",
"code": "",
"message": "",
"output": {
"text": null,
"finish_reason": null,
"choices": [
{
"finish_reason": "tool_calls",
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"function": {
"name": "get_current_weather",
"arguments": "{\"location\": \"Hangzhou\"}"
},
"index": 0,
"id": "call_9f62f52f3a834a8194f634",
"type": "function"
}
]
}
}
]
},
"usage": {
"input_tokens": 217,
"output_tokens": 18,
"total_tokens": 235
}
}
Input: Hello
{
"status_code": 200,
"request_id": "4818ce03-e7c9-96de-a7bc-781649d98465",
"code": "",
"message": "",
"output": {
"text": null,
"finish_reason": null,
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": "Hello! How can I assist you?"
}
}
]
},
"usage": {
"input_tokens": 216,
"output_tokens": 7,
"total_tokens": 223
}
}
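Note the envelope difference visible in these samples versus the OpenAI-compatible responses earlier: DashScope nests the message under `output.choices` rather than `choices`, and reports token counts as `input_tokens`/`output_tokens` instead of `prompt_tokens`/`completion_tokens`. A minimal sketch of reading both shapes (the helper name is illustrative; field paths are copied from the sample responses in this topic):

```python
def get_message(response: dict) -> dict:
    """Return the assistant message from either response envelope."""
    if "output" in response:  # DashScope-style envelope
        return response["output"]["choices"][0]["message"]
    # OpenAI-compatible envelope
    return response["choices"][0]["message"]

dashscope_resp = {
    "output": {"choices": [{"finish_reason": "stop",
               "message": {"role": "assistant", "content": "Hello!"}}]},
    "usage": {"input_tokens": 216, "output_tokens": 7},
}
openai_resp = {
    "choices": [{"finish_reason": "stop",
                 "message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 216, "completion_tokens": 7},
}

# Both envelopes yield the same message dict
print(get_message(dashscope_resp)["content"])  # Hello!
print(get_message(openai_resp)["content"])     # Hello!
```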
Java
Sample code
// Copyright (c) Alibaba, Inc. and its affiliates.
// version >= 2.12.0
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import com.alibaba.dashscope.aigc.conversation.ConversationParam.ResultFormat;
import com.alibaba.dashscope.aigc.generation.Generation;
import com.alibaba.dashscope.aigc.generation.GenerationOutput.Choice;
import com.alibaba.dashscope.aigc.generation.GenerationParam;
import com.alibaba.dashscope.aigc.generation.GenerationResult;
import com.alibaba.dashscope.common.Message;
import com.alibaba.dashscope.common.Role;
import com.alibaba.dashscope.exception.ApiException;
import com.alibaba.dashscope.exception.InputRequiredException;
import com.alibaba.dashscope.exception.NoApiKeyException;
import com.alibaba.dashscope.tools.FunctionDefinition;
import com.alibaba.dashscope.tools.ToolCallBase;
import com.alibaba.dashscope.tools.ToolCallFunction;
import com.alibaba.dashscope.tools.ToolFunction;
import com.alibaba.dashscope.utils.JsonUtils;
import com.alibaba.dashscope.protocol.Protocol;
import com.fasterxml.jackson.databind.node.ObjectNode;
import com.github.victools.jsonschema.generator.Option;
import com.github.victools.jsonschema.generator.OptionPreset;
import com.github.victools.jsonschema.generator.SchemaGenerator;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfig;
import com.github.victools.jsonschema.generator.SchemaGeneratorConfigBuilder;
import com.github.victools.jsonschema.generator.SchemaVersion;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Scanner;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
public class Main {
public static class GetWeatherTool {
private String location;
public GetWeatherTool(String location) {
this.location = location;
}
public String callWeather() {
// Assume location is a JSON string, for example {"location": "Singapore"}
// Need to extract the value of "location"
try {
// Use the Jackson library to parse JSON
ObjectMapper objectMapper = new ObjectMapper();
JsonNode jsonNode = objectMapper.readTree(location);
String locationName = jsonNode.get("location").asText();
return locationName + " is sunny today";
} catch (Exception e) {
// If parsing fails, return the original string
return location + " is sunny today";
}
}
}
public static class GetTimeTool {
public String getCurrentTime() {
LocalDateTime now = LocalDateTime.now();
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
return "Current time: " + now.format(formatter) + ".";
}
}
private static ObjectNode generateSchema(Class<?> clazz) {
SchemaGeneratorConfigBuilder configBuilder =
new SchemaGeneratorConfigBuilder(SchemaVersion.DRAFT_2020_12, OptionPreset.PLAIN_JSON);
SchemaGeneratorConfig config = configBuilder.with(Option.EXTRA_OPEN_API_FORMAT_VALUES)
.without(Option.FLATTENED_ENUMS_FROM_TOSTRING).build();
SchemaGenerator generator = new SchemaGenerator(config);
return generator.generateSchema(clazz);
}
public static void selectTool()
throws NoApiKeyException, ApiException, InputRequiredException {
ObjectNode jsonSchemaWeather = generateSchema(GetWeatherTool.class);
ObjectNode jsonSchemaTime = generateSchema(GetTimeTool.class);
FunctionDefinition fdWeather = FunctionDefinition.builder().name("get_current_weather")
.description("Obtain the weather of a specified area")
.parameters(JsonUtils.parseString(jsonSchemaWeather.toString()).getAsJsonObject()).build();
FunctionDefinition fdTime = FunctionDefinition.builder().name("get_current_time")
.description("Obtain the current time")
.parameters(JsonUtils.parseString(jsonSchemaTime.toString()).getAsJsonObject()).build();
Message systemMsg = Message.builder().role(Role.SYSTEM.getValue())
.content("You are a helpful assistant. When asked a question, use tools wherever possible.")
.build();
Scanner scanner = new Scanner(System.in);
System.out.println("Please enter: ");
String userInput = scanner.nextLine();
Message userMsg = Message.builder().role(Role.USER.getValue()).content(userInput).build();
List<Message> messages = new ArrayList<>(Arrays.asList(systemMsg, userMsg));
GenerationParam param = GenerationParam.builder()
// qwen-plus is used as an example. You can use other models. Model list: https://www.alibabacloud.com/help/en/model-studio/getting-started/models
.model("qwen-plus")
// If the environment variable is not configured, replace the line below with: .apiKey("sk-xxx")
.apiKey(System.getenv("DASHSCOPE_API_KEY"))
.messages(messages).resultFormat(ResultFormat.MESSAGE)
.tools(Arrays.asList(
ToolFunction.builder().function(fdWeather).build(),
ToolFunction.builder().function(fdTime).build()
)).build();
Generation gen = new Generation(Protocol.HTTP.getValue(), "https://dashscope-intl.aliyuncs.com/api/v1");
GenerationResult result = gen.call(param);
System.out.println("First round output: " + JsonUtils.toJson(result));
boolean needToolCall = true;
while (needToolCall) {
needToolCall = false;
for (Choice choice : result.getOutput().getChoices()) {
messages.add(choice.getMessage());
if (choice.getMessage().getToolCalls() != null) {
for (ToolCallBase toolCall : choice.getMessage().getToolCalls()) {
if (toolCall.getType().equals("function")) {
String functionName = ((ToolCallFunction) toolCall).getFunction().getName();
String functionArgument = ((ToolCallFunction) toolCall).getFunction().getArguments();
if (functionName.equals("get_current_weather")) {
GetWeatherTool weatherTool = new GetWeatherTool(functionArgument);
String weather = weatherTool.callWeather();
Message toolResultMessage = Message.builder().role("tool")
.content(weather).toolCallId(toolCall.getId()).build();
messages.add(toolResultMessage);
System.out.println("Tool output information: " + weather);
} else if (functionName.equals("get_current_time")) {
GetTimeTool timeTool = new GetTimeTool();
String time = timeTool.getCurrentTime();
Message toolResultMessage = Message.builder().role("tool")
.content(time).toolCallId(toolCall.getId()).build();
messages.add(toolResultMessage);
System.out.println("Tool output information: " + time);
}
needToolCall = true;
}
}
} else {
System.out.println("Final answer: " + choice.getMessage().getContent());
return;
}
}
if (needToolCall) {
param.setMessages(messages);
result = gen.call(param);
System.out.println("Next round output: " + JsonUtils.toJson(result));
}
}
System.out.println("Final answer: " + result.getOutput().getChoices().get(0).getMessage().getContent());
}
public static void main(String[] args) {
try {
selectTool();
} catch (ApiException | NoApiKeyException | InputRequiredException e) {
System.out.println(String.format("Exception: %s", e.getMessage()));
} catch (Exception e) {
System.out.println(String.format("Exception: %s", e.getMessage()));
}
System.exit(0);
}
}
Sample response
Enter a question and get the response:
Below are the model's return details during the function call process (round 1). For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.
Input: Hangzhou weather
{
"requestId": "e2faa5cf-1707-973b-b216-36aa4ef52afc",
"usage": {
"input_tokens": 254,
"output_tokens": 19,
"total_tokens": 273
},
"output": {
"choices": [
{
"finish_reason": "tool_calls",
"message": {
"role": "assistant",
"content": "",
"tool_calls": [
{
"type": "function",
"id": "",
"function": {
"name": "get_current_weather",
"arguments": "{\"location\": \"Hangzhou\"}"
}
}
]
}
}
]
}
}
Input: Hello
{
"requestId": "f6ca3828-3b5f-99bf-8bae-90b4aa88923f",
"usage": {
"input_tokens": 253,
"output_tokens": 7,
"total_tokens": 260
},
"output": {
"choices": [
{
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": "Hello! How can I assist you?"
}
}
]
}
}
HTTP
Sample code
import requests
import os
from datetime import datetime
import json
# Define the tool list. The model will refer to the tool's name and description when selecting which tool to use.
tools = [
# Tool 1: Obtain the current time
{
"type": "function",
"function": {
"name": "get_current_time",
"description": "Useful when you want to know the current time.",
"parameters": {} # Since obtaining the current time requires no input parameters, parameters is an empty dictionary.
}
},
# Tool 2: Obtain the weather of a specified city
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Useful when you want to query the weather of a specified city.",
"parameters": { # When querying the weather, you need to provide a location, so the parameter is set to location.
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City or district, such as Singapore, Hangzhou, Yuhang District, etc."
}
},
"required": ["location"] # "required" belongs inside the parameters schema
}
}
}
]
# Simulate a weather query tool. Sample return: "It is sunny today in Singapore."
def get_current_weather(location):
return f"It is sunny today in {location}."
# Tool to query the current time. Sample return: "Current time: 2024-04-15 17:15:18."
def get_current_time():
# Obtain the current date and time
current_datetime = datetime.now()
# Format the current date and time
formatted_time = current_datetime.strftime('%Y-%m-%d %H:%M:%S')
# Return the formatted current time
return f"Current time: {formatted_time}."
def get_response(messages):
api_key = os.getenv("DASHSCOPE_API_KEY")
url = 'https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation'
headers = {'Content-Type': 'application/json',
'Authorization':f'Bearer {api_key}'}
body = {
'model': 'qwen-plus',
"input": {
"messages": messages
},
"parameters": {
"result_format": "message",
"tools": tools
}
}
response = requests.post(url, headers=headers, json=body)
return response.json()
def call_with_messages():
messages = [
{
"content": input('Please enter: '), # Example questions: "What time is it now?" "What time will it be in an hour?" "How is the weather in Singapore?"
"role": "user"
}
]
# The first round of model invocation
first_response = get_response(messages)
print(f"\nFirst round result: {first_response}")
assistant_output = first_response['output']['choices'][0]['message']
messages.append(assistant_output)
if 'tool_calls' not in assistant_output: # If the model determines that no tool needs to be called, print the assistant's reply directly without a second round of model invocation.
print(f"Final answer: {assistant_output['content']}")
return
# If the tool chosen by the model is get_current_weather
elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_weather':
tool_info = {"name": "get_current_weather", "role":"tool"}
location = json.loads(assistant_output['tool_calls'][0]['function']['arguments'])['location']
tool_info['content'] = get_current_weather(location)
# If the tool chosen by the model is get_current_time
elif assistant_output['tool_calls'][0]['function']['name'] == 'get_current_time':
tool_info = {"name": "get_current_time", "role":"tool"}
tool_info['content'] = get_current_time()
print(f"Tool output information: {tool_info['content']}")
messages.append(tool_info)
# The second round of model invocation, summarizing the tool's output
second_response = get_response(messages)
print(f"Second round result: {second_response}")
print(f"Final answer: {second_response['output']['choices'][0]['message']['content']}")
if __name__ == '__main__':
call_with_messages()
Java
Sample code
import java.io.BufferedReader;
import java.io.DataOutputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import org.json.JSONArray;
import org.json.JSONObject;
public class Main {
private static final String userAGENT = "Java-HttpURLConnection/1.0";
public static void main(String[] args) throws Exception {
// User input question
Scanner scanner = new Scanner(System.in);
System.out.println("Please enter: ");
String userInput = scanner.nextLine();
// Initialize messages
JSONArray messages = new JSONArray();
// Define the system message
JSONObject systemMessage = new JSONObject();
systemMessage.put("role","system");
systemMessage.put("content","You are a helpful assistant.");
// Construct the user message based on user input
JSONObject userMessage = new JSONObject();
userMessage.put("role","user");
userMessage.put("content",userInput);
// Add the system message and user message to messages in sequence
messages.put(systemMessage);
messages.put(userMessage);
// Perform the first round of model invocation and print the result
JSONObject responseJson = getResponse(messages);
System.out.println("First round result: "+responseJson);
// Obtain assistant information assistant_message
JSONObject assistantMessage = responseJson.getJSONObject("output").getJSONArray("choices").getJSONObject(0).getJSONObject("message");
// Initialize tool information tool_message
JSONObject toolMessage = new JSONObject();
// If assistant_message does not have the tool_calls parameter, print the response information in assistant_message directly and return
if (!assistantMessage.has("tool_calls")) {
System.out.println("Final answer: "+assistantMessage.get("content"));
return;
}
// If assistant_message has the tool_calls parameter, it means the model determines that a tool needs to be called
else {
// Add assistant_message to messages
messages.put(assistantMessage);
// If the model determines that the get_current_weather function needs to be called
if (assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("name").equals("get_current_weather")) {
// Obtain the arguments information and extract the location parameter
JSONObject argumentsJson = new JSONObject(assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("arguments"));
String location = argumentsJson.getString("location");
// Run the tool function, obtain the tool's output, and print
String toolOutput = getCurrentWeather(location);
System.out.println("Tool output information: "+toolOutput);
// Construct tool_message information
toolMessage.put("name","get_current_weather");
toolMessage.put("role","tool");
toolMessage.put("content",toolOutput);
}
// If the model determines that the get_current_time function needs to be called
if (assistantMessage.getJSONArray("tool_calls").getJSONObject(0).getJSONObject("function").getString("name").equals("get_current_time")) {
// Run the tool function, obtain the tool's output, and print
String toolOutput = getCurrentTime();
System.out.println("Tool output information: "+toolOutput);
// Construct tool_message information
toolMessage.put("name","get_current_time");
toolMessage.put("role","tool");
toolMessage.put("content",toolOutput);
}
}
// Add tool_message to messages
messages.put(toolMessage);
// Perform the second round of model invocation and print the result
JSONObject secondResponse = getResponse(messages);
System.out.println("Second round result: "+secondResponse);
System.out.println("Final answer: "+secondResponse.getJSONObject("output").getJSONArray("choices").getJSONObject(0).getJSONObject("message").getString("content"));
}
// Define the function to obtain the weather
public static String getCurrentWeather(String location) {
return location+" is sunny today";
}
// Define the function to obtain the current time
public static String getCurrentTime() {
LocalDateTime now = LocalDateTime.now();
DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
String currentTime = "Current time: " + now.format(formatter) + ".";
return currentTime;
}
// Encapsulate the model response function, input: messages, output: JSON formatted HTTP response
public static JSONObject getResponse(JSONArray messages) throws Exception{
// Initialize the tool library
JSONArray tools = new JSONArray();
// Define Tool 1: Obtain the current time
String jsonStringTime = "{\"type\": \"function\", \"function\": {\"name\": \"get_current_time\", \"description\": \"Useful when you want to know the current time.\", \"parameters\": {}}}";
JSONObject getCurrentTimeJson = new JSONObject(jsonStringTime);
// Define Tool 2: Obtain the weather of a specified area
String jsonStringWeather = "{\"type\": \"function\", \"function\": {\"name\": \"get_current_weather\", \"description\": \"Useful when you want to query the weather of a specified city.\", \"parameters\": {\"type\": \"object\", \"properties\": {\"location\": {\"type\": \"string\", \"description\": \"City or district, such as Singapore, Hangzhou, Yuhang District, etc.\"}}, \"required\": [\"location\"]}}}";
JSONObject getCurrentWeatherJson = new JSONObject(jsonStringWeather);
// Add the two tools to the tool library
tools.put(getCurrentTimeJson);
tools.put(getCurrentWeatherJson);
String toolsString = tools.toString();
// API call URL
String urlStr = "https://dashscope-intl.aliyuncs.com/api/v1/services/aigc/text-generation/generation";
// Obtain DASHSCOPE_API_KEY through environment variables
String apiKey = System.getenv("DASHSCOPE_API_KEY");
URL url = new URL(urlStr);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
connection.setRequestMethod("POST");
// Define request header information
connection.setRequestProperty("Content-Type", "application/json");
connection.setRequestProperty("Authorization", "Bearer " + apiKey);
connection.setDoOutput(true);
// Define request body information
String jsonInputString = String.format("{\"model\": \"qwen-max\", \"input\": {\"messages\":%s}, \"parameters\": {\"result_format\": \"message\",\"tools\":%s}}",messages.toString(),toolsString);
// Obtain the HTTP response
try (DataOutputStream wr = new DataOutputStream(connection.getOutputStream())) {
wr.write(jsonInputString.getBytes(StandardCharsets.UTF_8));
wr.flush();
}
StringBuilder response = new StringBuilder();
try (BufferedReader in = new BufferedReader(
new InputStreamReader(connection.getInputStream(), StandardCharsets.UTF_8))) {
String inputLine;
while ((inputLine = in.readLine()) != null) {
response.append(inputLine);
}
}
connection.disconnect();
// Return the JSON formatted response
return new JSONObject(response.toString());
}
}
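The sample above inspects only `tool_calls[0]`. Some models can return several tool calls in one response (parallel function calling), so it is safer to iterate over the whole `tool_calls` array and append one tool message per call. The sketch below illustrates that loop with plain `java.util` Maps standing in for the org.json objects, and a hypothetical `dispatchTool` helper in place of `getCurrentWeather`/`getCurrentTime`; parsing of the JSON `arguments` string is elided, so `arguments` here stands in for the already-extracted location value.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: dispatch every entry in tool_calls, not just index 0.
// Plain Maps replace the org.json objects from the full example;
// dispatchTool is a hypothetical stand-in for the real tool functions.
public class ToolDispatchSketch {
    static String dispatchTool(String name, String arguments) {
        switch (name) {
            case "get_current_time":
                return "Current time: 2025-01-01 00:00:00.";
            case "get_current_weather":
                // arguments stands in for the already-parsed location value
                return arguments + " is sunny today";
            default:
                return "Unknown tool: " + name;
        }
    }

    // Build one tool message per tool call, preserving call order
    static List<Map<String, String>> handleToolCalls(List<Map<String, String>> toolCalls) {
        List<Map<String, String>> toolMessages = new ArrayList<>();
        for (Map<String, String> call : toolCalls) {
            Map<String, String> toolMessage = new HashMap<>();
            toolMessage.put("role", "tool");
            toolMessage.put("name", call.get("name"));
            toolMessage.put("content", dispatchTool(call.get("name"), call.get("arguments")));
            toolMessages.add(toolMessage);
        }
        return toolMessages;
    }

    public static void main(String[] args) {
        Map<String, String> call = new HashMap<>();
        call.put("name", "get_current_weather");
        call.put("arguments", "Hangzhou");
        List<Map<String, String>> out = handleToolCalls(List.of(call));
        System.out.println(out.get(0).get("content")); // prints "Hangzhou is sunny today"
    }
}
```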
Sample response
Enter "How is the weather in Hangzhou?" and the program prints the tool output and the final answer.
Below are the model's return details from the first round of model invocation. For the input "How is the weather in Hangzhou?", the model returns the tool_calls parameter. For the input "Hello", the model determines that no tool invocation is necessary and does not return the tool_calls parameter.
Input: How is the weather in Hangzhou?
{
    "output": {
        "choices": [
            {
                "finish_reason": "tool_calls",
                "message": {
                    "role": "assistant",
                    "tool_calls": [
                        {
                            "function": {
                                "name": "get_current_weather",
                                "arguments": "{\"location\": \"Hangzhou\"}"
                            },
                            "index": 0,
                            "id": "call_240d6341de4c484384849d",
                            "type": "function"
                        }
                    ],
                    "content": ""
                }
            }
        ]
    },
    "usage": {
        "total_tokens": 235,
        "output_tokens": 18,
        "input_tokens": 217
    },
    "request_id": "235ed6a4-b6c0-9df0-aa0f-3c6dce89f3bd"
}
Input: Hello
{
    "output": {
        "choices": [
            {
                "finish_reason": "stop",
                "message": {
                    "role": "assistant",
                    "content": "Hello! How can I assist you?"
                }
            }
        ]
    },
    "usage": {
        "total_tokens": 223,
        "output_tokens": 7,
        "input_tokens": 216
    },
    "request_id": "42c42853-3caf-9815-96e8-9c950f4c26a0"
}
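As the two responses show, the assistant message contains tool_calls only when the model decides a tool is needed; otherwise (finish_reason "stop") the key is absent, and indexing into it blindly would fail. A minimal sketch of the branch, using a plain Map in place of the org.json objects from the full example:

```java
import java.util.Map;

// Sketch: check for the tool_calls key before indexing into it.
// A plain Map stands in for the org.json assistant message above.
public class ToolCallCheck {
    static boolean needsToolCall(Map<String, Object> assistantMessage) {
        // "Hello"-style replies (finish_reason "stop") carry no tool_calls key
        return assistantMessage.containsKey("tool_calls");
    }

    public static void main(String[] args) {
        Map<String, Object> withTool = Map.of("role", "assistant", "tool_calls", new Object());
        Map<String, Object> withoutTool = Map.of("role", "assistant", "content", "Hello!");
        System.out.println(needsToolCall(withTool));    // prints "true"
        System.out.println(needsToolCall(withoutTool)); // prints "false"
    }
}
```

With org.json, the equivalent check is `assistantMessage.has("tool_calls")` before calling `getJSONArray("tool_calls")`.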
Error code
If the call fails and an error message is returned, see Error messages.
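Note that the getResponse helper above reads `connection.getInputStream()` directly, which throws an IOException on non-2xx responses; the error body (containing the code and message fields) must then be read from `getErrorStream()` instead. The following is a hedged sketch of that pattern, not part of the official sample:

```java
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.nio.charset.StandardCharsets;

// Sketch: read the success body or the error body depending on the
// HTTP status code, so API error messages are not lost to an exception.
public class HttpErrorSketch {
    static boolean isSuccess(int statusCode) {
        return statusCode >= 200 && statusCode < 300;
    }

    static String readBody(HttpURLConnection connection) throws Exception {
        int code = connection.getResponseCode();
        // On non-2xx responses the body comes from the error stream
        InputStream stream = isSuccess(code)
                ? connection.getInputStream()
                : connection.getErrorStream();
        StringBuilder body = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(stream, StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
        }
        return body.toString();
    }

    public static void main(String[] args) {
        System.out.println(isSuccess(200)); // prints "true"
        System.out.println(isSuccess(401)); // prints "false"
    }
}
```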