Workflow applications streamline complex tasks by breaking them down into a series of steps. In Model Studio, you can create a workflow application that integrates large language models (LLMs), APIs, and other nodes to effectively reduce coding efforts. This topic describes how to create a workflow application.
Only users who created Model Studio applications before April 21, 2025, can access the Application tab and use all of its features, including applications (Agent applications, Workflow applications, Agent orchestration applications), components (Prompt, Plug-in), and data (Knowledge base, Application data), as well as the related APIs. This feature is in preview. Use it with caution in production environments.
Overview
Why use workflow applications
Workflow is crucial in modern software development and business management. It can simplify complex tasks into manageable steps to boost efficiency. Workflow applications in Model Studio clearly define task execution order, allocate responsibilities, and establish step dependencies. This enables automation and streamlines processes.
Workflow applications are useful in scenarios such as:
Travel planning: Input parameters like destination to automatically generate travel plans, including flights, accommodations, and attractions.
Report analysis: Use data processing, analysis, and visualization plug-ins to produce structured and formatted analysis reports for complex datasets.
Customer service: Automatically classify and deal with customer inquiries to enhance the speed and precision of customer service responses.
Content creation: Produce content such as articles and marketing materials based on themes and requirements.
Education and training: Design personalized learning plans that include progress tracking and assessments, facilitating student self-learning.
Medical consultation: Use various analysis tools to generate preliminary diagnoses or examination recommendations based on patient symptoms and help doctors make further decisions.
Supported models
For detailed information about the models, see Models.
For the API call rate limits of each model, see Throttling.
The models actually displayed in the application console prevail.
Use case
This example shows how to create a workflow application to identify whether a text message is related to telecom fraud.
Node description
Nodes are the core functional units of workflow applications, each responsible for specific tasks such as executing operations, triggering conditions, processing data, or determining process flow. They are like flexible building blocks that can be assembled to build efficient, intelligent automated processes.
Start/End
When to use
When designing a workflow, determine the structure and content of input/output parameters and fill them into the Start/End nodes.
How to use
Start node
Component
Description
Variable Name
The name of the input parameter. Subsequent nodes can call the variable by its name.
Type
The data type of the input parameter. Currently, only String, Boolean, and Number are supported. If you need to pass in an Array or Object, you must serialize it yourself before the workflow and parse it in a later node.
Description
The description of the input parameter, used to explain the purpose of the parameter.
Note: In addition to custom variables, the application also provides some built-in utility variables:
query
The user's query. Used to receive the prompt variable from API calls or text sent by users in the test window.
historyList (only in dialog workflows)
The conversation history list, used for the application to automatically maintain conversation history records and provide multi-round conversation experience.
imageList (only in dialog workflows)
The image list, used to store images uploaded by users in the test window. Currently only supports storing a single image.
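Because the Start node only accepts String, Boolean, and Number inputs (see Type above), an array or object must be serialized to a JSON string by the caller and parsed back inside the workflow, for example in a Script Conversion node. A minimal sketch; the variable name `plan_items` is a hypothetical example, not a built-in variable:

```python
import json

# The caller serializes the array into a String-typed Start-node variable.
items = ["flight", "hotel", "museum"]
start_input = {"plan_items": json.dumps(items)}  # "plan_items" is illustrative

# Later, e.g. inside a Script Conversion node, the string is parsed back.
parsed = json.loads(start_input["plan_items"])
assert parsed == ["flight", "hotel", "museum"]
```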
End node
Component
Description
Output Mode
The format of the output parameters. Valid values: Text Output and JSON Output.
Text Box (in text output mode)
You can freely arrange the paragraph format of the output content. Text input and variable references are supported. Suitable for unstructured content.
Variable Name/Variable Value (in JSON output mode)
Output content is arranged in JSON format. You can customize variable names; text input and variable references are supported. Suitable for structured content.
Response
Only effective when calling the application via API, used to decide whether to output the node content. To understand the purpose of this component, see Control node content using "Response".
Knowledge Base
Why use it
Imagine you have an enormous "knowledge repository" containing massive documents, FAQs, and product information. This node helps you extract the "most relevant" content from this massive information, preparing for subsequent AI Q&A.
Function and usage
Retrieves one or more knowledge bases to obtain relevant document chunks. This allows the LLM to read more context and provide more professional or accurate answers. Often used in conjunction with LLM nodes.
Commonly used when building dialog workflows or intelligent customer service systems.
Parameters
Parameter
Parameter Description
Input
content: You can directly input text or reference variables output from previous nodes.
imageList: Supports image search. You can directly input image links or reference variables output from previous nodes.
Select Knowledge Base
Specify which knowledge bases to search in. Multiple selections are supported.
Output
Name the result variable of this node, such as kbResult. Output structure example:
{ "rewriteQuery": "...", "chunkList": [ { "score": 0.36, "documentName": "Document Name", "title": "Document Title", "content": "Related Content Fragment", "imagesUrl": "Image URL" } ] }
The commonly used part here is chunkList, which contains document chunks and similarity scores.
A higher search result score indicates a stronger match. You can filter, sort, or combine chunks in subsequent nodes.
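Such filtering and sorting can be done in a later node. A minimal sketch, assuming the output structure shown above; the threshold value is an illustrative choice, not a product default:

```python
# "kbResult" mirrors the Knowledge Base node's output structure shown above.
kbResult = {
    "rewriteQuery": "telecom fraud examples",
    "chunkList": [
        {"score": 0.82, "documentName": "fraud_faq", "content": "Common fraud patterns..."},
        {"score": 0.36, "documentName": "misc", "content": "Unrelated content..."},
    ],
}

THRESHOLD = 0.5  # illustrative cutoff, tune for your data
relevant = sorted(
    (c for c in kbResult["chunkList"] if c["score"] >= THRESHOLD),
    key=lambda c: c["score"],
    reverse=True,
)
# Concatenate the surviving chunks as context for a subsequent LLM node.
context = "\n".join(c["content"] for c in relevant)
```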
Local vector databases are not supported. You need to upload your files to the knowledge base to use them.
LLM
Why use it
The LLM can understand language, generate text, analyze images, and participate in multi-round conversations. You can use it to write copy, summarize text, or analyze image content (if a VL model is used).
Features
It can process one input at a time or batch process large amounts of data.
You can use different LLMs based on performance, speed, or other characteristics.
Parameters
Parameter
Parameter Description
Mode
Single Mode: The node runs once and processes a single input.
Batch Mode: The node will run multiple times. Each time it runs, an item from the list is sequentially assigned to the batch variable. This process will continue until all items in the list have been processed or the maximum number of batches has been reached.
Batch Configuration:
Maximum Number of Batches: The upper limit for batch iterations. Valid range: 1 to 100. The default value for regular users: 100.
Note: The actual number of runs depends on the length of the shortest array in the user's input. If there are no input variables, it depends on the configured number of batches.
Number of Parallel Runs: The concurrency limit of batch processing. If you set this value to 1, all tasks are executed in series. Valid range: 1 to 10.
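The batch-count rule above can be sketched as follows. This is an illustration of the documented behavior (shortest input array, capped by Maximum Number of Batches), not product code:

```python
def effective_batches(input_arrays, max_batches=100):
    """Number of times a batch-mode LLM node runs, per the rule above."""
    if not input_arrays:
        # No input variables: the configured number of batches applies.
        return max_batches
    shortest = min(len(a) for a in input_arrays)
    return min(shortest, max_batches)

effective_batches([["a", "b", "c"], [1, 2]], max_batches=100)  # → 2
```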
Model Configuration
Choose a suitable LLM and configure model parameters. For a list of supported models, see Supported models.
When selecting a VL model:
Model Input Parameters: Use vlImageUrl to reference a variable or enter image URLs.
Image Source: Choose from Image Set or Video Frame.
Image Set: The model treats the uploaded images as independent of each other and matches each image against the query for understanding.
Video Frame: The model treats the uploaded images as frames from the same video and understands them sequentially as a whole. At least four video frames are required.
Parameter Configuration
Temperature: Adjusts content diversity. Higher values increase randomness and uniqueness, while lower values yield more predictable and consistent results.
Maximum Reply Length: The maximum text length generated by the model, not including the prompt. The exact limit varies by model.
System Prompt
Defines the role, task, and output format of a model. For example, "You are a math expert, specializing in solving math problems. Please output the math problem-solving process and results in the specified format."
User Prompt
Set up the prompt template and insert variables. The model will generate content based on this configuration.
Output
The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.
Note: To integrate the application into your business through API, see Application calling.
LLM node in dialog workflows
Different from the LLM node of task-based workflows, the LLM node of dialog workflows supports multi-round conversations.
Round Configuration: The application collects the variables from previous rounds specified in Context and passes them as input parameters to the LLM.
Context: The context required by the LLM. The default ${System Variables.historyList} represents the input and output of the application from previous rounds. Other parameters refer to the variables from previous rounds.
Text model example
Click Test and enter Chip Engineer as the query.
Sample output:
Visual model example
The model accepts either a single image or multiple images, and supports URL and Base64 input.
Note: Directly input a single image, such as https://****.com/****.jpg. To input multiple images, use a list, such as ["URL", "URL", "URL"].
Click Test and enter https://****.com/****.jpg as the query.
Sample output:
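The single-URL versus JSON-list convention above can be handled uniformly on the caller's side. A sketch; `parse_image_input` is a hypothetical helper, not part of the product SDK:

```python
import json

def parse_image_input(query: str) -> list:
    """Normalize the image query formats described above into a list of URLs."""
    query = query.strip()
    if query.startswith("["):
        return json.loads(query)  # e.g. '["URL", "URL", "URL"]'
    return [query]                # a single bare URL
```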
API
Definition
Call custom API services via POST, GET, PUT, PATCH, DELETE methods, and output the API call results.
Call Method
Purpose
POST
Used to submit data to the server to create a new resource.
GET
Used to retrieve a representation of a resource, without modifying data on the server.
PUT
Used to update the representation of a specified resource on the server or create a new resource on the server.
PATCH
Used to partially update a resource on the server.
DELETE
Used to delete a specified resource from the server.
Parameters
Parameter
Description
API Request URL
The URL of the API to call, together with the request method. Valid methods: POST, GET, PUT, PATCH, DELETE.
Header Settings
Configure Header parameters, setting KEY, VALUE.
Param Settings
Configure Param parameters, setting KEY, VALUE.
Body Settings
Valid values: none, form-data, raw, JSON.
Output
The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.
Note: To integrate the application into your business through API, see Application Call.
Node example
The following example uses the POST method to call an interface.
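The API node's URL, Header, and Body settings map onto an ordinary HTTP request. A minimal sketch of the equivalent request; the endpoint, token, and body fields are hypothetical placeholders:

```python
import json
import urllib.request

# Hypothetical endpoint and payload, standing in for the API node's settings.
url = "https://api.example.com/v1/classify"
headers = {
    "Content-Type": "application/json",      # Header Settings
    "Authorization": "Bearer YOUR_TOKEN",    # placeholder credential
}
body = json.dumps({"text": "message to classify"}).encode("utf-8")  # Body (JSON)

req = urllib.request.Request(url, data=body, headers=headers, method="POST")
# urllib.request.urlopen(req) would send the request; omitted here.
```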
Intent Classification
Definition
Classifies the input against the configured intent descriptions and selects the matching path (or paths) to execute.
Parameter
Parameter
Description
Input
Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.
Model Configuration
Select Qwen-Plus.
Intent Configuration
Configure different intent categories and corresponding descriptions. The model matches subsequent links based on the descriptions. For example: "Math problems" and "Weather Q&A".
Other Intents
If no other intents are matched, this link is matched.
Intent Mode
Single Selection: The LLM selects and outputs the most appropriate intent from Intent Configuration.
Multiple Selection: The LLM selects and outputs all matching intents from Intent Configuration.
Thinking Mode
Speed Mode: This mode does not output the thinking process to improve its speed. Suitable for simple scenarios.
Effect Mode: This mode thinks step by step to output more accurate answers.
Advanced Configurations
You can provide additional prompts to the model as advanced configurations, where you can input more conditions or examples to make the model's classification more in line with your requirements.
In this example, the advanced configuration guides the model to classify "query delivery time" as the "Order Inquiry" intent by providing specific classification examples, while also limiting the classification scope and excluding other unrelated issues.
Context
When Context is enabled, the system automatically records the conversation history in the Message format. When the model is called, the context is passed in and the model generates output based on it.
This configuration item is only available in the intent classification node of dialog workflows.
Note: When Context is enabled, the variable passed to this node must be of the List type.
Output
The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.
Note: This node supports context in dialog workflows.
Running this node consumes tokens. The consumption amount is displayed.
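The Context option above expects the history as a List in the Message format. A minimal sketch of such a list; the exact field names follow the common role/content convention and are an assumption here:

```python
# A Message-format conversation history of the kind the Context option
# expects (per the note above, the variable must be of the List type).
historyList = [
    {"role": "user", "content": "I want to check my order."},
    {"role": "assistant", "content": "Sure, please provide the order ID."},
    {"role": "user", "content": "When will it be delivered?"},
]
assert isinstance(historyList, list)
```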
Text Conversion
Definition
Used for text content conversion and processing, such as extracting specific content or converting formats, and supports template mode.
Parameters
Parameter
Description
Output Mode
Choose from Text Output and JSON Output.
Input box
Specify a processing method in which the LLM converts the input into a specific format. You can reference the result of the predecessor nodes through variables.
Text Output: Enter / to insert variables.
JSON Output: Enter a variable name and select Reference or Input.
Node example
The following example shows a basic text conversion workflow: the user enters a keyword, and the Text Conversion node receives and processes it to produce an appropriate output.
Click Test and enter Mathematics as the query.
Sample output:
Script Conversion
Definition
Uses the specified code to convert the input into a specific format or form. The process includes parsing, converting, and formatting for consistency and readability.
Parameters
Parameters
Parameter Description
Input
Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.
Code
The code to convert the input to a specific format for subsequent nodes. In the code, you can reference variables from preceding nodes.
Output
The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.
Node example
The following example shows a basic script conversion workflow. The user first inputs two parameters, which are then passed to the Script Conversion node. Inside the node, the code processes these parameters and ultimately generates the required output.
Click Test and enter Beijing for City and February 10, 2022 for Date.
Sample output:
Conditional Judgment
Definition
Specify conditions for the Conditional Judgment node. The node selects a subsequent link based on the conditions. You can configure and/or conditions. If multiple conditions are met, the links are evaluated from top to bottom.
Parameters
Parameter
Description
Conditional Branch
Enter the conditional judgment statements.
Other
The fallback branch, which runs when no condition is met.
Node example
The following example shows a Conditional Judgment workflow. The user inputs two parameters, which are passed to the Conditional Judgment node. The node evaluates the conditions, and the output response is generated through different branches of the Text Conversion node.
Click Test and enter 12345 for secret and admin for admin.
Sample output:
Function Compute
Definition
Authorize and call custom services in Function Compute.
Parameters
Parameter
Description
Input
Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.
Region
Select one of the regions: Singapore, Kuala Lumpur, Jakarta.
Service Configuration
Select the service configuration.
Output
The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.
Plugin
Definition
Enhances workflow's functionality to perform more sophisticated tasks. Model Studio offers a series of official plug-ins, including Calculator and Image Generation. You can also develop custom plug-ins based on your requirements.
For more information, see Plugin overview.
Publish the application
Click Publish in the upper-right corner of the canvas. After you publish the application, it can be called by using the API or shared on a web page with RAM users under the same Alibaba Cloud account.
Use API
On the Publish Channel tab, click View API to learn how to use API to call the application.
Note: Replace YOUR_API_KEY with your actual API key.
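As a rough illustration, an HTTP call to a published application can be built as follows. The endpoint path is an assumption based on the pattern shown on the View API page; verify it against your own console output. YOUR_API_KEY and YOUR_APP_ID are placeholders:

```python
import json
import urllib.request

# Hypothetical endpoint; confirm the exact URL on your View API page.
url = "https://dashscope-intl.aliyuncs.com/api/v1/apps/YOUR_APP_ID/completion"
payload = json.dumps({"input": {"prompt": "Is this message telecom fraud?"}}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=payload,
    headers={
        "Authorization": "Bearer YOUR_API_KEY",  # replace with your actual API key
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req, timeout=300) would send it; the server-side
# API call timeout is 300 seconds.
```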
Summary of API call-related questions:
For call methods (HTTP/SDK), see Application calling API reference.
For detailed parameter information, see Application call API parameters.
For parameter passing issues, see Pass-through of application parameters.
For call error messages, see Error messages for solutions.
The application itself does not limit traffic. Throttling is imposed by the model you are using. See Model List.
The API call timeout is 300 seconds and cannot be modified.
Other call methods (dialog applications)
See Application sharing.
View workflow application versions
Delete and copy workflow applications
You can find the published application card in . Click to delete, copy, or rename applications.