
Alibaba Cloud Model Studio: Workflow application

Last Updated: May 16, 2025

Workflow applications streamline complex tasks by breaking them down into a series of steps. In Model Studio, you can create a workflow application that integrates large language models (LLMs), APIs, and other nodes to effectively reduce coding efforts. This topic describes how to create a workflow application.

Important

Only users who created Model Studio applications before April 21, 2025 can access the Application tab and use all of its features, including applications (agent applications, workflow applications, and agent orchestration applications), components (prompts and plug-ins), data (knowledge bases and application data), and the related APIs. This feature is in preview. Use it with caution in production environments.


Overview

Why use workflow applications

Workflows are crucial in modern software development and business management: they break complex tasks into manageable steps to boost efficiency. Workflow applications in Model Studio clearly define the task execution order, allocate responsibilities, and establish dependencies between steps, enabling automation and streamlined processes.

Workflow applications are useful in scenarios such as:

  • Travel planning: Input parameters like destination to automatically generate travel plans, including flights, accommodations, and attractions.

  • Report analysis: Use data processing, analysis, and visualization plug-ins to produce structured and formatted analysis reports for complex datasets.

  • Customer service: Automatically classify and handle customer inquiries to improve the speed and precision of customer service responses.

  • Content creation: Produce content such as articles and marketing materials based on themes and requirements.

  • Education and training: Design personalized learning plans that include progress tracking and assessments, facilitating student self-learning.

  • Medical consultation: Use various analysis tools to generate preliminary diagnoses or examination recommendations based on patient symptoms and help doctors make further decisions.

Supported models

For detailed information about the models, see Models.

For the API call rate limits of each model, see Throttling.

Note

The models actually displayed in the application shall prevail.

Use case

This example shows how to create a workflow application to identify whether a text message is related to telecom fraud.

  1. Go to My Applications. Click Create Application, choose Workflow Application and click Create Task-based Workflow.

  2. Configure the Start node: Delete the default city and date parameters. The Start node has a built-in default parameter query, so no other parameters are needed here.

  3. Add an LLM node to identify fraud information:

    Drag an LLM node from the left-side pane into the canvas. Connect it to the Start node and configure the parameters.

    Use the default settings for parameters not mentioned here.

    • Model Configuration: Qwen-Plus

    • System Prompt:

      Analyze and determine whether the given information is suspected of fraud. Provide a definite answer on whether there is a suspicion of fraud.
       Processing requirements: Carefully review the content of the information, focusing on keywords and typical fraud patterns, such as requests for urgent transfers, provision of personal information, and promises of unrealistic benefits. 
      Procedure: 
      1. Identify key elements in the information, including but not limited to the sender's identity, requests made, promised returns, and any urgency expressions. 
      2. Compare with known fraud case characteristics to check if there are similar tactics or language patterns in the information. 
      3. Evaluate the overall reasonableness of the information, considering whether the requests made are in line with conventional logic and processes. 
      4. If the information contains links or attachments, do not click or download them directly to avoid potential security risks, and remind users of the dangers of such content. 
      Output format: Clearly indicate whether the information exhibits characteristics of fraud and briefly explain the basis for judgment. If there is a suspicion of fraud, provide some suggestions or preventive measures to protect user safety.
    • User Prompt:

      Determine whether “${sys.query}” is suspected of fraud.

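At run time, Model Studio substitutes the ${sys.query} placeholder in the user prompt with the user's input. Conceptually, the substitution works like this sketch (the render_prompt helper is illustrative, not Model Studio's actual implementation):

```python
# Sketch of how a prompt template with ${sys.query} is filled in at run time.
# render_prompt is a hypothetical helper for illustration only.

def render_prompt(template: str, variables: dict) -> str:
    """Replace ${name} placeholders with their values."""
    for name, value in variables.items():
        template = template.replace("${" + name + "}", str(value))
    return template

prompt = render_prompt(
    'Determine whether "${sys.query}" is suspected of fraud.',
    {"sys.query": "You've won $1 million in the lottery. Please check."},
)
```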

  4. Connect the LLM node and the End node. Configure the variables of the End node.

    Use the default settings for parameters not mentioned here.

    • Input: Enter / to insert variable: large model.result


  5. Click Test in the upper right corner and enter Your package has been stored at the pickup station for several days. Please come and collect it at your earliest convenience.

    Click Execute. After the workflow is executed, the End node will display the Run Result.


  6. Click Test in the upper right corner again and enter You've won $1 million in the lottery. Please check.

    Click Execute. After the workflow is executed, the End node will display the Run Result.


  7. If the workflow application works well, click Publish in the upper right corner.

Node description

Nodes are the core functional units of workflow applications, each responsible for specific tasks such as executing operations, triggering conditions, processing data, or determining process flow. They are like flexible building blocks that can be assembled to build efficient, intelligent automated processes.

Start/End

  • When to use

    • When designing a workflow, determine the structure and content of input/output parameters and fill them into the Start/End nodes.

  • How to use

    • Start node

      Component

      Description

      Variable Name

      The name of the input parameter. Subsequent nodes can call the variable by its name.

      Type

The data type of the input parameter. Currently, only String, Boolean, and Number are supported. To pass in an Array or Object, parse it yourself before the workflow runs.

      Description

      The description of the input parameter, used to explain the purpose of the parameter.

      Note

      In addition to custom variables, the application also has some built-in utility variables:

      • query

        The user's query. Used to receive the prompt variable from API calls or text sent by users in the test window.

      • historyList (only in dialog workflows)

        The conversation history list, used for the application to automatically maintain conversation history records and provide multi-round conversation experience.

      • imageList (only in dialog workflows)

        The image list, used to store images uploaded by users in the test window. Currently only supports storing a single image.

    • End node

      Component

      Description

      Output Mode

      The format of the output parameters. Valid values: Text Output and JSON Output.

      Text Box (in text output mode)

      You can freely arrange the paragraph format of the output content, supporting text input or variable references, suitable for unstructured content.

      Variable Name/Variable Value (in JSON output mode)

      Output content must be arranged according to JSON format. You can customize variable names and support text input or variable references, suitable for structured content.

      Response

      Only effective when calling the application via API, used to decide whether to output the node content. To understand the purpose of this component, see Control node content using "Response".

Knowledge Base

  • Why use it

    Imagine you have an enormous "knowledge repository" containing massive documents, FAQs, and product information. This node helps you extract the "most relevant" content from this massive information, preparing for subsequent AI Q&A.

  • Function and usage

    • Retrieves one or more knowledge bases to obtain relevant document chunks. This allows the LLM to read more context and provide more professional or accurate answers. Often used in conjunction with LLM nodes.

    • Commonly used when building dialog workflows or intelligent customer service systems.

  • Parameters

    Parameter

    Parameter Description

    Input

    content: You can directly input text or reference variables output from previous nodes.

    imageList: Supports image search. You can directly input image links or reference variables output from previous nodes.

    Select Knowledge Base

    Specify which knowledge bases to search in (supports multiple selections).

    Output

    Name the result variable of this node, such as kbResult.

  • Output structure example:

    {
      "rewriteQuery": "...",
      "chunkList": [
        {
          "score": 0.36,
          "documentName": "Document Name",
          "title": "Document Title",
          "content": "Related Content Fragment",
          "imagesUrl": "Image URL"
        }
      ]
    }

    The commonly used part here is chunkList, which contains document chunks and similarity scores.

Note

A higher search result score indicates a stronger match. You can filter, sort, or combine chunks in subsequent nodes.

Local vector databases are not supported. You need to upload your files to the knowledge base to use them.
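Based on the output structure above, a downstream node (for example, a Script Conversion node) could filter and sort chunkList by score. This is a sketch; the field names come from the example output, and the 0.5 threshold is an arbitrary example value:

```python
import json

# Parse a knowledge base node output with the structure shown above.
kb_result = json.loads("""
{
  "rewriteQuery": "...",
  "chunkList": [
    {"score": 0.36, "documentName": "Doc A", "title": "A", "content": "fragment 1", "imagesUrl": ""},
    {"score": 0.82, "documentName": "Doc B", "title": "B", "content": "fragment 2", "imagesUrl": ""}
  ]
}
""")

# Keep only chunks above a similarity threshold, best match first.
relevant = sorted(
    (c for c in kb_result["chunkList"] if c["score"] >= 0.5),
    key=lambda c: c["score"],
    reverse=True,
)
```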

LLM

  • Why use it

    The LLM can understand language, generate text, analyze images, and participate in multi-round conversations. You can use it to write copy, summarize text, or analyze image content (if a VL model is used).

  • Features

    • It can process one input at a time or batch process large amounts of data.

    • You can use different LLMs based on performance, speed, or other characteristics.

  • Parameters

    Parameter

    Parameter Description

    Mode

Single Mode: The node runs once and processes a single input.

    Batch Mode: The node will run multiple times. Each time it runs, an item from the list is sequentially assigned to the batch variable. This process will continue until all items in the list have been processed or the maximum number of batches has been reached.

    Batch Configuration:

    • Maximum Number of Batches: The upper limit for batch iterations. Valid range: 1 to 100. The default value for regular users: 100.

      Note

      The actual number depends on the minimum length of the arrays in the user's input. If there are no input variables, it depends on the number of batches configured.

    • Number of Parallel Runs: The concurrency limit of batch processing. If you set this value to 1, all tasks are executed in series. Valid range: 1 to 10.
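The batch-count rule described above can be sketched as follows (a simplified model of the behavior, not Model Studio's actual implementation):

```python
# Sketch: the number of batch runs is the length of the shortest input array,
# capped by Maximum Number of Batches. With no input variables, the configured
# number of batches is used directly.

def batch_run_count(input_lists, max_batches=100):
    if not input_lists:
        return max_batches
    return min(min(len(lst) for lst in input_lists), max_batches)
```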

    Model Configuration

Choose a suitable LLM and configure the model parameters. For a list of supported models, see Supported models.

    When selecting a VL model:

    • Model Input Parameters: Use vlImageUrl to reference a variable or enter image URLs.

    • Image Source: Choose from Image Set or Video Frame.

      • Image Set: The model considers uploaded images independent from each other and will match the images and queries for understanding.

      • Video Frame: The model considers the uploaded images from the same video and understands the images sequentially as a whole. The number of video frames must be no less than four.

    Parameter Configuration

    Temperature: Adjusts content diversity. Higher values increase randomness and uniqueness, while lower values yield more predictable and consistent results.

Maximum Reply Length: The maximum length of the text generated by the model, not including the prompt. The exact maximum varies by model.

    System Prompt

    Defines the role, task, and output format of a model. For example, "You are a math expert, specializing in solving math problems. Please output the math problem-solving process and results in the specified format."

    User Prompt

    Set up the prompt template and insert variables. The model will generate content based on this configuration.

    Output

    The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.

    Note

    To integrate the application into your business through API, see Application calling.

  • LLM node in dialog workflows

    • Different from the LLM node of task-based workflows, the LLM node of dialog workflows supports multi-round conversations.

      Round Configuration: The application collects the variables from previous rounds specified in Context and passes them as input parameters to the LLM.

      image

    • Context: The context required by the LLM. The default ${System Variables.historyList} represents the input and output of the application from previous rounds. Other parameters refer to the variables from previous rounds.

      image

  • Text model example

    Click Test and enter Chip Engineer as the query to view the sample output.

  • Visual model example

The model accepts either a single image or multiple images, and supports input as URL or Base64.

    Note

    Directly input a single image, such as https://****.com/****.jpg.

    Input multiple images as a list, such as ["URL", "URL", "URL"].
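The two input shapes in the note above can be normalized with a small helper. This is a sketch; normalize_image_input is a hypothetical function, not part of Model Studio:

```python
# Sketch: accept either a single URL string or a list of URLs for vlImageUrl,
# as described in the note above, and always return a list.

def normalize_image_input(value):
    if isinstance(value, str):
        return [value]
    if isinstance(value, list):
        return list(value)
    raise TypeError("expected a URL string or a list of URLs")
```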

    Click Test and enter https://****.com/****.jpg as the query to view the sample output.

API

  • Definition

    Call custom API services via POST, GET, PUT, PATCH, DELETE methods, and output the API call results.

    Call Method

    Purpose

    POST

    Used to submit data to the server to create a new resource.

    GET

    Used to retrieve a representation of a resource, without modifying data on the server.

    PUT

    Used to update the representation of a specified resource on the server or create a new resource on the server.

    PATCH

    Used to partially update a resource on the server.

    DELETE

    Used to delete a specified resource from the server.

  • Parameters

    Parameter

    Description

    API Request URL

The URL of the API to call. Supported request methods: POST, GET, PUT, PATCH, DELETE.

    Header Settings

Configure the header parameters as KEY-VALUE pairs.

    Param Settings

Configure the query parameters as KEY-VALUE pairs.

    Body Settings

    Valid values: none, form-data, raw, JSON.

    Output

    The variable name for the output of the node. Subsequent nodes can identify and process the variable by its name.

    Note

To integrate the application into your business through API, see Application calling.

  • Node example

    The following example uses the POST method to call an API.

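Conceptually, a POST call made by the API node can be sketched like this (the endpoint and payload are placeholders; the node itself performs the equivalent work for you):

```python
import json
import urllib.request

# Sketch of what the API node does for a POST call: assemble headers and a
# JSON body, then send the request. The URL and payload are hypothetical.
req = urllib.request.Request(
    url="https://api.example.com/v1/resource",   # placeholder endpoint
    method="POST",
    headers={"Content-Type": "application/json"},
    data=json.dumps({"query": "hello"}).encode("utf-8"),
)
# urllib.request.urlopen(req) would perform the call; subsequent nodes would
# then reference this node's output variable.
```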

Intent Classification

  • Definition

    Intelligently classify and match based on intent descriptions, selecting one path to execute.

  • Parameter

    Parameter

    Description

    Input

    Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    Model Configuration

    Select Qwen-Plus.

    Intent Configuration

    Configure different intent categories and corresponding descriptions. The model matches subsequent links based on the descriptions. For example: "Math problems" and "Weather Q&A".

    Other Intents

If none of the configured intents is matched, this link is matched.

    Intent Mode

    • Single Selection: The LLM selects and outputs the most appropriate intent from Intent Configuration.

    • Multiple Selection: The LLM selects and outputs all matching intents from Intent Configuration.

    Thinking Mode

    • Speed Mode: This mode does not output the thinking process to improve its speed. Suitable for simple scenarios.

    • Effect Mode: This mode thinks step by step to output more accurate answers.

    Advanced Configurations

    You can provide additional prompts to the model as advanced configurations, where you can input more conditions or examples to make the model's classification more in line with your requirements.

    Example

    Suppose you are developing a customer service system for an e-commerce platform, and users may ask various questions about order inquiries, returns, and payments. To ensure accurate classification by the model, you can add relevant prompts and examples in the advanced configuration.

    Please classify the intent based on the following examples:
    Example 1: User input "I want to return the coat I just bought", classified as "Return".
    Example 2: User input "Please help me check the shipping status of the order", classified as "Order Inquiry".
    Conditions: Only process queries related to orders, ignore payment and technical issues.

    Effect:

    User input: "When can the book I ordered on your website last week be delivered to my home?"

    Classification result: "Order Inquiry"

    In this example, the advanced configuration guides the model to classify "query delivery time" as the "Order Inquiry" intent by providing specific classification examples, while also limiting the classification scope and excluding other unrelated issues.

    Context

    When Context is enabled, the system will automatically record historical conversation in the Message format. When calling the model, the context will be passed in, and the model will generate based on the context content.

    This configuration item is only available in the intent classification node of dialog workflows.
    Note

    When Context is enabled, the variable passed in this node must be of the List type.

    Output

    The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

    Note
    • This node supports context in dialog workflows.

    • Running this node will consume tokens. The consumption amount will be displayed.

Text Conversion

  • Definition

    Used for text content conversion and processing, such as extracting specific content or converting formats, and supports template mode.

  • Parameters

    Parameter

    Description

    Output Mode

    Choose from Text Output and JSON Output.

    Input box

    Specify a processing method in which the LLM converts the input into a specific format. You can reference the result of the predecessor nodes through variables.

    • Text Output: Enter / to insert variables.

    • JSON Output: Enter variable name and select Reference or Input.

  • Node example

    The following example shows a basic text conversion workflow: after the user enters a keyword, the Text Conversion node receives and processes it, producing an appropriate output.

    Click Test and enter Mathematics as the query to view the sample output.

Script Conversion

  • Definition

    Uses the specified code to convert the input into a specific format or form. The process includes parsing, converting, and formatting for consistency and readability.

  • Parameters

    Parameters

    Parameter Description

    Input

    Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    Code

    The code to convert the input to a specific format for subsequent nodes. In the code, you can reference variables from preceding nodes.

    Output

    The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

  • Node example

    The following example shows a basic script conversion workflow. The user first inputs two parameters, which are then passed to the Script Conversion node. Inside the node, the code processes these parameters and ultimately generates the required output.

    Click Test, enter Beijing for City and February 10, 2022 for Date, and view the sample output.
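The kind of code such a node might contain can be sketched as follows. The main entry point and parameter-passing convention are assumptions; check the node's code template for the actual interface:

```python
# Hypothetical Script Conversion node body for the example above: receive the
# two input parameters and produce a formatted output for subsequent nodes.

def main(params: dict) -> dict:
    city = params["City"]
    date = params["Date"]
    return {"result": f"Weather query for {city} on {date}"}
```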

Conditional Judgment

  • Definition

Specify conditions for the Conditional Judgment node; the node selects the subsequent link based on those conditions. You can combine conditions with AND/OR. If multiple conditions are met, the links are evaluated from top to bottom.

  • Parameters

    Parameter

    Description

    Conditional Branch

    Enter the conditional judgment statements.

    Other

The fallback link, executed when no other condition is met.

  • Node example

    The following example shows a Conditional Judgment workflow. The user inputs two parameters, which are passed to the Conditional Judgment node. The node evaluates the conditions, and the response is generated through different branches of the Text Conversion node.

    Click Test, enter 12345 for secret and admin for admin, and view the sample output.
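The top-to-bottom branch selection described above can be sketched as follows (a simplified model; real branches are configured in the canvas, and the condition expressions here are illustrative):

```python
# Sketch: evaluate branches top to bottom and take the first whose condition
# holds; the "Other" link is the fallback with no condition.

def select_branch(branches, inputs):
    """Return the name of the first branch whose condition holds."""
    for name, condition in branches:
        if condition(inputs):
            return name
    return "Other"

branches = [
    ("authorized", lambda v: v["secret"] == "12345" and v["user"] == "admin"),
    ("known_user", lambda v: v["user"] == "admin"),
]
```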

Function Compute

  • Definition

Authorize and call custom services in Function Compute.

  • Parameters

    Parameter

    Description

    Input

    Enter the variables to be processed in this node. You can reference variables of preceding nodes or the Start node, or enter the variable values.

    Region

    Select one of the regions: Singapore, Kuala Lumpur, Jakarta.

    Service Configuration

    Select the service configuration.

    Output

    The name of the variable processed by this node. Subsequent nodes can identify and process the variable by its name.

Plugin

  • Definition

Enhances the workflow's functionality to perform more sophisticated tasks. Model Studio offers a series of official plug-ins, including Calculator and Image Generation. You can also develop custom plug-ins based on your requirements.

    For more information, see Plugin overview.

Publish the application

Click Publish in the upper right corner of the canvas. After you publish the application, you can call it by using the API or share it on a web page with RAM users under the same Alibaba Cloud account.

Use API

On the Publish Channel tab, click View API to learn how to use API to call the application.

Note: Replace YOUR_API_KEY with your actual API key.



Note

The API call timeout is 300 seconds and cannot be modified.
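As a sketch only — the authoritative endpoint and payload appear on the View API page — a call to a published workflow application might be assembled like this. The URL path and body shape are assumptions based on the DashScope application API; APP_ID and API_KEY are placeholders:

```python
import json
import urllib.request

APP_ID = "YOUR_APP_ID"      # placeholder: your published application's ID
API_KEY = "YOUR_API_KEY"    # placeholder: replace with your actual API key

# Assumed payload shape: the workflow's built-in query variable receives prompt.
payload = {"input": {"prompt": "You've won $1 million in the lottery. Please check."}}

req = urllib.request.Request(
    url=f"https://dashscope-intl.aliyuncs.com/api/v1/apps/{APP_ID}/completion",
    method="POST",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    data=json.dumps(payload).encode("utf-8"),
)
# urllib.request.urlopen(req, timeout=300) would perform the call; the
# server-side timeout is 300 seconds and cannot be modified.
```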

Other call methods (dialog applications)

See Application sharing.

View workflow application versions

  1. To publish a new version, click Publish at the upper right corner of the page. Enter the version details (such as 1.0.0) and click OK.


  2. Click the history icon at the top of the page. In the Historical Version panel, you can view or apply different versions of the application by clicking Use This Version or Return to Current Canvas.

    You can also click Export This Version in DSL to export the DSL of a specific historical version.


  3. (Optional) You can view or search for nodes on the current canvas in the toolbar.


Delete and copy workflow applications

You can find the published application card in My Applications > Applications. Click More > Copy Application/Delete Application to delete, copy, or rename applications.


FAQ

Workflow application related

How to write workflow application results to a database?

Use a Script Conversion node to write content from the previous node to a database.

Why does parameter retrieval fail for asynchronous calls in workflow applications?

GET https://dashscope-intl.aliyuncs.com/api/v1/tasks/{task_id}

The task query endpoint does not support passing in variables. You must manually replace {task_id} in the URL with the task_id that you want to query.
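Based on the endpoint above, a task-result query can be sketched as follows (API_KEY and the task_id value are placeholders; substitute the real task_id yourself):

```python
import urllib.request

API_KEY = "YOUR_API_KEY"              # placeholder
task_id = "TASK_ID_FROM_RESPONSE"     # placeholder: the task_id returned by the async call

# The task_id cannot be passed as a variable; build the URL with it directly.
req = urllib.request.Request(
    url=f"https://dashscope-intl.aliyuncs.com/api/v1/tasks/{task_id}",
    method="GET",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
```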

How to upload files in Model Studio workflow applications?

You can add an API node in the workflow application to upload files.

How to upload images?

Use a VL model and pass in the image URL address through specified parameters.

Can asynchronous task APIs be used in workflow applications?

The workflow application timeout is 300 seconds, so it is not recommended to use asynchronous task APIs in the flow.

How should the frontend call the API and stream the output in Model Studio workflows?

Frontend calling is not currently supported.

Model Studio workflow cannot import standalone .yaml files

Importing standalone .yaml files is not supported. You must import a compressed package that contains the MD5 files. It is recommended to regenerate the MD5.

Can variable names in Model Studio workflows be in Chinese?

Variable names cannot contain Chinese characters.

Node related

Error when intent classification node with context enabled is running?

If the intent classification node has context enabled, the variable type you pass to this node needs to be List type.

Error when using API call node with streaming output?

The API node in workflows does not support streaming output, even though the HTTP API itself does. To stream results, use an agent orchestration application to call other agent applications.

How to handle slow response speed of conditional judgment nodes in Model Studio workflows?

  1. Check workflow configuration: Ensure that each node in the workflow is configured correctly, especially the conditional judgment node. Avoid unnecessary complex calculations or data processing, which may reduce response time.

  2. Optimize code logic: If custom scripts are involved in the condition judgment, try to optimize the script logic, reduce unnecessary loops or data processing to improve execution efficiency.

  3. Batch testing: Batch test the average response time of the current workflow to determine if there are performance bottlenecks under specific conditions.

How to output thinking processes in streaming for large model nodes?

Add a Text Conversion node after the LLM node, configure the reasoning_content variable, and turn on the result return switch so that the returned content is received by the End node.

Cannot customize output parameters for large model nodes?

  1. Use a script node to process the output: Add a script node after the large model node to process the output of the large model node, converting it to the format you need or adding additional output parameters.

  2. Configure batch nodes: If you use a large model node in a batch node, you can select the output of the large model node as the final output in the batch node configuration. Specific steps are as follows:

    • Add a large model node to the batch node.

    • In the batch node configuration, select the output of the large model node as the final output resultList.

For more detailed information, please refer to Application Parameter Passing.

Workflow API call node has no return result and parameter passing issues

Confirm that the API key and base URL are correct and that the input parameters are configured properly; adjust the field input type if needed, and use model observation to view model usage details.