
Chat App Message Service: Natural language generation

Last Updated: Jan 08, 2026

This topic describes how to configure the natural language generation component. This component enables you to use large language models (LLMs) for multi-turn conversations, knowledge retrieval, and content generation.

Component information

Important

Content generated by LLMs may contain errors. Review and verify the content carefully before use.

Component icon

(image: component icon)

Component name

Natural language generation

Prerequisites

Navigate to the flow orchestration canvas by opening an existing flow or creating a new one.

  • Open the orchestration canvas of an existing flow

    Go to Chat App Message Service console > Flow Editor > Flow Management. Click the name of the flow that you want to edit to open the orchestration canvas.


  • Create a new flow to open its orchestration canvas. For more information, see Create a flow.

Procedure

  1. On the canvas, click the Natural language generation component icon to view the component configuration area on the right.


  2. Configure the component as needed. For more information about the parameters, see Parameters.

  3. Click Save in the upper-right corner. In the message that appears, click Save.


Parameters

Click Implementation type and select Model or Application. The available parameters vary based on the implementation type, as described in the following sections.

Implementation type: Model

Parameter

Description

Protocol

When the implementation type is Model, only the OpenAI protocol is supported.

baseUrl

The network endpoint of the model service, such as "https://api.openai.com/v1" or another OpenAI-compatible endpoint.

apiKey

The API key for the model service.

Model name

The name of the model to use, such as "gpt-3.5-turbo" or "qwen-plus".

Initial prompt

The initial prompt for the model session. This prompt guides the model's output. Example: "You are a witty comedian. Use humorous language in the following Q&A pairs."

Model input

The input for the current turn of the model conversation. You can directly reference a variable or embed multiple variables in a text segment. Example: "{{incomingMessage}}" or "Please find information about {{topic}}."

Model output variable name

The name of the variable that stores the output of the current turn. This variable can be reused in subsequent steps of the flow and sent as a message reply.

Fallback text

The content to output if the model service is unavailable. Example: "Sorry, I cannot answer your question right now."
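The parameters above map directly onto a standard OpenAI-protocol chat completion request. The following sketch shows, under that assumption, how baseUrl, apiKey, Model name, Initial prompt, Model input (including `{{variable}}` embedding), and Fallback text fit together; the function names and the use of Python's standard library are illustrative, not part of the product.

```python
import json
import re
import urllib.request

def render_input(template, variables):
    """Substitute {{name}} placeholders in the model input,
    e.g. "Please find information about {{topic}}"."""
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

def call_model(base_url, api_key, model, initial_prompt, user_input,
               fallback_text="Sorry, I cannot answer your question right now."):
    """Send one conversation turn over the OpenAI protocol.

    Returns the model's reply text, or the fallback text if the
    model service is unreachable or returns an unexpected response.
    """
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": initial_prompt},  # Initial prompt
            {"role": "user", "content": user_input},        # Model input
        ],
    }
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = json.load(resp)
        # The reply would be stored under the configured output variable name.
        return body["choices"][0]["message"]["content"]
    except (OSError, KeyError, IndexError, json.JSONDecodeError):
        return fallback_text
```

For example, `render_input("Please find information about {{topic}}", {"topic": "Hangzhou"})` produces the resolved model input, and `call_model` falls back to the configured Fallback text when the endpoint cannot be reached.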

Implementation type: Application

Parameter

Description

Protocol

When the implementation type is Application, only the DashScope protocol is supported.

Note

For more information about applications, see Application development.

apiKey

The API key for the application service.

Note

For more information about API keys, see Obtain an API key.

workspaceId

The ID of the workspace where the agent, workflow, or agent orchestration application resides. This parameter is required when you call an application in a sub-workspace. It is not required when you call an application in the default workspace.

Note

For more information about workspaces, see Workspace permission management.

appId

The application ID.

Application input

The input for the current turn of the application conversation. You can directly reference a variable or embed multiple variables in a text segment. Example: "{{incomingMessage}}" or "Please find information about {{topic}}."

Custom pass-through parameters

Custom parameters to pass through. Example: {"city": "Hangzhou"}.

Application output variable name

The name of the variable that stores the output of the current turn. This variable can be reused in subsequent steps of the flow and sent as a message reply.

Fallback text

The content to output if the application service is unavailable. Example: "Sorry, I cannot answer your question right now."
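To make the Application parameters concrete, the sketch below assembles a request for a DashScope application call. This is a minimal sketch, assuming the DashScope application completion endpoint (`/api/v1/apps/{appId}/completion`), the `X-DashScope-WorkspaceId` header for workspaceId, and the `biz_params` field for custom pass-through parameters; these names are assumptions based on the DashScope HTTP API and should be verified against the current API reference.

```python
import json

# Assumed DashScope application endpoint; verify against the API reference.
DASHSCOPE_APP_URL = "https://dashscope.aliyuncs.com/api/v1/apps/{app_id}/completion"

def build_app_request(api_key, app_id, prompt,
                      workspace_id=None, biz_params=None):
    """Assemble the URL, headers, and JSON body for one application turn.

    Only builds the request; sending it over HTTP (and falling back to
    the configured fallback text on failure) is left to the caller.
    """
    headers = {"Authorization": f"Bearer {api_key}",
               "Content-Type": "application/json"}
    if workspace_id:
        # workspaceId: required only when calling an application
        # in a sub-workspace.
        headers["X-DashScope-WorkspaceId"] = workspace_id
    body = {"input": {"prompt": prompt}}  # Application input for this turn
    if biz_params:
        # Custom pass-through parameters, e.g. {"city": "Hangzhou"}.
        body["input"]["biz_params"] = biz_params
    return {"url": DASHSCOPE_APP_URL.format(app_id=app_id),
            "headers": headers,
            "body": json.dumps(body)}
```

For example, `build_app_request("sk-...", "your-app-id", "hello", workspace_id="ws-123", biz_params={"city": "Hangzhou"})` yields a request that targets the given appId, carries the workspace header, and passes the custom parameters through in the body.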