
Chat App Message Service: Natural Language Generate

Last Updated: May 14, 2025

This topic describes how to configure the Natural Language Generate component. This component allows you to use large language models for multi-round conversations, knowledge retrieval, and content generation.

Component information

Important

Content generated by large AI models may contain errors. Carefully evaluate and verify it before use.

Icon

image

Name

Natural Language Generate

Preparations

Go to the canvas page of an existing flow or a new flow.

  • Go to the canvas page of an existing flow.

    Log on to the Chat App Message Service console. Choose Chat Flow > Flow Management. Click the name of the flow that you want to edit. The canvas page of the flow appears.

    image

  • Create a new flow and go to its canvas page. For more information, see Create a flow.

Procedure

  1. Click the Natural Language Generate icon on the canvas to view the configurations on the right.

    image

  2. Configure the component based on your needs. For more information, see Parameters.

  3. Click Save in the upper-right corner. In the message that appears, click Save.

    image

Parameters

You can set Implementation Type to Model or Application. Each implementation type has its own parameters, which are described in the following tables.

Implementation Type - Model

Parameter

Description

Protocol

The protocol of the model service. Valid value: OpenAI.

baseUrl

The endpoint for the model service. Example: https://api.openai.com/v1 or another OpenAI-compatible endpoint.

apiKey

The API key of the model service.

Model Name

The model name. Example: gpt-3.5-turbo or qwen-plus.

Initial Prompt

The initial prompt for the model session, used to guide its output. Example: You are a witty comedian. Use humorous language in the following Q&A.

Model Input

The input for the current round of the model conversation. You can reference a variable directly or embed multiple variables in the text. Example: {{incomingMessage}} or Please help me find information about {{topic}}.

Model Output Variable Name

The name of the variable that stores the model output of this round. The variable can be reused in subsequent steps of the flow, for example, as the content of a message reply.

Fallback Text

This content will be used as the output when the model service is unavailable. Example: Sorry, I am temporarily unable to answer your question.
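
For reference, the following Python sketch shows how a request built from these parameters could look when sent to an OpenAI-compatible endpoint. This is a minimal illustration, not the component's actual implementation: the placeholder substitution logic, the variable names, and the error handling are assumptions made for this sketch.

import re
import requests

# Values that you would configure on the component (assumed placeholders).
BASE_URL = "https://api.openai.com/v1"        # baseUrl
API_KEY = "sk-..."                            # apiKey
MODEL_NAME = "gpt-3.5-turbo"                  # Model Name
INITIAL_PROMPT = "You are a witty comedian. Use humorous language in the following Q&A."
MODEL_INPUT = "Please help me find information about {{topic}}."
FALLBACK_TEXT = "Sorry, I am temporarily unable to answer your question."

# Flow variables available at runtime (hypothetical values).
flow_variables = {"incomingMessage": "Tell me a joke", "topic": "penguins"}

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with the corresponding flow variable values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables.get(m.group(1), "")), template)

def generate() -> str:
    """Call the OpenAI-compatible chat completions endpoint; fall back on failure."""
    payload = {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system", "content": INITIAL_PROMPT},                      # Initial Prompt
            {"role": "user", "content": render(MODEL_INPUT, flow_variables)},   # Model Input
        ],
    }
    try:
        response = requests.post(
            f"{BASE_URL}/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json=payload,
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
    except Exception:
        return FALLBACK_TEXT  # Fallback Text when the model service is unavailable

# The returned text plays the role of the Model Output Variable in later flow steps.
model_output = generate()

In this sketch, the try/except block mirrors the Fallback Text behavior: if the model service cannot be reached, the fallback text is used as the round's output instead.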

Implementation Type - Application

Parameter

Description

Protocol

The protocol of the application service. Valid value: DashScope.

Note

For more information about applications, see Application building.

apiKey

The API key of the application service.

Note

For more information, see Obtain an API key.

workspaceId

The ID of the workspace in which the agent, workflow, or agent orchestration application resides. This parameter is required when you call an application in a sub-workspace and can be omitted when you call an application in the default workspace.

Note

For information about workspaces, see Authorize a sub-workspace to use models.

appId

The application ID.

Application Input

The input for the current round of the application conversation. You can reference a variable directly or embed multiple variables in the text. Example: {{incomingMessage}} or Please help me find information about {{topic}}.

Custom Pass-through Parameters

The custom parameters that are passed through to the application. Example: {"city": "Hangzhou"}.

Application Output Variable Name

The name of the variable that stores the application output of this round. The variable can be reused in subsequent steps of the flow, for example, as the content of a message reply.

Fallback Text

This content will be used as the output when the application service is unavailable. Example: Sorry, I am temporarily unable to answer your question.
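
For reference, the following Python sketch shows how an application call over the DashScope protocol could be assembled from these parameters. This is a minimal illustration under assumptions: the endpoint path, the X-DashScope-WorkSpace header, and the request and response fields reflect the public DashScope application API as understood here and should be verified against the DashScope documentation; the variable names are hypothetical.

import re
import requests

# Values that you would configure on the component (assumed placeholders).
API_KEY = "sk-..."                           # apiKey
WORKSPACE_ID = "ws_..."                      # workspaceId (omit for the default workspace)
APP_ID = "your-app-id"                       # appId
APPLICATION_INPUT = "Please help me find information about {{topic}}."
PASS_THROUGH_PARAMS = {"city": "Hangzhou"}   # Custom Pass-through Parameters
FALLBACK_TEXT = "Sorry, I am temporarily unable to answer your question."

# Flow variables available at runtime (hypothetical values).
flow_variables = {"incomingMessage": "What is the weather?", "topic": "the weather in Hangzhou"}

def render(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with the corresponding flow variable values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables.get(m.group(1), "")), template)

def call_application() -> str:
    """Call a DashScope application; fall back on failure. Endpoint and field names are assumptions."""
    headers = {"Authorization": f"Bearer {API_KEY}"}
    if WORKSPACE_ID:
        headers["X-DashScope-WorkSpace"] = WORKSPACE_ID   # needed only for sub-workspaces
    body = {
        "input": {
            "prompt": render(APPLICATION_INPUT, flow_variables),  # Application Input
            "biz_params": PASS_THROUGH_PARAMS,                    # custom pass-through parameters
        }
    }
    try:
        response = requests.post(
            f"https://dashscope.aliyuncs.com/api/v1/apps/{APP_ID}/completion",
            headers=headers,
            json=body,
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["output"]["text"]
    except Exception:
        return FALLBACK_TEXT  # Fallback Text when the application service is unavailable

# The returned text plays the role of the Application Output Variable in later flow steps.
application_output = call_application()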