Data Management: ChatWithDesensitizeSSE

Last Updated: Mar 31, 2026

This API provides an SSE interface for chat with data masking.

Operation description

This API provides chat functionality with DMS data masking capabilities.

Try it now

Try this API in OpenAPI Explorer without manually signing requests. Successful calls automatically generate SDK code that matches your parameters, which you can download for local use with built-in credential security.

RAM authorization

The table below describes the authorization required to call this API. You can define it in a Resource Access Management (RAM) policy. The table's columns are detailed below:

  • Action: The action that can be used in the Action element of a RAM permission policy statement to grant permission to perform the operation.

  • API: The API that you can call to perform the action.

  • Access level: The predefined level of access granted for each API. Valid values: create, list, get, update, and delete.

  • Resource type: The type of the resource that supports authorization to perform the action. It indicates if the action supports resource-level permission. The specified resource must be compatible with the action. Otherwise, the policy will be ineffective.

    • For APIs with resource-level permissions, required resource types are marked with an asterisk (*). Specify the corresponding Alibaba Cloud Resource Name (ARN) in the Resource element of the policy.

    • For APIs without resource-level permissions, it is shown as All Resources. Use an asterisk (*) in the Resource element of the policy.

  • Condition key: The condition keys defined by the service. The key allows for granular control, applying to either actions alone or actions associated with specific resources. In addition to service-specific condition keys, Alibaba Cloud provides a set of common condition keys applicable across all RAM-supported services.

  • Dependent action: The dependent actions required to run the action. To complete the action, the RAM user or the RAM role must have the permissions to perform all dependent actions.

| Action | Access level | Resource type | Condition key | Dependent action |
| --- | --- | --- | --- | --- |
| dms:ChatWithDesensitizeSSE | none | All Resources (`*`) | None | None |
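Because this action does not support resource-level permissions, the Resource element of the policy must be `*`. A minimal policy sketch granting this action (illustrative only):

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dms:ChatWithDesensitizeSSE",
      "Resource": "*"
    }
  ]
}
```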

Request syntax

POST /worknode/innerapi/services HTTP/1.1

Request parameters

| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| InstanceId | integer | Yes | Instance ID. The instance ID determines which data masking rules apply. You can obtain this value by calling the ListInstances or GetInstance operation. | 123*** |
| Messages | array | No | The context passed to the Large Language Model (LLM), ordered by conversation sequence. | [ { "content": "Hello", "role": "user" } ] |
| └ (item) | any | No | A single message in the conversation context. | { "content": "Hello", "role": "user" } |
| Model | string | No | Model name. Supported models include the Qwen series of plain-text Large Language Models (commercial and open-source versions). Multimodal models are not supported. | qwen-plus |
| Stop | array | No | List of stop words. | |
| └ (item) | string | No | A stop word. | \n |
| NeedDesensitization | boolean | No | Specifies whether data masking is required. Default value: false. | false |
| DesensitizationRule | string | No | Data masking category. This parameter cannot be empty if NeedDesensitization is true. | UserInfo |
| MaxTokens | integer | No | Maximum number of tokens in the model output. If the generated content exceeds this value, generation stops early. Use this parameter to control output length. | 256 |
| PresencePenalty | string | No | Controls repetition in generated text. Value range: [-2.0, 2.0]. Positive values reduce repetition; negative values increase it. | 0.0 |
| ResponseFormat | string | No | Format of the returned content. Valid values: text (plain-text response) and json_object (a standard JSON string). | text |
| Seed | integer | No | Random number seed. Ensures reproducible results for the same input and parameters. Value range: [0, 2^31-1]. | 1 |
| EnableThinking | boolean | No | Specifies whether to enable thinking mode when using a hybrid thinking model. | true |
| ThinkingBudget | integer | No | Maximum number of tokens for the thinking process. | 256 |
| Temperature | string | No | Sampling temperature that controls the diversity of generated text. A higher temperature produces more diverse text; a lower temperature produces more deterministic text. Value range: [0, 2). | 1 |
| TopLogprobs | integer | No | The number of most likely candidate tokens returned at each generation step. Value range: [0, 5]. | 1 |
| TopK | integer | No | The number of candidate tokens used for sampling during generation. A larger value produces more random output; a smaller value produces more deterministic output. If set to null or a value greater than 100, this parameter is disabled. | 10 |
| TopP | string | No | Probability threshold for nucleus sampling, which controls the diversity of generated text. A higher top_p produces more diverse text. Value range: (0, 1.0]. | 0.5 |
| XDashScopeDataInspection | string | No | Specifies whether to use the content moderation capabilities of the Qwen API to further check input and output content for violations. | {} |
| SearchOptions | object | No | Web search policy. | {} |
| └ (value) | string | No | Web search policy value. | {} |
| ModalitiesList | array | No | Output data modalities. Applies only to the Qwen-Omni model. | ["text","audio"] |
| └ (item) | string | No | An output data modality. | text |
| AudioJson | string | No | Tone and format of the output audio. Applies only to the Qwen-Omni model, and the modalities parameter must be ["text","audio"]. | {} |
| EnableCodeInterpreter | boolean | No | Specifies whether to enable the code interpreter feature. Takes effect only when model is qwen3-max-preview and enable_thinking is true. | false |
| Logprobs | boolean | No | Specifies whether to return the log probabilities of output tokens. | false |
| VlHighResolutionImages | boolean | No | Specifies whether to raise the pixel limit of input images to the value corresponding to 16384 tokens. | false |
| EnableSearch | boolean | No | Specifies whether to enable web search. | false |
| IncludeUsage | boolean | No | Specifies whether to include token usage information in the last chunk of the streaming response. | true |
| Stream | boolean | No | Specifies whether to use streaming output. | true |
| Input | string | No | Input for the vectorization (embedding) model. | test |
| Dimensions | integer | No | Output dimensions of the vectorization (embedding) model. | 256 |
| Parameters | string | No | Model configuration parameters. | {} |
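As a sketch of how these parameters fit together, the following Python snippet assembles a request body for a masked chat call. The field names mirror the table above; request signing and the actual HTTP POST to /worknode/innerapi/services are omitted and would normally be handled by an Alibaba Cloud SDK.

```python
import json

def build_chat_request(instance_id, messages,
                       model="qwen-plus",
                       need_desensitization=True,
                       desensitization_rule="UserInfo",
                       max_tokens=256,
                       stream=True,
                       include_usage=True):
    """Assemble a ChatWithDesensitizeSSE request body (sketch only).

    Signing and transport are intentionally left out; field names
    follow the request-parameter table.
    """
    body = {
        "InstanceId": instance_id,    # selects this instance's masking rules
        "Messages": messages,         # conversation context, oldest first
        "Model": model,               # Qwen plain-text model name
        "NeedDesensitization": need_desensitization,
        "MaxTokens": max_tokens,
        "Stream": stream,
        "IncludeUsage": include_usage,
    }
    # DesensitizationRule must not be empty when masking is enabled.
    if need_desensitization:
        body["DesensitizationRule"] = desensitization_rule
    return body

payload = build_chat_request(123, [{"content": "Hello", "role": "user"}])
print(json.dumps(payload, indent=2))
```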

Response elements

| Element | Type | Description | Example |
| --- | --- | --- | --- |
| (root) | object | The response body. | |
| RequestId | string | Request ID. | 283C461F-11D8-48AA-B695-DF092DA32AF3 |
| Data | string | Returned data. | true |
| ErrorCode | string | Error code. | UnknownError |
| ErrorMessage | string | Error message. | UnknownError |
| Success | boolean | Indicates whether the request was successful. Valid values: true (the request was successful) and false (the request failed). | true |

Examples

Success response

JSON format

{
  "RequestId": "283C461F-11D8-48AA-B695-DF092DA32AF3",
  "Data": "true",
  "ErrorCode": "UnknownError",
  "ErrorMessage": "UnknownError",
  "Success": true
}
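Because this operation streams its result over SSE, the raw response body arrives as `data:`-prefixed event lines separated by blank lines. Below is a minimal, generic SSE parser sketch; the payload contents in the sample string are illustrative, not taken from this API's documented stream format.

```python
def parse_sse(raw: str):
    """Split a raw SSE body into the data payload of each event.

    Each event is a block of lines terminated by a blank line; lines
    starting with "data:" carry the payload. Generic SSE handling,
    not specific to this API.
    """
    events, data_lines = [], []
    for line in raw.splitlines():
        if line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            events.append("\n".join(data_lines))  # blank line ends the event
            data_lines = []
    if data_lines:                                # flush a trailing event
        events.append("\n".join(data_lines))
    return events

raw = 'data: {"Data": "Hel"}\n\ndata: {"Data": "lo"}\n\n'
print(parse_sse(raw))
```

Multi-line `data:` fields within one event are joined with a newline, as the SSE format prescribes.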

Error codes

See Error Codes for a complete list.

Release notes

See Release Notes for a complete list.