
Data Management: ChatWithDesensitize

Last Updated: Jan 13, 2026


Operation description

Chat API with DMS Data Masking capabilities.

Debugging

You can run this operation directly in OpenAPI Explorer, which saves you the trouble of calculating signatures. After the call succeeds, OpenAPI Explorer automatically generates SDK code samples.
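The signature calculation that OpenAPI Explorer automates follows the standard Alibaba Cloud RPC signing scheme: parameters are percent-encoded, sorted, joined into a canonicalized query string, and signed with HMAC-SHA1. The sketch below illustrates that scheme in Python; the parameter values are placeholders, and production code should use an official SDK rather than this hand-rolled helper.

```python
import base64
import hashlib
import hmac
from urllib.parse import quote


def percent_encode(s: str) -> str:
    # RFC 3986 encoding as required by the RPC signature scheme:
    # space -> %20, '*' -> %2A, '~' left unencoded.
    return quote(str(s), safe="~")


def sign_rpc_request(params: dict, access_key_secret: str, http_method: str = "POST") -> str:
    # 1. Sort parameters by name and build the canonicalized query string.
    canonicalized = "&".join(
        f"{percent_encode(k)}={percent_encode(v)}" for k, v in sorted(params.items())
    )
    # 2. Build the string to sign: METHOD & %2F & <encoded canonicalized query>.
    string_to_sign = f"{http_method}&{percent_encode('/')}&{percent_encode(canonicalized)}"
    # 3. HMAC-SHA1 with the AccessKey secret plus a trailing '&', Base64-encoded.
    digest = hmac.new(
        (access_key_secret + "&").encode(), string_to_sign.encode(), hashlib.sha1
    ).digest()
    return base64.b64encode(digest).decode()


# Example: signing a ChatWithDesensitize call (all values are placeholders).
params = {
    "Action": "ChatWithDesensitize",
    "InstanceId": "123",
    "Model": "qwen-plus",
    "SignatureMethod": "HMAC-SHA1",
    "SignatureVersion": "1.0",
}
signature = sign_rpc_request(params, "testSecret")
```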

Authorization information

The following table shows the authorization information for this API. You can use this information in the Action element of a policy to grant a RAM user or RAM role the permissions to call this API operation. Description:

  • Operation: the value that you can use in the Action element to specify the operation on a resource.
  • Access level: the access level of the operation. Valid values: read, write, and list.
  • Resource type: the type of the resource on which you can authorize the RAM user or RAM role to perform the operation. Take note of the following items:
    • Required resource types are marked with an asterisk (*).
    • If the permissions cannot be granted at the resource level, All Resources is displayed in the Resource type column.
  • Condition key: the condition key that is defined by the cloud service.
  • Associated operation: other operations that the RAM user or RAM role must be granted permissions to perform before it can complete this operation.
Operation: dms:ChatWithDesensitize
Access level: none
Resource type: *All Resources
Condition key: none
Associated operation: none
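For illustration, the Action value above can be placed in a standard RAM policy statement to grant a RAM user permission to call this operation. The statement below is a sketch, not an official sample; adjust the Effect and Resource scoping to your own requirements.

```json
{
  "Version": "1",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "dms:ChatWithDesensitize",
      "Resource": "*"
    }
  ]
}
```

Because permissions for this operation cannot be granted at the resource level, Resource is set to `*` (All Resources).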

Request parameters

InstanceId (long; required)
The ID of the instance, used to specify the corresponding data masking rules. You can call the ListInstances or GetInstance operation to query the instance ID.
Example: 123***

Messages (array<object>; optional)
The conversation context passed to the model, arranged in chronological order. Each element is a message object in the standard format: { "content": "$message", "role": "$role:system,user,assistant" }

Model (string; optional)
The model name. Supported models: Qwen series text-only large language models (commercial and open source). Multimodal models are not supported.
Example: qwen-plus

Stop (array<string>; optional)
Stop sequences. Generation terminates immediately if the model generates any of the strings specified in the Stop parameter.
Example: \n

NeedDesensitization (boolean; optional)
Specifies whether to enable data masking. Default value: false.
Example: false

DesensitizationRule (string; optional)
The masking category. Required when NeedDesensitization is set to true.
Example: UserInfo

MaxTokens (integer; optional)
The maximum number of tokens that the model can generate. If the output exceeds this value, generation is truncated. Suitable for scenarios in which you need to control the output length.
Example: 256

PresencePenalty (number; optional)
Controls the degree of repetition in the generated text. Valid values: [-2.0, 2.0]. Positive values decrease repetition; negative values increase it.
Example: 0.0

ResponseFormat (string; optional)
The format of the returned content. Valid values: text (plain text response) and json_object (standardized JSON string).
Example: text

Seed (integer; optional)
The random seed, used to ensure reproducibility of results under the same input and parameters. Valid values: [0, 2^31−1].
Example: 1

EnableThinking (boolean; optional)
Specifies whether to enable thinking mode when you use hybrid thinking models.
Example: true

ThinkingBudget (integer; optional)
The maximum number of tokens allowed for the model's internal reasoning process.
Example: 256

Temperature (number; optional)
The sampling temperature, which controls the diversity of the generated text. Higher values produce more diverse text; lower values produce more deterministic text. Valid values: [0, 2).
Example: 1

TopLogprobs (integer; optional)
The number of most likely candidate tokens to return at each generation step. Valid values: [0, 5].
Example: 1

TopK (integer; optional)
The number of candidate tokens to consider during sampling. Higher values increase randomness; lower values make the output more deterministic. Set this parameter to null or a value greater than 100 to disable it.
Example: 10

TopP (number; optional)
The probability threshold for nucleus sampling, used to control the diversity of the generated text. Higher values produce more diverse text. Valid values: (0, 1.0].
Example: 0.5

XDashScopeDataInspection (string; optional)
Specifies whether to further identify non-compliant information in both input and output content, in addition to the built-in content safety capabilities of the Tongyi Qianwen API.
Example: {}

SearchOptions (object; optional)
The web search strategy, expressed as string key-value pairs.
Example: {}

ModalitiesList (array<string>; optional)
The output data modalities. Only applicable to the Qwen-Omni model.
Example: text

AudioJson (string; optional)
The output audio voice and format. Only applicable to the Qwen-Omni model, and only when the modalities parameter is set to ["text", "audio"].
Example: {}

EnableCodeInterpreter (boolean; optional)
Specifies whether to enable the code interpreter feature. Takes effect only when the model is qwen3-max-preview and enable_thinking is true.
Example: false

Logprobs (boolean; optional)
Specifies whether to return the log probabilities of the output tokens.
Example: false

VlHighResolutionImages (boolean; optional)
Specifies whether to increase the maximum pixel limit of input images to the equivalent of 16,384 tokens.
Example: false

EnableSearch (boolean; optional)
Specifies whether to enable web search.
Example: false
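The parameters above can be assembled as a plain map before being handed to an API client. The sketch below shows one way to do that in Python, including the dependency between NeedDesensitization and DesensitizationRule; the helper name `build_chat_params` and the client wiring are illustrative, not part of any SDK.

```python
def build_chat_params(instance_id, messages, need_desensitization=False,
                      desensitization_rule=None, model="qwen-plus", **extra):
    """Assemble a ChatWithDesensitize request-parameter dict (illustrative helper)."""
    params = {
        "InstanceId": instance_id,                  # required: selects the masking rules
        "Model": model,
        "Messages": messages,                       # chronological conversation context
        "NeedDesensitization": need_desensitization,
    }
    if need_desensitization:
        # DesensitizationRule is required when NeedDesensitization is true.
        if desensitization_rule is None:
            raise ValueError("DesensitizationRule is required when NeedDesensitization is true")
        params["DesensitizationRule"] = desensitization_rule
    params.update(extra)                            # optional tuning: MaxTokens, Temperature, TopP, ...
    return params


request = build_chat_params(
    123,
    [{"content": "Hello", "role": "user"}],
    need_desensitization=True,
    desensitization_rule="UserInfo",
    MaxTokens=256,
    Temperature=1,
)
```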

Response parameters

The response is an object with the following fields.

RequestId (string)
The ID of the request.
Example: 0C1CB646-1DE4-4AD0-B4A4-7D47DD52E931

ErrorCode (string)
The error code.
Example: UnknownError

ErrorMessage (string)
The error message.
Example: UnknownError

Success (boolean)
Indicates whether the request was successful. Valid values: true (the request was successful) and false (the request failed).
Example: true

Data (object)
The returned data, which contains the following fields.

Data.Created (string)
The Unix timestamp (in seconds) when the request was created.
Example: 1763710100

Data.Model (string)
The model used for this request.
Example: qwen-plus

Data.Choices (array<object>)
The candidate contents generated by the model. Each element contains the following fields.

Data.Choices[].FinishReason (string)
The reason why generation stopped. Valid values:
  • stop: The model reached a natural stop point or a specified stop sequence.
  • length: Generation ended because the maximum number of tokens was reached.
  • tool_calls: The model stopped because it needs to call a tool to proceed.
Example: stop

Data.Choices[].Message (object)
The message body output by the model.

Data.Choices[].Message.Content (string)
The content of the model's response.

Data.Choices[].Message.ReasoningContent (string)
The internal reasoning content of the model.

Data.Choices[].Message.Role (string)
The message role.
Example: system

Data.Choices[].Logprobs (object)
Token probability information of the model output.
Example: {}

Data.Usage (object)
The token consumption of this request.

Data.Usage.CompletionTokens (string)
The number of output tokens.
Example: 10

Data.Usage.PromptTokens (string)
The number of input tokens.
Example: 9

Data.Usage.TotalTokens (string)
The total number of tokens consumed.
Example: 19

Data.Usage.PromptTokensDetails (object)
Fine-grained classification of input tokens.
Example: {}

Data.Usage.CompletionTokensDetails (object)
Fine-grained classification of output tokens when the Qwen-VL model is used.
Example: {}

Data.Message (string)
The error message, returned when StatusCode is not 200.
Example: InvalidParameter

Data.StatusCode (string)
The status code. 200 indicates a normal call; other values indicate exceptions.
Example: 200

Data.Type (string)
The error type.
Example: invalid_request_error

Examples

Sample success responses

JSON format

{
  "RequestId": "0C1CB646-1DE4-4AD0-B4A4-7D47DD52E931",
  "ErrorCode": "UnknownError",
  "ErrorMessage": "UnknownError",
  "Success": true,
  "Data": {
    "Created": 1763710100,
    "Model": "qwen-plus",
    "Choices": [
      {
        "FinishReason": "stop",
        "Message": {
          "Content": "",
          "ReasoningContent": "",
          "Role": "system"
        },
        "Logprobs": {
          "key": {}
        }
      }
    ],
    "Usage": {
      "CompletionTokens": 10,
      "PromptTokens": 9,
      "TotalTokens": 19,
      "PromptTokensDetails": {
        "key": {}
      },
      "CompletionTokensDetails": {
        "key": {}
      }
    },
    "Message": "InvalidParameter",
    "StatusCode": 200,
    "Type": "invalid_request_error"
  }
}
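A caller typically checks the top-level Success flag and Data.StatusCode before reading the first choice. The sketch below parses a response shaped like the sample above; the helper name `extract_reply` and the sample values are illustrative.

```python
def extract_reply(response: dict):
    """Return (content, total_tokens) from a ChatWithDesensitize response,
    raising on API-level errors (illustrative helper)."""
    if not response.get("Success"):
        raise RuntimeError(f"{response.get('ErrorCode')}: {response.get('ErrorMessage')}")
    data = response["Data"]
    # StatusCode 200 means a normal call; compare as a string to tolerate
    # either numeric or string encodings.
    if str(data.get("StatusCode")) != "200":
        raise RuntimeError(f"{data.get('Type')}: {data.get('Message')}")
    choice = data["Choices"][0]
    return choice["Message"]["Content"], data["Usage"]["TotalTokens"]


# A hand-constructed sample response for demonstration.
sample = {
    "RequestId": "0C1CB646-1DE4-4AD0-B4A4-7D47DD52E931",
    "Success": True,
    "Data": {
        "Model": "qwen-plus",
        "Choices": [
            {
                "FinishReason": "stop",
                "Message": {"Content": "Hi there", "Role": "assistant"},
            }
        ],
        "Usage": {"CompletionTokens": 10, "PromptTokens": 9, "TotalTokens": 19},
        "StatusCode": 200,
    },
}
content, total = extract_reply(sample)
```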

Error codes

For a list of error codes, see Service error codes.

Change history

2025-11-26: The request parameters of the API have changed. The response structure of the API has changed.
2025-11-25: The operation was added.