
Cloud Config: Enable AI Guardrails for prompt attacks on Alibaba Cloud Model Studio outputs

Last Updated: Nov 21, 2025

A resource is considered compliant if AI Guardrails is enabled to detect prompt attacks in Alibaba Cloud Model Studio outputs.

Threat level

Default threat level: High.

You can change the threat level as needed.

Detection logic

A resource is considered compliant if AI Guardrails is enabled to detect prompt attacks in Alibaba Cloud Model Studio outputs. Otherwise, the resource is considered incompliant.
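The detection logic above amounts to a single boolean predicate over a workspace's guardrail settings. The sketch below illustrates that predicate only; the payload shape and field names (`output_checks`, `prompt_attack_enabled`) are hypothetical and do not correspond to the actual Model Studio or Cloud Config API:

```python
def is_compliant(guardrail_config: dict) -> bool:
    """Hypothetical compliance predicate for this rule.

    Assumes guardrail_config describes a workspace's AI Guardrails
    settings; all field names here are illustrative placeholders.
    """
    # The rule passes only when prompt-attack detection is enabled
    # for model outputs (responses), not merely for inputs.
    output_checks = guardrail_config.get("output_checks", {})
    return bool(output_checks.get("prompt_attack_enabled", False))


# Prompt-attack detection enabled on outputs: compliant
print(is_compliant({"output_checks": {"prompt_attack_enabled": True}}))  # True

# AI Guardrails not configured at all: incompliant
print(is_compliant({}))  # False
```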

Rule details

Rule name: Enable AI Guardrails for prompt attacks on Alibaba Cloud Model Studio outputs

Rule identifier: bailian-response-guard-prompt-attack-enabled

Tag: []

Automatic remediation: Not supported

Rule trigger: 24-hour period

Supported resource types: [ACS::::Account]

Input parameters: None

Remediation guide

None.