Cloud Config: Enable AI Guardrails against prompt attacks for Alibaba Cloud Model Studio

Last Updated: Nov 21, 2025

If AI Guardrails against prompt attacks is enabled for input prompts in Alibaba Cloud Model Studio (Bailian), the configuration is considered compliant.

Risk level

Default risk level: High.

You can change the risk level as needed.

Detection logic

If AI Guardrails against prompt attacks is enabled for input prompts in Alibaba Cloud Model Studio (Bailian), the evaluation result is compliant. Otherwise, the result is non-compliant.
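
The detection logic can be pictured as a single check over the guardrail configuration of an account. The sketch below is purely illustrative: the GuardrailConfig structure and the evaluate_prompt_attack_guardrail function are hypothetical names chosen for this example and are not part of the Cloud Config or Model Studio APIs; the actual evaluation is performed by Cloud Config itself.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the prompt-attack guardrail setting of a
# Model Studio (Bailian) account; the real check is done by Cloud Config.
@dataclass
class GuardrailConfig:
    input_prompt_attack_guard_enabled: bool

def evaluate_prompt_attack_guardrail(config: GuardrailConfig) -> str:
    """Return the compliance result the rule would report for one account."""
    # Compliant when AI Guardrails against prompt attacks is enabled for
    # input prompts; non-compliant otherwise.
    return "COMPLIANT" if config.input_prompt_attack_guard_enabled else "NON_COMPLIANT"

# Example: an account with the guardrail switched on is reported as compliant.
print(evaluate_prompt_attack_guardrail(GuardrailConfig(input_prompt_attack_guard_enabled=True)))
```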

Rule details

Rule name: Enable AI Guardrails against prompt attacks for Alibaba Cloud Model Studio

Rule identifier: bailian-query-guard-prompt-attack-enabled

Tag: []

Automatic remediation: Not supported

Rule trigger: Triggered every 24 hours

Supported resource types: [ACS::::Account]

Input parameters: None
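
For reference, the rule details above map onto a rule object roughly like the following. This is a hand-written sketch, not output from the Cloud Config API, and the field names are assumptions made for illustration only.

```python
# Illustrative sketch of the rule metadata as a plain dictionary.
# Field names are assumed and do not come from the Cloud Config API.
rule = {
    "RuleName": "Enable AI Guardrails against prompt attacks for Alibaba Cloud Model Studio",
    "RuleIdentifier": "bailian-query-guard-prompt-attack-enabled",
    "RiskLevel": "High",                      # default risk level; can be changed as needed
    "AutomaticRemediation": False,            # not supported
    "TriggerType": "Periodic",                # evaluated every 24 hours
    "SupportedResourceTypes": ["ACS::::Account"],
    "InputParameters": {},                    # none
}
```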