
Cloud Config: Enable the malicious URL AI guardrail for Model Studio outputs

Last Updated: Nov 21, 2025

This rule checks whether the malicious URL AI guardrail is enabled for Model Studio (Bailian) output content. If the guardrail is enabled, the configuration is considered compliant.

Threat level

Default risk level: High.

You can change the risk level as needed.

Detection logic

If the malicious URL AI guardrail is enabled for Model Studio (Bailian) output content, the evaluation result is compliant. Otherwise, the evaluation result is non-compliant.
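The detection logic reduces to a single boolean check per account. A minimal sketch of that check, assuming a hypothetical `url_guard_enabled` field standing in for the actual Model Studio guardrail setting (the real setting name is not documented here):

```python
def evaluate_compliance(account_settings: dict) -> str:
    """Return the evaluation result for one account.

    COMPLIANT     -> malicious URL guardrail is enabled for outputs
    NON_COMPLIANT -> guardrail is disabled or not configured

    "url_guard_enabled" is an illustrative field name, not the
    actual Model Studio configuration key.
    """
    if account_settings.get("url_guard_enabled", False):
        return "COMPLIANT"
    return "NON_COMPLIANT"


# Example: an account with the guardrail enabled is compliant;
# an account with no guardrail setting at all is non-compliant.
print(evaluate_compliance({"url_guard_enabled": True}))
print(evaluate_compliance({}))
```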

Rule details

Rule name: Enable the malicious URL AI guardrail for Model Studio outputs

Rule identifier: bailian-response-guard-url-detect-enabled

Tags: []

Automatic remediation: Not supported

Rule trigger: Periodic, 24 hours

Supported resource types: [ACS::::Account]

Input parameters: None

Remediation guidance

None.