
AI Guardrails: Use the online testing feature

Last Updated: Mar 31, 2026

Before integrating the Guardrails API, use the online testing feature to validate detection behavior against your content. Submit text, select a policy template, and inspect results — all from the console, without writing code.

Prerequisites

Before you begin, note that running tests incurs charges. For details, see Billing overview.

Run a test

  1. Log on to the Guardrails console.

  2. In the input box, enter the text you want to test.

  3. Below the input box, select a detection policy template:

    Template                           Policy ID
    AI input content moderation        query_security_check_intl
    AI-generated content moderation    response_security_check_intl
  4. Click Run test.

  5. Review the Test Results panel.

Alternatively, select a sample template to test with preset content. Sample templates cover content compliance, sensitive content, and prompt attacks. After selecting a template, click Run test to view the detection results.
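Once console tests behave as expected, you would call the API with the same policy template you selected above. The request structure below is a hypothetical sketch: the field names `Service` and `Content` are assumptions, not the documented schema, so check the Guardrails API reference before integrating.

```python
# Hypothetical sketch of selecting a policy template in code.
# The payload field names ("Service", "Content") are assumptions;
# consult the Guardrails API reference for the real request format.

# Policy IDs from the console's detection policy template table.
POLICY_TEMPLATES = {
    "input": "query_security_check_intl",      # AI input content moderation
    "output": "response_security_check_intl",  # AI-generated content moderation
}

def build_moderation_request(text: str, direction: str) -> dict:
    """Build a moderation request body for the given text.

    direction: "input" for user prompts, "output" for model responses.
    """
    if direction not in POLICY_TEMPLATES:
        raise ValueError(f"direction must be one of {sorted(POLICY_TEMPLATES)}")
    return {
        "Service": POLICY_TEMPLATES[direction],  # assumed field name
        "Content": text,                         # assumed field name
    }

# Example: prepare a check for a user prompt before it reaches the model.
request = build_moderation_request("How do I reset my password?", "input")
```

The same helper covers both directions: pass `"output"` to moderate model responses with the `response_security_check_intl` template instead.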

Enable additional detection features

If Test Results shows Not enabled next to Sensitive content detection or Prompt injection detection, those features are inactive. Enable them directly from the results panel.

  1. In Test Results, click Proceed to enable next to the feature marked Not enabled.

  2. On the check item configuration page, select the features to enable.

  3. Confirm the activation. When you enable Sensitive content detection or Prompt injection detection, a dialog box notifies you that the feature is billed separately. For details, see Billing overview.
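The status check in step 1 can also be done programmatically once results come back from the API. The result structure below is a hypothetical assumption, not the actual response schema; it only illustrates scanning for features reported as Not enabled.

```python
# Hypothetical sketch: mirror the console's "Not enabled" check in code.
# The shape of test_result is an assumption, not the real API response schema.

def features_to_enable(test_result: dict) -> list:
    """Return the names of detection features reported as not enabled."""
    return [
        name
        for name, status in test_result.get("features", {}).items()
        if status == "Not enabled"
    ]

# Example result with one inactive feature (feature names from the console UI).
result = {
    "features": {
        "Sensitive content detection": "Not enabled",
        "Prompt injection detection": "Enabled",
    }
}
```

A wrapper like this could log or alert when a policy template runs with some of its detections inactive, so gaps in coverage are caught before production traffic flows through.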