Web Application Firewall:Asset management

Last Updated:Oct 22, 2025

The AI application protection feature of Web Application Firewall (WAF) uses assets to locate and inspect user inputs (requests) and model outputs (responses) in AI applications. After you create an asset, you can enable prompt attack prevention or content moderation for it.

Add an asset

Before you begin, ensure that your website or domain name has been added to WAF as a protected object. If you have not added your services to WAF, see Onboarding overview.

  1. Go to the Asset Management page. From the top menu bar, select the resource group and region for your WAF instance (Chinese Mainland or Outside Chinese Mainland), and then click Add Asset.

  2. Specify the Match Conditions. Match Conditions define the AI API endpoints you want to protect and help WAF accurately identify the target traffic.

    • Domain Name: Enter a domain name or IP address. For example, if the domain name of your protected object is domain.com, enter domain.com.

    • URL Path: Enter the URL path of the API. For example, if you enter /chat/message here and domain.com for Domain Name, the protected target is domain.com/chat/message.

    • HTTP Request Method: Select one or more of POST, GET, and PUT.

  3. In the Prompt Position and Response Content sections, specify where WAF inspects content in the HTTP message body. Enter JSONPath expressions that point to the fields to inspect.

    Important

    This configuration determines which content the protection modules inspect.

    • To inspect both user requests and model responses, configure both Prompt Position and Response Content.

    • To use the Redact action, configure Response Content.

    Under Response Content, select the option that matches your service's characteristics.

    • Non-streaming response: The server sends the complete JSON response body in a single package after it finishes processing. The client must wait for all data to be generated before receiving the result.

    • Streaming response: The server continuously pushes chunks of response data. The client can receive and process partial results in real time until the connection is closed. Only the server-sent events (SSE) protocol is supported.

      • Deep thinking: Before generating the final answer, the model explicitly outputs its reasoning process. This output displays the model's reasoning in structured steps, improving the interpretability and accuracy of the results.

    Note

    If you are unsure what to enter in the content position fields, refer to the following examples or click Test next to the input box to validate your configuration.

    • Request prompt position examples

      Example 1

      In the following HTTP request body, the JSONPath for the prompt position is: $.messages[0].content.parts[0].

      {
        "action": "next",
        "messages": [{
          "id": "c86043d3-6657-4a9e-85df-a22c98666367",
          "create_time": 1742977262.085,
          "content": {
            "content_type": "text",
            "parts": ["What is a large language model prompt?"]
          }
        }]
      }

      Example 2

      In the following HTTP request body, the JSONPath for the prompt position is: $.messages[1].content.

      {
        "model": "gpt-3.5-turbo",
        "messages": [
          {
            "role": "system",
            "content": "You are an assistant."
          },
          {
            "role": "user",
            "content": "Help me write a thank-you letter."
          }
        ],
        "temperature": 0.7
      }

      Example 3

      In the following HTTP request body, the JSONPath for the user prompt in the last turn is: $.messages[-1].content.

      {
        "messages": [
          {
            "role": "user",
            "content": "Explain neural networks."
          },
          {
            "role": "assistant",
            "content": "A neural network is a computational model that simulates the structure of the human brain..."
          },
          {
            "role": "user",
            "content": "What about a Transformer?"
          }
        ]
      }
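      The request examples above all reduce to dot access and integer indexes (including the negative index in $.messages[-1].content). The following is a minimal sketch of how such a path resolves against a JSON body; it supports only this subset of JSONPath and is not WAF's actual evaluation engine.

      ```python
      import json

      def extract(body: str, path: str):
          """Resolve a simple JSONPath (dot access and integer indexes only)
          against a JSON request body. Illustrative sketch, not WAF internals."""
          node = json.loads(body)
          for part in path.lstrip("$.").split("."):
              # Split e.g. "messages[-1]" into the key and any bracketed indexes.
              key, _, rest = part.partition("[")
              if key:
                  node = node[key]
              while rest:
                  idx, _, rest = rest.partition("]")
                  node = node[int(idx)]
                  rest = rest.lstrip("[")
          return node

      body = json.dumps({
          "messages": [
              {"role": "user", "content": "Explain neural networks."},
              {"role": "assistant", "content": "A neural network is ..."},
              {"role": "user", "content": "What about a Transformer?"},
          ]
      })
      print(extract(body, "$.messages[-1].content"))  # → What about a Transformer?
      ```

      The same function resolves the other example paths, such as $.messages[0].content.parts[0], against their respective bodies.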
    • Response content position examples

      Non-streaming response

      In the following HTTP response body, the JSONPath for the response content position is: $.choices[0].message.content.

      {
        "choices": [
          {
            "message": {
              "role": "assistant",
              "content": "A large language model prompt is the input text that guides the model to generate a specific output."
            }
          }
        ]
      }

      Streaming response

      The following HTTP response body contains five content chunks followed by an end-of-message chunk. The JSONPath for the response content in each chunk is: $.answer.

        data: {"event": "message", "message_id": "5adxxx6290", "conversation_id": "457xxx55f2", "answer": "Very", "created_at": 1679586595}
        data: {"event": "message", "message_id": "5adxxx6290", "conversation_id": "457xxx55f2", "answer": "glad", "created_at": 1679586595}
        data: {"event": "message", "message_id": "5adxxx6290", "conversation_id": "457xxx55f2", "answer": "to", "created_at": 1679586595}
        data: {"event": "message", "message_id": "5adxxx6290", "conversation_id": "457xxx55f2", "answer": "see", "created_at": 1679586595}
        data: {"event": "message", "message_id": "5adxxx6290", "conversation_id": "457xxx55f2", "answer": "you", "created_at": 1679586595}
        data: {"event": "message_end", "id": "5adxxx6290", "conversation_id": "457xxx55f2", "metadata": {} }
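      In a streaming response, each SSE "data:" line carries one JSON chunk, and the inspected text lives at $.answer in every message chunk. The following sketch (abridged chunk payloads, not WAF internals) shows how the per-chunk fragments reassemble into the full answer:

      ```python
      import json

      # Abridged SSE stream: message chunks carry $.answer; the final
      # message_end chunk carries no answer text.
      sse = """\
      data: {"event": "message", "answer": "Very", "created_at": 1679586595}
      data: {"event": "message", "answer": "glad", "created_at": 1679586595}
      data: {"event": "message", "answer": "to", "created_at": 1679586595}
      data: {"event": "message", "answer": "see", "created_at": 1679586595}
      data: {"event": "message", "answer": "you", "created_at": 1679586595}
      data: {"event": "message_end", "metadata": {}}
      """

      parts = []
      for line in sse.splitlines():
          line = line.strip()
          if not line.startswith("data: "):
              continue
          chunk = json.loads(line[len("data: "):])
          if "answer" in chunk:  # $.answer is present in message chunks only
              parts.append(chunk["answer"])

      print(" ".join(parts))  # → Very glad to see you
      ```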

      Deep thinking

      In the following HTTP response body, the JSONPath for the deep thinking position is: $.choices[0].delta.reasoning_content.

      data: {"choices":[{"delta":{"content":null,"role":"assistant","reasoning_content":""},"index":0,"logprobs":null,"finish_reason":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"finish_reason":null,"logprobs":null,"delta":{"content":null,"reasoning_content":"Hmm"},"index":0}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":null,"reasoning_content":", the user"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":null,"reasoning_content":" asked a"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":null,"reasoning_content":" basic"},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":null,"reasoning_content":" self-introduction question."},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":"I am DeepSeek","reasoning_content":null},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":"How can","reasoning_content":null},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":" I","reasoning_content":null},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"delta":{"content":" help","reasoning_content":null},"finish_reason":null,"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: {"choices":[{"finish_reason":"stop","delta":{"content":" you?"},"index":0,"logprobs":null}],"object":"chat.completion.chunk","usage":null,"created":1758787252,"system_fingerprint":null,"model":"deepseek-v3.1","id":"chatcmpl-xxx-e30c1"}
      data: [DONE]
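      In a deep thinking stream, reasoning tokens arrive at $.choices[0].delta.reasoning_content and answer tokens at $.choices[0].delta.content, and each chunk populates only one of the two. The following sketch (abridged chunk payloads, not WAF internals) separates the two streams:

      ```python
      import json

      # Abridged deep-thinking chunks: reasoning_content and content
      # are mutually exclusive in any given chunk.
      chunks = [
          '{"choices":[{"delta":{"content":null,"reasoning_content":"Hmm"},"index":0}]}',
          '{"choices":[{"delta":{"content":null,"reasoning_content":", the user"},"index":0}]}',
          '{"choices":[{"delta":{"content":"I am DeepSeek","reasoning_content":null},"index":0}]}',
          '{"choices":[{"delta":{"content":". How can I help you?","reasoning_content":null},"index":0}]}',
      ]

      reasoning, answer = [], []
      for raw in chunks:
          delta = json.loads(raw)["choices"][0]["delta"]
          if delta.get("reasoning_content"):
              reasoning.append(delta["reasoning_content"])
          if delta.get("content"):
              answer.append(delta["content"])

      print("".join(reasoning))  # → Hmm, the user
      print("".join(answer))     # → I am DeepSeek. How can I help you?
      ```

      This is why Deep thinking requires its own JSONPath: the reasoning text travels in a different field from the final answer.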
  4. Set Associate with Protected Object to bind the asset to a protected object. Each asset can be associated with only one protected object.

View and manage assets

On the Asset Management page, you can view and manage your assets.

  • View an asset: In the Protection Status column, you can view the protections configured for the asset. If no protections are configured, the status is Not Protected.

  • Edit an asset: In the Actions column, click Edit to modify the Prompt Position and Response Content for an asset.

  • Delete an asset: In the Actions column, click Delete to remove the asset. Once deleted, the asset is no longer protected.


Next steps

A newly created asset does not provide protection by itself. Configure a prompt attack prevention or content moderation protection template for the asset as needed.

Limitations

  • Once an asset is created, its Match Condition and Associate with Protected Object cannot be modified.

  • Each asset can be associated with only one protected object.

  • Streaming responses support only the SSE protocol.