This topic describes the scenes, label, and suggestion parameters used in Content Moderation SDKs.

The scenes and label parameters

To send a moderation request by calling the Content Moderation API, you must set the scenes parameter to specify the moderation scenario. In the response, the label parameter indicates the risk category of the moderated image.

Content Moderation SDKs allow you to specify multiple moderation scenarios in a single request. For the values of the scenes parameter that represent different moderation scenarios, and the label values that can be returned in each scenario, see the documentation of the relevant Content Moderation API operations.

For example, to call the ImageSyncScanRequest operation to moderate an image for pornography, specify porn in the scenes parameter, as in the sketch below. You can detect risky content in other moderation scenarios in a similar way.
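The following is a minimal sketch of such a request using the Python SDK (aliyun-python-sdk-green together with aliyun-python-sdk-core). The module path, region ID, credentials, and image URL are assumptions based on common SDK usage; adjust them to your environment and verify them against the SDK reference.

```python
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

# Replace the placeholders with your own credentials; the region ID is an example value.
client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")

# The scenes field selects the moderation scenario: "porn" asks the server
# to moderate the image at url for pornography.
task = {"dataId": str(uuid.uuid1()), "url": "https://example.com/sample.jpg"}
request.set_content(json.dumps({"scenes": ["porn"], "tasks": [task]}).encode("utf-8"))

# do_action_with_exception returns the raw JSON response body.
response = json.loads(client.do_action_with_exception(request))
print(response)
```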

You can also specify multiple moderation scenarios at the same time. For example, you can specify porn and ad in the scenes parameter if you need to moderate an image for pornography and ad violations.
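Continuing the sketch above, requesting both scenarios only changes the scenes array in the request body; each scenario then returns its own result with its own label and suggestion.

```python
# Hypothetical follow-up to the previous sketch: moderate the same task
# for both pornography and ad violations in a single request.
request.set_content(json.dumps({"scenes": ["porn", "ad"], "tasks": [task]}).encode("utf-8"))
response = json.loads(client.do_action_with_exception(request))
```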

The suggestion parameter

The suggestion parameter in the response indicates the recommended action to take on the moderated image. The sketch after the following list shows one way to act on this value.
  • If the value of the scenes parameter is porn, ad, or terrorism, the suggestion parameter has the following valid values:
    • pass: The moderated image is normal.
    • review: The image contains suspected violations and requires human review.
    • block: The moderated image contains violations and can be deleted or blocked.
  • If the value of the scenes parameter is qrcode, the suggestion parameter has the following valid values:
    • pass: The moderated image does not contain a QR code.
    • review: The moderated image contains a QR code. In this case, check the value of the qrcodeData parameter in the response to obtain the detected content.
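Building on the request sketch above, the following illustrates one way to branch on the suggestion value for each scene in the parsed response. The response layout used here (code, then data, then per-scene results with scene, suggestion, label, and qrcodeData) is an assumption based on the documented response structure; verify it against the API reference for the operation you call.

```python
# A sketch of acting on the moderation results; field names are assumptions
# drawn from the documented response structure.
if response.get("code") == 200:
    for task_result in response.get("data", []):
        for result in task_result.get("results", []):
            scene = result.get("scene")
            suggestion = result.get("suggestion")
            label = result.get("label")
            if suggestion == "pass":
                print(f"{scene}: image is normal (label={label})")
            elif suggestion == "review":
                if scene == "qrcode":
                    # For the qrcode scene, review means a QR code was found;
                    # qrcodeData holds the decoded content.
                    print(f"qrcode detected: {result.get('qrcodeData')}")
                else:
                    print(f"{scene}: suspected violation (label={label}), send to human review")
            elif suggestion == "block":
                print(f"{scene}: confirmed violation (label={label}), delete or block the image")
```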