This topic describes the scenes, label, and suggestion parameters used in Content Moderation SDKs.

The scenes and label parameters

In a content moderation request, you must set the scenes parameter to specify the moderation scenario. In the response, the label parameter indicates the risk category of the moderated object.

Content Moderation SDKs allow you to specify multiple moderation scenarios in a single request. For the scenes values that represent each moderation scenario and the label values that can be returned in each scenario, see the documentation of the relevant Content Moderation API operations.

For example, to call the ImageSyncScanRequest operation to moderate an image for pornography, set the scenes parameter to porn. Requests for other moderation scenarios are constructed in a similar way.

You can also specify multiple moderation scenarios at the same time. For example, you can specify porn and ad in the scenes parameter if you want to moderate an image for pornography and ad violations.
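The following sketch shows how such a request body might be assembled. The task fields (dataId, url) and the helper function are illustrative assumptions, not the exact shape required by every operation; check the API reference for the operation you call.

```python
import json

# Hypothetical sketch: build the JSON body of an image moderation request
# that checks an image for both pornography (porn) and ad violations (ad).
# The dataId and url fields below are assumed example values.
def build_scan_body(image_url, scenes):
    return {
        "scenes": scenes,                 # moderation scenarios to run
        "tasks": [{
            "dataId": "task-1",           # caller-defined task identifier
            "url": image_url,             # image to moderate
        }],
    }

body = build_scan_body("https://example.com/sample.jpg", ["porn", "ad"])
print(json.dumps(body))
```

Each scenario you list in scenes produces its own result in the response, so a two-scenario request returns a label and suggestion for each scenario.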

The suggestion parameter in the response

The suggestion parameter in the response indicates the action that you are advised to take on the moderated object based on the moderation result.
  • If the value of the scenes parameter is porn, ad, or terrorism, valid values of the suggestion parameter are as follows:
    • pass: The moderated object is normal.
    • review: The moderated object requires human review.
    • block: The moderated object contains violations and can be deleted or blocked.
  • If the value of the scenes parameter is qrcode, valid values of the suggestion parameter are as follows:
    • pass: The moderated object does not contain a QR code.
    • review: The moderated object contains a QR code and requires human review. In this case, check the value of the qrcodeData parameter in the response to obtain the detected content.
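The branching described above can be sketched as follows. The sample result dictionary is a hypothetical payload that mirrors the scene, label, suggestion, and qrcodeData fields described in this topic; actual responses contain additional fields.

```python
# Hypothetical sketch of acting on the suggestion value in a moderation
# result. qrcodeData appears only for the qrcode scene.
sample_result = {
    "scene": "qrcode",
    "label": "qrcode",
    "suggestion": "review",
    "qrcodeData": ["https://example.com/landing"],  # assumed example value
}

def handle_result(result):
    suggestion = result["suggestion"]
    if suggestion == "pass":
        return "normal"                      # no risky content detected
    if suggestion == "review":
        # For the qrcode scene, qrcodeData holds the detected QR content.
        data = result.get("qrcodeData", [])
        return f"human review needed: {data}"
    if suggestion == "block":
        return "delete or block the object"  # confirmed violation
    return "unknown suggestion"

print(handle_result(sample_result))
```

In practice, route review results to a human-review queue and handle block results automatically, since block indicates a confirmed violation.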