This topic provides answers to the frequently asked questions (FAQ) about the moderation effects of the Content Moderation API.

Why is normal content mistaken for abusive content during text moderation? Why does abusive content fail to be detected during text moderation?

Abusive content in text is classified into Severe Offensive Content, Offensive Content, and Impolite Content based on severity. You can modify the policy for machine-assisted text moderation for your business scenario in the Content Moderation console. If abusive content in specific text is not detected, or normal content is mistaken for abusive content, we recommend that you create a custom text pattern or term library and then specify an ignore list or a review list to ignore or review specific terms.

For more information, see Customize policies for machine-assisted moderation and Manage custom text libraries.
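The custom-library workflow above happens in the console, but the text itself is submitted through a moderation request. The following is a minimal sketch of such a request body. The "antispam" scene label, the "/green/text/scan" path, and all field names are assumptions for illustration; in practice the Content Moderation SDK builds and signs the request for you.

```python
# Sketch of a text moderation request body (field names and the
# "antispam" scene label are assumptions for illustration).

def build_text_scan_request(texts):
    """Build a payload for a hypothetical /green/text/scan call."""
    return {
        "scenes": ["antispam"],  # assumed scene label for abuse/spam detection
        "tasks": [
            {"dataId": str(i), "content": text}
            for i, text in enumerate(texts)
        ],
    }

request = build_text_scan_request(["hello", "some user comment"])
```

Terms that you add to the ignore list or review list of an applied custom library are then matched against the `content` field of each task on the server side.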

Why is normal content mistaken for pornographic content during text moderation? Why does pornographic content fail to be detected during text moderation?

Pornographic content in text is classified into Explicit Pornographic Content, Suggestive Content, and Expression about Sexual Organs based on severity. You can modify the policy for machine-assisted text moderation for your business scenario in the Content Moderation console. If pornographic content in specific text is not detected, or normal content is mistaken for pornographic content, we recommend that you create a custom text pattern or term library and then specify an ignore list or a review list to ignore or review specific terms.

For more information, see Customize policies for machine-assisted moderation and Manage custom text libraries.

Why do ads such as QR codes fail to be detected during text moderation?

Ads in text are classified into Telephone Numbers, WeChat/QQ IDs, URLs, and Advertising Slogans. You can modify the policy for machine-assisted text moderation for your business scenario in the Content Moderation console. If ads in specific text are not detected, we recommend that you create a custom text pattern or term library and then specify a blacklist or a review list to block or review specific terms.

For more information, see Customize policies for machine-assisted moderation and Manage custom text libraries.

Why do pornographic images fail to be detected during image moderation?

Pornographic images are classified into Explicit Sexual Content, Suggestive Sexual Content, and Partial Nudity based on the degree of sexual explicitness. You can modify the policy for machine-assisted image moderation for your business scenario in the Content Moderation console. If specific pornographic images are not detected, we recommend that you create a custom image library and select a scenario for detecting pornographic content in images. Then, specify a blacklist or a review list to block or review the specific pornographic content in images.

For more information, see Customize policies for machine-assisted moderation and Manage custom image libraries.
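For image moderation, the request enables the relevant scenario per task. The sketch below shows a payload for the pornographic-content scenario; the "porn" scene label and all field names are assumptions for illustration, and only the /green/image/scan path appears elsewhere in this topic.

```python
# Sketch of an image moderation payload for the pornographic-content
# scenario. The "porn" scene label and field names are assumptions;
# only the /green/image/scan path comes from this topic.

def build_image_scan_request(image_urls, scenes=("porn",)):
    """Build a payload for a hypothetical /green/image/scan call."""
    return {
        "scenes": list(scenes),
        "tasks": [
            {"dataId": f"img-{i}", "url": url}
            for i, url in enumerate(image_urls)
        ],
    }

req = build_image_scan_request(["https://example.com/a.jpg"])
```

Images that you add to a custom image library's blacklist or review list are matched against the submitted images regardless of the model's own verdict.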

Why are the terms added to the ignore list still blocked during text moderation?

Terms added to a custom text library may not take effect for the following reasons:

  • New terms take about 15 minutes to take effect after they are added to a custom text library. We recommend that you try again later.
  • Make sure that the text type and matching method are valid:
    • We recommend that you manage content with five or fewer characters as terms and use the Check after Preprocess Texts matching method.
    • We recommend that you manage content with more than five characters as text patterns.
  • A custom text library takes effect only after it is applied to the corresponding business scenario.

For more information, see Manage custom text libraries.

Why are the images that contain national flags and emblems not blocked during image moderation?

Content Moderation provides the following image moderation scenarios: Pornographic Content Detection, Symbolic Content or Event Detection, Inappropriate Content, and Texts or Objects on Image. To block images that contain national flags and emblems, select the Symbolic Content or Event Detection scenario, and then modify the policy for machine-assisted image moderation for your business scenario in the Content Moderation console. This scenario can detect the following types of images: Specified Face, Symbol, Weapons, Event, Religious Content, Uniforms, and Currency. Select The National Flag and Emblem of the People's Republic of China for Symbol. In addition, make sure that the moderation request specifies the corresponding detection content for the Symbol parameter.

For more information, see Customize policies for machine-assisted moderation and /green/image/scan.
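A request that enables this scenario might look like the sketch below. The "terrorism" scene label (a common label for symbolic content and event detection) and all field names are assumptions for illustration; only the /green/image/scan path comes from this topic.

```python
# Sketch of an image moderation payload that enables the symbolic
# content or event detection scenario, so that national flags and
# emblems can be matched. The "terrorism" scene label and field
# names are assumptions for illustration.

def build_symbol_scan_request(image_url):
    """Build a payload for a hypothetical /green/image/scan call."""
    return {
        "scenes": ["terrorism"],  # assumed label for Symbolic Content or Event Detection
        "tasks": [{"dataId": "flag-check-1", "url": image_url}],
    }
```

The console-side Symbol setting (The National Flag and Emblem of the People's Republic of China) then determines which symbols this scenario flags.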

What do I do if a specific part of a human body in a medical image is detected as pornographic content in Content Moderation?

Image moderation for pornographic content cannot identify whether an image is a medical image. We recommend that you set the dataId parameter in the moderation request to mark medical images. This way, you can perform further human review on the moderation results of such images.

For more information, see /green/image/scan.
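The dataId-tagging approach above can be sketched as follows. The dataId parameter is named in this topic; the url field and the "medical:"/"general:" tag convention are assumptions for illustration.

```python
# Sketch of using dataId (named in this topic) to tag medical images
# so that their moderation results can be routed to human review.
# The url field and the tag prefixes are assumptions for illustration.

def build_medical_image_task(image_url, is_medical):
    """Build one image moderation task with a dataId that marks medical images."""
    prefix = "medical:" if is_medical else "general:"
    return {"dataId": prefix + image_url, "url": image_url}
```

When results come back, tasks whose dataId carries the medical prefix can be filtered out and sent to human reviewers instead of being blocked automatically.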