This topic provides answers to frequently asked questions (FAQ) about the moderation results of the Content Moderation API.

Why is normal content mistaken for abusive content during text moderation? Why does abusive content fail to be detected during text moderation?

Abusive content in text is classified into serious abuse, mild abuse, and colloquialism based on severity. You can modify the policy for machine-assisted text moderation for your business scenario in the Content Moderation console. If abusive content in specific text is not detected, or normal content is mistaken for abusive content, we recommend that you create a custom text pattern or term library. Then, specify an ignore list or a review list to ignore or review specific terms.

For more information, see Customize policies for machine-assisted moderation and Manage custom text libraries.
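As a point of reference, the following is a minimal sketch of a synchronous text moderation call using the Python SDK (aliyun-python-sdk-green, API version 2018-05-09). The credentials, region, and sample text are placeholders, and error handling is omitted.

```python
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import TextScanRequest

# Placeholder credentials and region; replace with your own values.
client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-shanghai")

request = TextScanRequest.TextScanRequest()
request.set_accept_format("JSON")

# The antispam scene covers machine-assisted text moderation,
# including abuse detection.
task = {"dataId": str(uuid.uuid1()), "content": "text to be moderated"}
request.set_content(
    bytearray(json.dumps({"tasks": [task], "scenes": ["antispam"]}), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
if response["code"] == 200:
    for task_result in response["data"]:
        for scene_result in task_result["results"]:
            # suggestion is "pass", "review", or "block"; label names
            # the detected category, for example "abuse" or "normal".
            print(scene_result["scene"],
                  scene_result["suggestion"],
                  scene_result["label"])
```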

Why is normal content mistaken for pornographic content during text moderation? Why does pornographic content fail to be detected during text moderation?

Pornographic content in text is classified into serious pornography, vulgar content, and sexual knowledge based on severity. You can modify the policy for machine-assisted text moderation for your business scenario in the Content Moderation console. If pornographic content in specific text is not detected, or normal content is mistaken for pornographic content, we recommend that you create a custom text pattern or term library. Then, specify an ignore list or a review list to ignore or review specific terms.

For more information, see Customize policies for machine-assisted moderation and Manage custom text libraries.
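The same text scan request shown above applies here; only the returned label differs. The following sketch shows how moderation results might be routed, assuming the suggestion and label fields documented for the text scan API:

```python
def route(scene_result: dict) -> str:
    """Map one per-scene moderation result to an action.

    Terms on a custom review list come back with suggestion "review";
    terms on an ignore list simply do not trigger a hit, so the
    suggestion stays "pass". Field names follow the text scan API.
    """
    suggestion = scene_result["suggestion"]  # "pass" | "review" | "block"
    label = scene_result["label"]            # e.g. "porn", "abuse", "normal"
    if suggestion == "block":
        return "rejected: " + label
    if suggestion == "review":
        return "queued for human review: " + label
    return "published"
```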

Why do ads such as QR codes fail to be detected during text moderation?

Ads in text are classified into phone numbers, WeChat accounts, QQ accounts, URLs, and slogans. You can modify the policy for machine-assisted text moderation for your business scenario in the Content Moderation console. If ads in specific text are not detected, we recommend that you create a custom text pattern or term library. Then, specify a blacklist or a review list to block or review specific terms.

For more information, see Customize policies for machine-assisted moderation and Manage custom text libraries.
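In the text scan response, advertising hits are reported through the label field. The snippet below assumes the label values "ad" for detected ads and "customized" for hits from a custom text library; verify these values against the API reference for your version.

```python
# Assumed label values for the antispam scene: "ad" for detected ads,
# "customized" for hits from a custom text library.
AD_LABELS = {"ad", "customized"}

def is_ad_hit(scene_result: dict) -> bool:
    """Return True if a per-scene result flags advertising content."""
    return scene_result["label"] in AD_LABELS
```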

Why do pornographic images fail to be detected during image moderation?

Pornographic images are classified into purely pornographic, vulgar and indecent, and sexy images based on the degree of sexual explicitness. You can modify the policy for machine-assisted image moderation for your business scenario in the Content Moderation console. If specific pornographic images are not detected, we recommend that you create a custom image library and select a scenario for detecting pornographic content in images. Then, specify a blacklist or a review list to block or review specific images.

For more information, see Customize policies for machine-assisted moderation and Manage custom image libraries.
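A minimal sketch of a synchronous image moderation call with the pornography detection scene, again using the Python SDK (aliyun-python-sdk-green, API version 2018-05-09); the credentials, region, and image URL are placeholders.

```python
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

# Placeholder credentials and region; replace with your own values.
client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")

# The porn scene selects pornography detection for images.
task = {"dataId": str(uuid.uuid1()), "url": "https://example.com/sample.jpg"}
request.set_content(
    bytearray(json.dumps({"tasks": [task], "scenes": ["porn"]}), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
if response["code"] == 200:
    for task_result in response["data"]:
        for scene_result in task_result["results"]:
            # label is e.g. "normal", "sexy", or "porn"; rate is the
            # confidence score of the model.
            print(scene_result["label"],
                  scene_result["suggestion"],
                  scene_result["rate"])
```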

Why are the terms added to the ignore list still blocked during text moderation?

Terms added to a custom text library may not take effect for the following reasons:

  • New terms take effect about 15 minutes after they are added to a custom text library. We recommend that you try again later.
  • Make sure that the text type and matching method are appropriate. The length rule is also sketched in the helper after this list.
    • We recommend that you manage content that contains five or fewer characters as terms and use fuzzy matching.
    • We recommend that you manage content that contains more than five characters as text patterns and use exact matching.
  • The custom text library must be applied to the corresponding business scenario before it takes effect.

For more information, see Manage custom text libraries.
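The length rule above can be encoded in the tooling that prepares entries for upload to a custom text library. The helper below is purely illustrative and not part of the Content Moderation API:

```python
def classify_library_entry(text: str) -> tuple:
    """Suggest how to store an entry in a custom text library.

    Encodes the rule of thumb above: entries of five or fewer
    characters work best as terms with fuzzy matching, longer
    entries as text patterns with exact matching.
    """
    if len(text) <= 5:
        return ("term", "fuzzy match")
    return ("text pattern", "exact match")

print(classify_library_entry("spam"))             # ('term', 'fuzzy match')
print(classify_library_entry("buy cheap pills"))  # ('text pattern', 'exact match')
```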

Why are images that contain national flags and emblems not blocked during image moderation?

Content Moderation provides the following image moderation scenarios: pornography detection, terrorist content detection, ad violation detection, and undesirable scene detection. To block images that contain national flags and emblems, select the terrorist content detection scenario and modify the policy for machine-assisted image moderation for your business scenario in the Content Moderation console. The terrorist content detection scenario can detect the following types of images: figure, symbol, ordnance, incident, religion, public service, and ticket. For flags and emblems, select The National Flag and Emblem of the People's Republic of China under Symbol Recognition. In addition, you must set the scenes parameter to terrorism in the moderation request.

For more information, see Customize policies for machine-assisted moderation and Synchronous moderation.
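A sketch of the corresponding request, identical to the pornography detection example above except that the scenes parameter is set to terrorism; the console policy for terrorist content detection, with symbol recognition enabled, must also be in place for flags and emblems to be blocked.

```python
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

# Placeholder credentials and region; replace with your own values.
client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")

# scenes must include "terrorism" for terrorist content detection,
# the scene that covers national flag and emblem recognition.
task = {"dataId": str(uuid.uuid1()), "url": "https://example.com/image.jpg"}
request.set_content(
    bytearray(json.dumps({"tasks": [task], "scenes": ["terrorism"]}), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(json.dumps(response, indent=2))
```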

What do I do if a part of the human body in a medical image is detected as pornographic content in Content Moderation?

Image moderation for pornographic content cannot identify whether an image is a medical image. We recommend that you set the dataId parameter in the moderation request to mark medical images. This way, the moderation results for these images can be routed to further human review.

For more information, see Synchronous moderation.
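A sketch of how dataId can carry such a marker; the "medical-" prefix is our own convention for this example, not part of the API. The dataId is echoed back in the response, so results for marked images can be sent to human review regardless of the machine suggestion.

```python
import json

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

# Placeholder credentials and region; replace with your own values.
client = AcsClient("<access-key-id>", "<access-key-secret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")

# dataId carries a caller-defined marker; the "medical-" prefix is an
# example convention, not an API feature.
task = {"dataId": "medical-ct-scan-0001",
        "url": "https://example.com/scan.jpg"}  # placeholder URL
request.set_content(
    bytearray(json.dumps({"tasks": [task], "scenes": ["porn"]}), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
for task_result in response.get("data", []):
    # The dataId is echoed back, so medical images can be routed to
    # human review even if the machine suggestion is "block".
    if task_result["dataId"].startswith("medical-"):
        print("route to human review:", task_result["dataId"])
```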