This topic provides answers to the frequently asked questions (FAQ) about Content Moderation features.

What are the features of Content Moderation?

Content Moderation provides the Content Moderation API, Object Storage Service (OSS) violation detection, and site detection features that apply to various scenarios.

For more information, see Features.

What is a custom text library of Content Moderation?

Content Moderation supports custom text libraries of different types. You can use custom text libraries to ensure that moderation results meet specific business needs.

For more information, see Manage custom text libraries.

Can I customize configurations for image moderation in Content Moderation?

Yes, Content Moderation supports custom image libraries. You can use custom image libraries to ensure that moderation results meet specific business needs.

For more information, see Manage custom image libraries.

How do I use the human review feature in Content Moderation?

The Content Moderation console displays the moderation results that are returned by calls to Content Moderation operations. You can use the human review feature to re-review these machine-assisted moderation results as needed.

For more information, see Review data.

Why does the custom text library that I configured in Content Moderation not take effect?

Custom text libraries in Content Moderation include term libraries and text pattern libraries, and support both exact match and fuzzy match. First, check whether your custom text library meets the requirements for custom text libraries. For example, each text pattern in a custom text library must contain at least 10 characters. In addition, if you use the custom text library when you call Content Moderation operations, make sure that the library is applied to the corresponding business scenario. Otherwise, the custom text library does not take effect.
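
The following sketch shows how the business scenario might be passed in a text moderation request so that the custom text library applied to that scenario is used. It assumes the aliyun-python-sdk-green and aliyun-python-sdk-core packages; the bizType value my_site_comments is a hypothetical placeholder, and depending on your SDK version you may also need to configure the endpoint of the Green service.

```python
# A minimal sketch, assuming the aliyun-python-sdk-green and
# aliyun-python-sdk-core packages. "my_site_comments" is a hypothetical
# bizType placeholder for the business scenario to which the custom
# text library is applied.
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import TextScanRequest

client = AcsClient("<accessKeyId>", "<accessKeySecret>", "cn-shanghai")

request = TextScanRequest.TextScanRequest()
request.set_accept_format("JSON")
body = {
    "scenes": ["antispam"],
    "bizType": "my_site_comments",  # must match the scenario of your library
    "tasks": [{"dataId": str(uuid.uuid1()), "content": "text to moderate"}],
}
request.set_content(bytearray(json.dumps(body), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)
```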

For more information about how to use custom text libraries, see Manage custom text libraries.

Can Content Moderation detect spelling or grammatical errors?

No, Content Moderation cannot detect spelling or grammatical errors.

What is the scope of terrorist content detection in Content Moderation?

Terrorist content detection allows you to moderate objects such as images for terrorist content, including bloody content, explosions and smoke, special costumes, logos, weapons, political content, violence, crowds, parades, car accidents, flags, and landmarks. In actual content moderation, you can define the moderation scope for your business by customizing policies for machine-assisted moderation.
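
The following sketch shows how an image might be submitted for terrorist content detection by using the terrorism scene with the Python SDK. The label values in the response indicate which category is hit; check the API reference for the exact label values of your API version.

```python
# A minimal sketch, assuming the aliyun-python-sdk-green and
# aliyun-python-sdk-core packages and the "terrorism" scene of the
# image moderation API.
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

client = AcsClient("<accessKeyId>", "<accessKeySecret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")
body = {
    "scenes": ["terrorism"],
    "tasks": [{"dataId": str(uuid.uuid1()),
               "url": "https://example.com/sample.jpg"}],
}
request.set_content(bytearray(json.dumps(body), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)
```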

For more information, see Customize policies for machine-assisted moderation.

What does the rate parameter indicate in image moderation?

The rate parameter indicates only the confidence level that the Content Moderation model calculates for an image. It does not reflect the risk level of the image. Therefore, we recommend that you determine whether an image contains violations based on the values of the suggestion and label parameters instead of the value of the rate parameter.
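
As an illustration, the following sketch decides how to handle an image based on suggestion and label, and treats rate only as a model confidence value. The response variable is assumed to be the parsed JSON body returned by a synchronous image moderation call.

```python
# A minimal sketch: branch on suggestion and label, never on rate.
def handle_result(response):
    for task in response.get("data", []):
        for result in task.get("results", []):
            suggestion = result["suggestion"]  # pass, review, or block
            label = result["label"]            # the violation category hit
            rate = result["rate"]              # model confidence only
            if suggestion == "block":
                print(f"block: {label} (confidence {rate})")
            elif suggestion == "review":
                print(f"needs human review: {label} (confidence {rate})")
            else:
                print("pass")
```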

For more information about the parameters, see Synchronous moderation.

Can I export a custom text library from Content Moderation?

Yes, you can export multiple terms from a custom text library at a time in the Content Moderation console.

For more information, see Manage custom text libraries.

When will the human review results take effect in the Content Moderation backend?

You can use the human review feature to review machine-assisted moderation results in the Content Moderation console. The human review results take effect in real time. The review results of images and text are automatically added to sample libraries, and the newly added samples take about 15 minutes to take effect.

For more information, see Review data.

How long does it take for a custom image or text library to take effect in Content Moderation?

Content Moderation supports both custom image libraries and custom text libraries. You can use custom image libraries to manage the images that you want to block or skip, and custom text libraries to manage the text that you want to block or skip. You can add and remove image samples in custom image libraries, and add and remove terms and text patterns in custom text libraries. All these operations take effect about 15 minutes after they are performed.

For more information, see Manage custom image libraries and Manage custom text libraries.

Why am I unable to receive callback notifications after a callback URL is specified in Content Moderation?

The Content Moderation API supports callback notifications. When you create a notification plan in the Content Moderation console, you must specify the notification type, such as machine-assisted moderation results or human review results. Then, you must associate the notification plan with the corresponding business scenario before the plan can take effect.

If you still cannot receive callback notifications from Content Moderation after you create a valid notification plan, we recommend that you check whether your server can respond to the POST requests that are sent to the specified callback URL. Make sure that no 403 or 502 errors occur when the callback URL is requested. You can also set the callback parameter in an API request for content moderation to specify a callback URL. After the Content Moderation operation is called, Content Moderation sends a callback notification to the specified callback URL.
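
For reference, the following sketch shows a minimal callback receiver built with Flask. It assumes that the notification arrives as a form-encoded POST request with checksum and content fields, and that the checksum is the SHA-256 hash of your user ID, the seed value, and the content; verify these details against Enable callback notification before you rely on them.

```python
# A minimal callback receiver sketch (Flask). The checksum formula and
# field names below are assumptions; confirm them in the callback
# documentation for your API version.
import hashlib

from flask import Flask, request

app = Flask(__name__)

UID = "<your-account-uid>"   # placeholder
SEED = "<your-seed-value>"   # placeholder: the seed set in the request

@app.route("/moderation/callback", methods=["POST"])
def callback():
    checksum = request.form.get("checksum", "")
    content = request.form.get("content", "")
    expected = hashlib.sha256((UID + SEED + content).encode("utf-8")).hexdigest()
    if checksum != expected:
        return "checksum mismatch", 400
    # Process the moderation result here (e.g., parse the JSON in `content`).
    # Return 200 promptly so that the callback is not treated as failed.
    return "success", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=80)
```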

For more information, see Enable callback notification.

Why am I unable to moderate a long image after I add long image samples to a custom image library?

Content Moderation can moderate an image as a whole if its height does not exceed 400 pixels or if its aspect ratio (height to width) does not exceed 2.5. A long image whose height and aspect ratio both exceed these limits is cropped into frames before it is moderated. As a result, the cropped frames do not hit the long image samples that you added to a custom image library or feedback-based image library, and the long image cannot be moderated based on these samples.
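
The following sketch shows how the frame-cutting behavior for long images might be controlled in a synchronous image moderation request. The interval and maxFrames task fields and their semantics are assumptions based on the image moderation API reference; confirm them in Synchronous moderation.

```python
# A minimal sketch, assuming the aliyun-python-sdk-green and
# aliyun-python-sdk-core packages. The interval and maxFrames task
# fields are assumptions; confirm them against the API reference.
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

client = AcsClient("<accessKeyId>", "<accessKeySecret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")
body = {
    "scenes": ["porn"],
    "tasks": [{
        "dataId": str(uuid.uuid1()),
        "url": "https://example.com/long-image.jpg",
        "interval": 2,    # frame-cutting interval (assumed semantics)
        "maxFrames": 10,  # moderate at most 10 frames of the long image
    }],
}
request.set_content(bytearray(json.dumps(body), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)
```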

For more information about the interval and maxFrames parameters for image moderation, see Synchronous moderation.

Why does a custom policy for machine-assisted moderation configured in the Content Moderation console not take effect when I call a Content Moderation operation?

After you customize a policy for machine-assisted moderation or add a sample to a custom sample library in the Content Moderation console, the custom policy or sample takes about 15 minutes to take effect. We recommend that you try again later. In addition, after you customize a policy for machine-assisted moderation for a business scenario, you must specify that business scenario in your API requests for content moderation. Otherwise, the corresponding moderation policy does not take effect.
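
The following sketch shows how the business scenario might be specified in an image moderation request so that the custom policy configured for that scenario is applied. The bizType value my_app_avatars is a hypothetical placeholder.

```python
# A minimal sketch, assuming the aliyun-python-sdk-green and
# aliyun-python-sdk-core packages. "my_app_avatars" is a hypothetical
# bizType placeholder for the scenario with the custom policy.
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

client = AcsClient("<accessKeyId>", "<accessKeySecret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")
body = {
    "scenes": ["porn"],
    "bizType": "my_app_avatars",  # must match the configured scenario
    "tasks": [{"dataId": str(uuid.uuid1()),
               "url": "https://example.com/avatar.jpg"}],
}
request.set_content(bytearray(json.dumps(body), "utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)
```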

For more information, see Customize policies for machine-assisted moderation.