This topic provides answers to the frequently asked questions (FAQ) about Content Moderation features.

After data such as images, videos, and text submitted by users is moderated in Content Moderation, is the data deleted?

For images, videos, and text that are submitted by using the Content Moderation API, Content Moderation retains the submitted data and the machine-assisted moderation results for seven days. During this period, you can query the data and results. After seven days, the data and results are deleted. For live streams and voice streams that are submitted by using the Content Moderation API, temporary addresses are generated for media segment files during moderation. These temporary addresses are valid for 30 minutes. After 30 minutes, the addresses become invalid and the media segment files are deleted.

If you want to retain the machine-assisted moderation results and media segment files, you must save them before they are deleted. If you have activated Alibaba Cloud Object Storage Service (OSS), we recommend that you enable the feature of storing evidence in OSS buckets.
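
For example, the following sketch shows one way to persist a machine-assisted moderation result to OSS with the oss2 Python SDK before the seven-day retention period ends. The bucket name, endpoint, credentials, and stored payload are placeholders; the payload is whatever JSON your application received from the Content Moderation API.

```python
# -*- coding: utf-8 -*-
# Minimal sketch: persist a machine-assisted moderation result to OSS
# before Content Moderation deletes it after seven days.
# The credentials, endpoint, and bucket name below are placeholders.
import json
import oss2

auth = oss2.Auth("<your-access-key-id>", "<your-access-key-secret>")
bucket = oss2.Bucket(auth, "https://oss-cn-beijing.aliyuncs.com", "my-evidence-bucket")

def save_result(data_id, moderation_result):
    """Store the raw moderation result JSON under a per-task key."""
    key = "moderation-results/{}.json".format(data_id)
    bucket.put_object(key, json.dumps(moderation_result))

# Example: moderation_result is the parsed response of a Content Moderation API call.
save_result("task-001", {"suggestion": "block", "label": "porn", "rate": 99.9})
```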

What are the features of Content Moderation?

Content Moderation provides the Content Moderation API, OSS violation detection, and site detection features, which apply to various scenarios.

For more information, see Functions and features.

What is a custom text library of Content Moderation?

Content Moderation supports custom text libraries of diverse types. You can use custom text libraries to ensure that moderation results meet specific business needs.

For more information, see Manage custom text libraries.

Can I customize configurations for image moderation in Content Moderation?

Yes. Content Moderation allows you to configure custom image libraries to ensure that moderation results meet specific business needs.

For more information, see Manage custom image libraries.

How do I use the human review feature in Content Moderation?

The Content Moderation console displays the moderation results that are returned by the Content Moderation API. You can use the human review feature to review the machine-assisted moderation results as needed.

For more information, see Review machine-assisted moderation results.

Why doesn't the custom text library that I configured in Content Moderation take effect?

Custom text libraries in Content Moderation include term libraries and text pattern libraries. Matching methods include exact match and fuzzy match. Check whether your custom text library meets the requirements. For example, each text pattern in the custom text library must contain at least 10 characters. If you use a custom text library when you call the Content Moderation API, make sure that the custom text library is applied to the specified business scenario. Otherwise, the custom text library does not take effect.
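
If you call the API with the Python SDK, the business scenario is passed in the bizType field of the request body. The following is a minimal sketch that assumes the aliyun-python-sdk-green package; the credentials, region, and the scenario name comment_moderation are placeholders that you must replace with your own values.

```python
# -*- coding: utf-8 -*-
# Minimal sketch: pass the business scenario (bizType) in a text moderation
# request so that the custom text library bound to that scenario is applied.
# The bizType value "comment_moderation" is a placeholder for your own scenario.
import json
import uuid
from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import TextScanRequest

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-beijing")

request = TextScanRequest.TextScanRequest()
request.set_accept_format("JSON")
task = {"dataId": str(uuid.uuid1()), "content": "text to moderate"}
request.set_content(json.dumps({
    "tasks": [task],
    "scenes": ["antispam"],
    "bizType": "comment_moderation"  # must match the scenario configured in the console
}).encode("utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)
```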

For more information about how to use custom text libraries, see Manage custom text libraries.

Can Content Moderation detect spelling or grammatical errors?

No, Content Moderation cannot detect spelling or grammatical errors.

What is the scope of terrorist content detection in Content Moderation?

Terrorist content detection allows you to moderate objects such as images for terrorist content, including bloody content, explosion and smoke, special costumes, logos, weapons, political content, violence, crowds, parades, car accidents, flags, and landmarks. You can define the moderation scope for your business by customizing policies for machine-assisted moderation.

For more information, see Customize policies for machine-assisted moderation.

What does the rate parameter mean in image moderation?

We recommend that you determine whether an image contains violations based on the values of the suggestion and label parameters instead of the value of the rate parameter. The rate parameter indicates only the confidence level that the Content Moderation model generates for an image. This parameter does not reflect the risk level of an image.
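
As an illustration, the following sketch handles a parsed image moderation response based on the suggestion and label fields and treats rate only as informational. The field names follow the response format described in Synchronous moderation; the handling logic itself is a placeholder for your own business rules.

```python
# Minimal sketch: decide how to handle an image based on the suggestion and
# label fields of the moderation response, not on the rate field.
# `response` is assumed to be the parsed JSON returned by image sync scan.

def handle_image(response):
    if response.get("code") != 200:
        raise RuntimeError("moderation request failed: {}".format(response))
    for task in response["data"]:
        for result in task["results"]:
            suggestion = result["suggestion"]  # "pass", "review", or "block"
            label = result["label"]            # category, for example "normal" or "porn"
            rate = result["rate"]              # model confidence only, not a risk level
            if suggestion == "block":
                print("block image, label={}, rate={}".format(label, rate))
            elif suggestion == "review":
                print("send to human review, label={}".format(label))
            else:
                print("pass image")
```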

For more information about the parameters, see Synchronous moderation.

Can I export a custom text library from Content Moderation?

Yes, you can export multiple terms from a custom text library at a time in the Content Moderation console.

For more information, see Manage custom text libraries.

When do the human review results take effect in Content Moderation?

You can use the human review feature to review machine-assisted moderation results in the Content Moderation console. The human review results take effect in real time. The review results of images and text are automatically added to sample libraries, and the newly added samples take about 15 minutes to take effect.

For more information, see Review machine-assisted moderation results.

How long does it take for a custom image or text library to take effect in Content Moderation?

Content Moderation supports custom image libraries, which you can use to manage the images that you want to block or allow, and custom text libraries, which you can use to manage the text that you want to block or ignore. Adding or removing image samples, text patterns, or terms in these libraries takes effect about 15 minutes after the operation is performed.

For more information, see Manage custom image libraries and Manage custom text libraries.

Why am I unable to receive callback notifications after a callback URL is specified in Content Moderation?

The Content Moderation API supports callback notifications. When you create a notification plan in the Content Moderation console, you must specify the notification type, such as machine-assisted moderation results or human review results. Then, you must associate the created notification plan with a business scenario for the notification plan to take effect.

If you still cannot receive a callback notification from Content Moderation after the preceding operations, we recommend that you check whether your server can respond to the POST requests sent to the specified callback URL. Make sure that no 403 or 502 error occurs when the callback URL is requested. You can also set the callback parameter in an API request to specify a callback URL. After the Content Moderation API operation is called, Content Moderation sends a callback notification to the specified callback URL.
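
The following is a minimal sketch of a callback receiver that uses only the Python standard library. It accepts the POST notifications and returns HTTP 200, because error responses such as 403 or 502 cause the callback to fail, as noted above. The port, path handling, and logging are placeholders for your own server.

```python
# Minimal sketch: a callback receiver that accepts the POST notifications
# sent by Content Moderation and returns HTTP 200. Returning an error
# status such as 403 or 502 causes the callback to be treated as failed.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CallbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # the notification payload from Content Moderation
        print("callback received:", body[:200])
        self.send_response(200)  # acknowledge the notification
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), CallbackHandler).serve_forever()
```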

For more information, see Enable callback notifications.

Why am I unable to moderate a long image after I add long image samples to a custom image library?

Content Moderation cannot directly moderate long images whose height exceeds 400 pixels or whose aspect ratio exceeds 2.5. For such a long image, Content Moderation crops the image into multiple frames and then moderates the frames. As a result, the cropped frames do not hit the long image samples that you added to a custom image library or a feedback-based image library, and the long image cannot be matched against these samples.
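
If you still need to moderate a long image, you can let Content Moderation cut it into frames by setting the frame-related task fields. The following sketch assumes the aliyun-python-sdk-green package and uses the interval and maxFrames fields described in Synchronous moderation; the image URL, credentials, and field values are placeholders.

```python
# -*- coding: utf-8 -*-
# Minimal sketch: submit a long image for moderation and let Content
# Moderation cut it into frames. See the Synchronous moderation topic
# for the exact semantics of interval and maxFrames.
import json
import uuid
from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-beijing")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")
task = {
    "dataId": str(uuid.uuid1()),
    "url": "https://example.com/long-image.jpg",  # placeholder long image URL
    "interval": 2,    # frame cutting interval
    "maxFrames": 10,  # maximum number of frames to cut and moderate
}
request.set_content(json.dumps({"tasks": [task], "scenes": ["porn"]}).encode("utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)
```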

For more information about the interval and maxFrames parameters for long image moderation, see Synchronous moderation.

Why doesn't a custom policy for machine-assisted moderation that is configured in the Content Moderation console take effect when I call the Content Moderation API?

After you customize a policy for machine-assisted moderation or add a sample to a custom sample library in the Content Moderation console, the custom policy or sample takes about 15 minutes to take effect. We recommend that you try again later. In addition, after you customize a policy for machine-assisted moderation based on a business scenario, you must specify that business scenario in the API request for content moderation. Then, the corresponding moderation policy takes effect.
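
For example, if your custom policy is bound to a business scenario, the API request must carry that scenario in the bizType field. The following sketch assumes the aliyun-python-sdk-green package; the scenario name profile_photo and the other values are placeholders.

```python
# -*- coding: utf-8 -*-
# Minimal sketch: specify the business scenario (bizType) in an image
# moderation request so that the custom machine-assisted moderation policy
# bound to that scenario is applied. All values are placeholders.
import json
import uuid
from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-beijing")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")
task = {"dataId": str(uuid.uuid1()), "url": "https://example.com/image.jpg"}
request.set_content(json.dumps({
    "tasks": [task],
    "scenes": ["porn", "terrorism"],
    "bizType": "profile_photo"  # must match the scenario configured in the console
}).encode("utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)
```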

For more information, see Customize policies for machine-assisted moderation.

When can I access Content Moderation by using an internal endpoint?

If the Elastic Compute Service (ECS) instance on which your service is deployed resides in the same region as Content Moderation, you can access Content Moderation by using an internal endpoint.

For example, if the ECS instance resides in the China (Beijing) region where Content Moderation is deployed, you can use the internal endpoint green-vpc.cn-beijing.aliyuncs.com to access Content Moderation. Compared with a public endpoint, an internal endpoint provides faster access to Content Moderation. If the ECS instance does not reside in the same region as Content Moderation, you can use only a public endpoint to access Content Moderation.
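
As an illustration, the following sketch overrides the endpoint that the Python SDK resolves for the Green (Content Moderation) product so that requests go to the internal VPC endpoint. The region_provider.modify_point call is an assumption based on the legacy aliyun-python-sdk-core package; verify it against the SDK version you use.

```python
# -*- coding: utf-8 -*-
# Minimal sketch: route SDK requests for Content Moderation (product code
# "Green") in cn-beijing to the internal VPC endpoint instead of the
# public endpoint. modify_point is from the legacy aliyun-python-sdk-core.
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.profile import region_provider

# Override the endpoint for the Green product in the cn-beijing region.
region_provider.modify_point("Green", "cn-beijing", "green-vpc.cn-beijing.aliyuncs.com")

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-beijing")
# Subsequent requests built for the Green product are sent to the VPC endpoint.
```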

For more information about the endpoints of Content Moderation, see Endpoints.