FAQ about the Content Moderation API

Last Updated: Nov 10, 2023

This topic provides answers to the frequently asked questions (FAQ) about the Content Moderation API.

Can multiple moderation results be returned each time an asynchronous operation of Content Moderation is called?

No, multiple moderation results cannot be returned each time an asynchronous operation of Content Moderation is called.

How can I query the statistics on Content Moderation API calls?

The Content Moderation console collects statistics on Content Moderation API calls. You can query the number of times that the Content Moderation API is called to moderate images, videos, and text over the last year. For more information, see View statistics.

Which operation can I call to moderate text?

You can call the /green/text/scan operation to moderate text for violations. For more information about the /green/text/scan operation, see /green/text/scan.
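For reference, the following is a minimal sketch of a text moderation call that uses the Content Moderation SDK for Python. The region, AccessKey placeholders, and the antispam scene value are assumptions based on the SDK reference, not requirements stated in this FAQ.

```python
# Minimal sketch: moderate one text entry with the /green/text/scan operation
# through the Content Moderation SDK for Python. The region, credentials, and
# the "antispam" scene are placeholders; see the SDK reference for the
# authoritative parameter set.
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import TextScanRequest

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-shanghai")

request = TextScanRequest.TextScanRequest()
request.set_accept_format("JSON")
request.set_content(json.dumps({
    "tasks": [{"dataId": str(uuid.uuid1()), "content": "Text to be moderated"}],
    "scenes": ["antispam"],
}).encode("utf-8"))

response = json.loads(client.do_action_with_exception(request))
print(response)  # Per-task moderation results are typically in response["data"].
```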

How do I give feedback on the errors in text moderation results?

If you find that text moderation results are not as expected, you can call the /green/text/feedback operation to provide feedback. For more information, see /green/text/feedback.

How do I view the descriptions of the parameters that are returned after a Content Moderation operation is called?

You can view the descriptions of common response parameters and common HTTP status codes, which are provided in Common parameters.

Can I include the signature information about an API request in the request body?

No, the signature information must be included in the request header. You must set the signature parameter in the Authorization header of an HTTP request to specify the signature information for verification. We recommend that you use Content Moderation SDKs. Content Moderation provides SDKs in various programming languages such as Java, Python, and PHP. For more information, see Common parameters and SDK overview.
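For illustration only, the Authorization header is generally built by signing a canonical string with HMAC-SHA1 and your AccessKey secret. The exact composition of the string to sign is defined in Common parameters, so treat the helper below, including its name, as a hypothetical sketch rather than a drop-in signer.

```python
# Hypothetical sketch of producing the Authorization header value for a raw
# HTTP request. The composition of string_to_sign is defined in Common
# parameters and is not reproduced here.
import base64
import hashlib
import hmac

def build_authorization(access_key_id: str, access_key_secret: str,
                        string_to_sign: str) -> str:
    digest = hmac.new(access_key_secret.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    # The header is sent as: Authorization: acs <AccessKeyId>:<Signature>
    return f"acs {access_key_id}:{signature}"
```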

Can URLs that link to undesirable content in text be detected during text moderation?

No. Text moderation detects only text-based violations; it does not detect URLs that link to undesirable content.

How long does it take to moderate content by calling a Content Moderation operation?

The moderation duration varies depending on the type of object to be moderated.

  • Images: It takes about 300 ms to moderate an image, excluding the download duration.

  • Videos:

    • Video files: A video file can be moderated at a speed about one to six times the playback speed, excluding the download duration. If the moderation speed is six times the playback speed, it takes 1 minute to moderate a 6-minute video.

    • Video streams: The moderation duration of video streams varies depending on the time interval at which a frame is captured. In general, moderation results are returned within 1s after a frame is captured.

  • Text: In general, moderation results are returned within 50 ms.

Can I use Composer to download the Content Moderation SDK for PHP?

Yes, you can use Composer to download the Content Moderation SDK for PHP, provided that your Composer supports PHP 5.3 or later. No tutorial is provided for installing Composer. For more information about how to install the Content Moderation SDK for PHP, see Installation.

Can I call the /green/text/scan operation to moderate English text in Content Moderation?

Yes, you can call the /green/text/scan operation to moderate English text. For more information, see /green/text/scan.

Can I call a video moderation operation to moderate a video whose size is larger than 2 GB in Content Moderation?

By default, the size of a single video to be moderated cannot exceed 200 MB. You can contact the technical support to raise the size limit up to 2 GB as needed. If you want to moderate a video whose size is larger than 2 GB, we recommend that you segment the video into multiple parts and then moderate them. By default, a maximum of 200 frames can be captured from a video. If you want to capture more frames from a large video for moderation, you must set the maxFrames parameter. The maximum value of the maxFrames parameter is 3600 frames. For more information, see /green/video/asyncscan and /green/video/results.
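For reference, the following sketch shows where the maxFrames field can be set in an asynchronous video moderation request sent with the Content Moderation SDK for Python. The URL, scene, and interval values are placeholders, and the interval field name is an assumption taken from the API reference.

```python
# Sketch: raise the number of captured frames for a large video by setting
# maxFrames in the task. The URL, scene, and interval values are placeholders.
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import VideoAsyncScanRequest

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-shanghai")

request = VideoAsyncScanRequest.VideoAsyncScanRequest()
request.set_accept_format("JSON")
request.set_content(json.dumps({
    "tasks": [{
        "dataId": str(uuid.uuid1()),
        "url": "https://example.com/segment-01.mp4",  # one segment of a large video
        "interval": 1,      # capture one frame per second (assumed field name)
        "maxFrames": 3600,  # upper limit stated in this FAQ
    }],
    "scenes": ["porn"],
}).encode("utf-8"))

print(json.loads(client.do_action_with_exception(request)))
```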

What permissions do I need to call a Content Moderation operation? How are the permissions granted?

You can use the AccessKey ID and AccessKey secret of a Resource Access Management (RAM) user to call a Content Moderation operation. Before you call a Content Moderation operation as a RAM user, permissions must be granted to the RAM user. For more information, see Authorize a RAM user to call the Content Moderation API.

What is the size limit of an image to be moderated in Content Moderation?

Content Moderation can moderate an image whose size does not exceed 20 MB, whose height and width each do not exceed 30,000 pixels, and whose total resolution does not exceed 250 million pixels. For more information, see /green/image/scan.

What is a concurrency limit for calling Content Moderation operations?

A concurrency limit specifies the total number of images, videos, or text entries that can be moderated at the same time. This limit applies to both the pay-as-you-go billing method and subscription plans.

The following list describes the default concurrency limits for calling Content Moderation operations to moderate different types of objects.

  • Image: 50. The maximum number of images that can be moderated per second.

  • Video: 20. The maximum number of videos that can be moderated at the same time. Files and streams are not differentiated.

  • Text: 100. The maximum number of text entries that can be moderated per second. Each text entry contains less than 200 characters.

Note
  • Images, videos, and text entries that are moderated within the default concurrency limits are free of charge. To adjust the concurrency limits, contact your sales manager. If you raise the concurrency limits, you will incur extra charges.

  • By default, if an object is moderated in multiple scenarios at the same time, it still counts as a concurrency of 1. For example, if you send an API request to moderate an image for both pornography and terrorist content, the concurrency is 1.

Can I submit the internal URLs of objects for moderation?

No. Only public URLs are supported for moderation. To reduce the risk of data leaks, we recommend that you set a short validity period for the public URLs, for example, 10 minutes.
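For example, if your objects are stored in OSS, you can generate a signed URL that expires after 10 minutes by using the OSS SDK for Python (oss2). The endpoint, bucket name, and object key below are placeholders.

```python
# Sketch: generate a signed OSS URL that expires after 10 minutes, so the
# object is only temporarily downloadable by Content Moderation.
# The endpoint, bucket name, and object key are placeholders.
import oss2

auth = oss2.Auth("<your-access-key-id>", "<your-access-key-secret>")
bucket = oss2.Bucket(auth, "https://oss-cn-shanghai.aliyuncs.com", "<your-bucket>")

signed_url = bucket.sign_url("GET", "images/example.jpg", 10 * 60)  # 600 seconds
print(signed_url)  # Submit this URL in the moderation request.
```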

How do I moderate images that exceed the size limit?

We recommend that you compress the images before you submit them for moderation. As long as the resolution of an image remains greater than 256 × 256 pixels, compression has little impact on the moderation results.
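As an illustration, the following sketch downscales an oversized image with Pillow before submission; the 2048-pixel bound is an arbitrary choice that keeps the result well above 256 × 256 pixels.

```python
# Sketch: downscale an oversized image with Pillow before submitting it for
# moderation. The 2048-pixel bound is arbitrary and keeps the result well
# above 256 x 256 pixels.
from PIL import Image

with Image.open("original.jpg") as img:
    img.thumbnail((2048, 2048))             # shrink in place, keeping the aspect ratio
    img.save("compressed.jpg", quality=85)  # re-encode to reduce the file size
```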

Can I create HTTP requests to call Content Moderation operations?

Yes, you can create HTTP requests to call Content Moderation operations. However, you must sign each request. We recommend that you use the SDKs provided on the Alibaba Cloud official website to call Content Moderation operations.

Can I use the AccessKey pair of a RAM user to call Content Moderation operations?

  • You can use the AccessKey pair of a RAM user to call Content Moderation operations whose API version is V20160621 or later. For more information about the required dependencies, see SDK overview.

  • You cannot use the AccessKey pair of a RAM user to call Content Moderation operations whose API version is earlier than V20160621. If you do, an AccessDenied error is returned.

Does the Content Moderation API provide call examples?

Yes. Call examples are provided in the SDK reference. For more information, see SDK overview.

Note

The version date of a call example changes as the API is updated. We recommend that you check the official documentation on a regular basis.

Can I use the SDK for .NET to call Content Moderation operations?

No, we recommend that you use Content Moderation SDKs for other programming languages. Alternatively, you can create HTTP requests to call Content Moderation operations. For more information, see SDK overview and Request syntax.

Can Content Moderation moderate images in the GIF format?

Yes, Content Moderation can moderate images in the PNG, JPG, JPEG, BMP, GIF, or WEBP format. For more information, see /green/image/scan.

Can I extend the maximum download duration from 3s to a longer period of time when I call a Content Moderation operation?

No, you cannot extend the maximum download duration. If download errors frequently occur when you call a Content Moderation operation to moderate an image, check whether the image URL is accessible and whether the image can be downloaded within 3s. When you make API requests, use the Content Moderation endpoint in the region nearest to the region where your server resides. For more information, see Endpoints.

How many images can be moderated at most each time the /green/image/scan operation is called in Content Moderation?

A maximum of 100 images can be moderated at a time. To moderate 100 images at a time, you must raise the concurrency limit to a value greater than 100. By default, a maximum of 50 images, 100 text entries, or 20 videos can be moderated at a time. For more information, see /green/image/scan and Pricing.

Can I call a single Content Moderation operation to simultaneously moderate content in multiple scenarios, such as pornography detection and terrorist content detection?

Yes, you can call a single Content Moderation operation to simultaneously moderate content in multiple scenarios. To do this, set the scenes parameter in an API request to multiple scenarios. For example, you can set the scenes parameter to ["porn","terrorism"] to detect pornography and terrorist content in images. If you specify multiple scenarios for moderation at a time, you are charged the cumulative fee of all scenarios. The fee of each scenario equals the number of images that are moderated in the scenario multiplied by the unit price of the scenario. For more information, see /green/image/scan and Pricing.
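For reference, the following sketch submits two image URLs in one /green/image/scan call and sets the scenes parameter to both pornography detection and terrorist content detection. The URLs and region are placeholders, and the class name comes from the Content Moderation SDK for Python.

```python
# Sketch: moderate two images in one /green/image/scan call, in both the
# "porn" and "terrorism" scenes. Each image is billed once per scene.
import json
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import ImageSyncScanRequest

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-shanghai")

request = ImageSyncScanRequest.ImageSyncScanRequest()
request.set_accept_format("JSON")
request.set_content(json.dumps({
    "tasks": [
        {"dataId": str(uuid.uuid1()), "url": "https://example.com/a.jpg"},
        {"dataId": str(uuid.uuid1()), "url": "https://example.com/b.jpg"},
    ],
    "scenes": ["porn", "terrorism"],
}).encode("utf-8"))

print(json.loads(client.do_action_with_exception(request)))
```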

What domain names and ports are available for calling Content Moderation operations?

If you need to configure a network security policy, we recommend that you allow access to *.aliyuncs.com and open ports 80 and 443.

How do I call a video moderation operation in Content Moderation to moderate a video in ApsaraVideo VOD?

To call a video moderation operation in Content Moderation to moderate a video in ApsaraVideo VOD, you cannot directly submit the ID of the video. You must submit a sequence of frames that are captured from the video or a URL that can be used to access the video. For more information, see /green/video/asyncscan and /green/video/results.

Can Content Moderation moderate M3U8 video files?

No, Content Moderation cannot moderate M3U8 video files. Content Moderation can moderate video files in the following formats: AVI, FLV, MP4, MPG, ASF, WMV, MOV, WMA, RMVB, RM, FLASH, and TS. For more information, see /green/video/asyncscan and /green/video/results.

How long does an asynchronous task for video moderation take?

The duration of an asynchronous video moderation task varies depending on the type of object to be moderated and on the time that is required to download the object.

  • Video files: A video file can be moderated at a speed about one to six times the playback speed, excluding the download duration. If the moderation speed is six times the playback speed, it takes 1 minute to moderate a 6-minute video.

  • Video streams: The moderation duration of video streams varies depending on the time interval at which a frame is captured. In general, moderation results are returned within 1s after a frame is captured.

Can I call Content Moderation operations in a region of the United States to moderate videos?

Yes, you can call Content Moderation operations in a region of the United States to moderate videos. For more information, see Endpoints.

What are the differences between the /green/video/syncscan and /green/video/asyncscan operations in Content Moderation?

To call the /green/video/syncscan operation, you must submit a sequence of frames that are captured from the video to be moderated. If you want to submit a video URL to specify the video to be moderated, we recommend that you call the /green/video/asyncscan operation.

The /green/video/asyncscan operation can be called to moderate video files and video streams. To moderate a video file, you can submit a sequence of frames that are captured from the video file or specify the URL of the video file. However, you cannot obtain the moderation results of an asynchronous moderation task in real time. To obtain the moderation results, you can set the callback parameter in the API request, or call the /green/video/results operation to poll the moderation results. For more information, see /green/video/syncscan, /green/video/asyncscan, and /green/video/results.
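As a rough sketch with the Content Moderation SDK for Python, an asynchronous video scan is submitted first and the returned taskId is then polled through the results operation. The URL, scene, polling interval, and the meaning of the per-task 280 code are assumptions based on the API reference.

```python
# Sketch: submit a video URL with /green/video/asyncscan, then poll
# /green/video/results with the returned taskId instead of using a callback.
import json
import time
import uuid

from aliyunsdkcore.client import AcsClient
from aliyunsdkgreen.request.v20180509 import (VideoAsyncScanRequest,
                                              VideoAsyncScanResultsRequest)

client = AcsClient("<your-access-key-id>", "<your-access-key-secret>", "cn-shanghai")

# Submit the asynchronous moderation task.
scan = VideoAsyncScanRequest.VideoAsyncScanRequest()
scan.set_accept_format("JSON")
scan.set_content(json.dumps({
    "tasks": [{"dataId": str(uuid.uuid1()), "url": "https://example.com/video.mp4"}],
    "scenes": ["porn"],
}).encode("utf-8"))
task_id = json.loads(client.do_action_with_exception(scan))["data"][0]["taskId"]

# Poll for the results; a per-task code of 280 is assumed to mean "processing".
results = VideoAsyncScanResultsRequest.VideoAsyncScanResultsRequest()
results.set_accept_format("JSON")
results.set_content(json.dumps([task_id]).encode("utf-8"))

while True:
    data = json.loads(client.do_action_with_exception(results))["data"][0]
    if data["code"] == 200:
        print(data)
        break
    time.sleep(30)  # poll at a moderate interval
```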

Can I set the callback parameter in an API request for asynchronous image moderation that is implemented by using the Content Moderation SDK for Java?

Yes, you can set the callback parameter in an API request for asynchronous image moderation. For more information, see Image moderation.

What does the bizType parameter specify in Content Moderation?

The bizType parameter specifies a business scenario. Each business scenario corresponds to a moderation policy. Before you use the Content Moderation API, we recommend that you create custom business scenarios based on your business requirements. After you customize a moderation policy for your business scenario, you can specify the business scenario in an API request for content moderation. In this case, the corresponding moderation policy takes effect. For more information, see Customize policies for machine-assisted moderation.
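For illustration, the bizType field is passed at the top level of the moderation request body, alongside scenes and tasks; the scenario name below is a hypothetical placeholder.

```python
# Sketch: apply the moderation policy of a custom business scenario by setting
# bizType in the request body. "my_business_scene" is a hypothetical name.
import json

body = {
    "bizType": "my_business_scene",   # name of your custom business scenario
    "scenes": ["antispam"],
    "tasks": [{"dataId": "task-001", "content": "Text to be moderated"}],
}
payload = json.dumps(body).encode("utf-8")  # pass this to request.set_content(...)
```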

What is the purpose of associating a text library with multiple business scenarios in Content Moderation?

When you create a custom text or image library, we recommend that you associate the custom text or image library with a business scenario to which the library applies. For example, your text library is associated with Business Scenario A, and you have specified Business Scenario A in an API request for text moderation. In this case, the text library that is associated with Business Scenario A is used for text moderation. Otherwise, all enabled text libraries are used for text moderation. For more information, see Manage custom text libraries.

Why is the value of the checksum parameter in the callback notification different from the calculated value after I call the /green/video/asyncscan operation?

The value of the checksum parameter is generated by concatenating a string in the <UID> + <Seed> + <Content> format and hashing it by using the Secure Hash Algorithm 256 (SHA-256) algorithm. UID indicates the ID of your Alibaba Cloud account. You can obtain the ID in the Alibaba Cloud Management Console. To verify that the data in the callback notification has not been tampered with, use the SHA-256 algorithm to compute the same string when your server receives the callback notification, and then compare the result with the received checksum parameter. For more information, see Enable callback notifications.
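A minimal verification sketch in Python, assuming the <UID> + <Seed> + <Content> concatenation order described above and a hexadecimal checksum:

```python
# Sketch: verify the checksum of a callback notification. The concatenation
# order follows the <UID> + <Seed> + <Content> format described above; the
# hexadecimal encoding of the digest is an assumption.
import hashlib

def verify_checksum(uid: str, seed: str, content: str, received_checksum: str) -> bool:
    expected = hashlib.sha256((uid + seed + content).encode("utf-8")).hexdigest()
    return expected == received_checksum
```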

Why is no dataId returned after I call the /green/text/scan operation in Content Moderation?

If you have specified dataId in the API request for the /green/text/scan operation, dataId is returned after you call the operation. For more information, see /green/text/scan.

Why are different labels returned for the same image in single-scenario moderation and multi-scenario moderation?

In general, this occurs because the configurations used for multi-scenario moderation differ from those used for single-scenario moderation, so the same image can be labeled differently in the two cases. We recommend that you contact the algorithm engineers to check whether the scenario configurations differ. Alternatively, you can moderate the same image separately in each scenario. For more information, see /green/image/scan and Image moderation.

Why is the context parameter not returned in text moderation results?

The context parameter indicates the risky terms that the moderated text hits. If the moderated text hits only other policies, such as algorithm models or text patterns, this parameter is not returned. For more information, see /green/text/scan.

Why is the filteredContent parameter returned but the context parameter not returned in text moderation results?

The filteredContent parameter indicates the text that is returned after hit terms in the moderated text are redacted with asterisks (*). If the moderated text hits specific terms or text patterns in your custom text library, this parameter is returned. The context parameter indicates the risky terms that the moderated text hits. If the moderated text hits other policies such as algorithm models or text patterns, this parameter is not returned. For more information, see /green/text/scan.

Why do text moderation results contain no emojis?

Content Moderation cannot recognize emojis in text. Emoji characters are filtered out in the returned text moderation results.

Do the accuracy and recall rate of synchronous image moderation differ from those of asynchronous image moderation in Content Moderation?

No, synchronous image moderation and asynchronous image moderation have the same moderation effects. The only difference is that they are implemented by different operations.

Why am I unable to download the ClientUploader utility class that is used to moderate local files and binary files for the Content Moderation SDK for Java?

You must download the ClientUploader utility class for the Content Moderation SDK for Java and import it to your project. For more information about the download URL and procedure, see Installation.

Why does the aliyunsdkcore library fail to be installed for the Content Moderation SDK for Python 3.5.4 and 3.8.8?

We recommend that you use a mainstream Python 3.x version to install the Content Moderation SDK for Python. If the aliyunsdkcore library still fails to be installed, you can download the aliyunsdkcore library and import it into your project. For more information, see Installation.

How do I install the aliyunsdkgreenextension utility class of the Content Moderation SDK for Python?

You must download the aliyunsdkgreenextension utility class and import it into your project. For more information, see Installation.

You must import the aliyunsdkgreenextension utility class to your project by using the following code:

from aliyunsdkgreenextension.request.extension import HttpContentHelper

Can a custom term library in Content Moderation contain terms in languages other than English?

No, a custom term library in Content Moderation can contain only English letters and digits.

How long is the validity period of an OSS URL for an image or a video that is uploaded from a local machine to OSS for content moderation?

The validity period of an Object Storage Service (OSS) URL is 1 hour.

How am I charged for moderating a live video stream in Content Moderation?

The expense for moderating a live video stream depends on the number of frames captured from the live video stream. To calculate the number of captured frames, divide the duration of the live video stream by the time interval at which a frame is captured.

For example, the duration of a live video stream is 1 hour and a frame is captured every 5s. In this case, the number of captured frames is calculated by using the following formula: 3,600 seconds/5 seconds = 720. Therefore, you are charged for the 720 captured frames.
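The same calculation expressed as a small sketch:

```python
# Sketch: number of billable captured frames for a live stream.
def captured_frames(duration_seconds: int, interval_seconds: int) -> int:
    return duration_seconds // interval_seconds

print(captured_frames(3600, 5))  # 720 frames for a 1-hour stream sampled every 5s
```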

How are the moderation results of a live stream returned to the callback URL in Content Moderation?

In Content Moderation, the moderation results of a live video stream are separately returned. Each time a violation is detected, a moderation result is returned. After the moderation of the live stream is complete, the overall moderation results are returned.

Why does my callback URL still receive data after Content Moderation stops moderating a live stream?

After the API call for moderating a live stream stops, the corresponding moderation task stops. However, your callback URL may still receive some data for a short period because of processing latency.

Does a task for moderating a live stream stop if the live stream is interrupted or unavailable after the task is submitted?

If the live stream is interrupted or unavailable after the task is submitted, Content Moderation requests the live stream three times at specific intervals. The minimum interval is 10s. If the live stream still cannot be obtained within 30s, the task stops.

When is the status code 200 returned in live stream moderation?

If the moderation is successful, the status code 200 is returned. While the live stream is ongoing, the status code 280 is returned. After the live stream ends or is interrupted, the status code 200 is returned.