Content Moderation: Introduction to Image Moderation 2.0 and its billing method

Last Updated: Nov 02, 2023

This topic describes the features and billing method of Image Moderation 2.0.

1. Introduction to Image Moderation 2.0

Feature description

You can call the Image Moderation 2.0 API to identify whether an image contains content or elements that violate regulations on network content dissemination, disrupt the content order of a platform, or degrade the user experience. Image Moderation 2.0 supports more than 40 content risk labels and more than 40 risk control items. Based on the risk labels and confidence scores returned by API calls, together with your business scenarios and platform-specific content governance rules, you can develop further moderation or governance measures for specific image content.
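The returned risk labels and confidence scores are intended to drive your own governance logic. The following is a minimal sketch of such logic; the response structure (the `Result`, `Label`, and `Confidence` fields and the `nonLabel` value) and the threshold values are illustrative assumptions, not the confirmed API schema:

```python
# Hypothetical example of acting on Image Moderation 2.0 results.
# The response fields below are illustrative assumptions; check the
# API reference for the actual response schema.

def decide(moderation_result, block_threshold=80, review_threshold=60):
    """Map risk labels and confidence scores to a governance action.

    Thresholds are example values; tune them to your platform's rules.
    """
    max_confidence = 0
    for item in moderation_result.get("Result", []):
        label = item.get("Label", "nonLabel")
        confidence = item.get("Confidence", 0)
        if label != "nonLabel":  # "nonLabel" assumed to mean "no risk found"
            max_confidence = max(max_confidence, confidence)
    if max_confidence >= block_threshold:
        return "block"       # high risk: remove the image
    if max_confidence >= review_threshold:
        return "review"      # uncertain: send to human review
    return "pass"            # low risk: publish

# Example with a made-up response that carries one risk label
sample = {"Result": [{"Label": "pornographic_adultContent", "Confidence": 81.2}]}
print(decide(sample))  # -> block
```

A per-label threshold table (rather than one global threshold) is a common refinement once platform-specific governance rules are in place.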

Version comparison

Compared with Image Moderation 1.0, Image Moderation 2.0 provides more risk types and richer risk labels, and supports more flexible console configurations.

Comparison item: Default QPS

  • Image Moderation 1.0: 50

  • Image Moderation 2.0: 100

Comparison item: Supported scope of risk moderation

  • Image Moderation 1.0: Pornography, terrorist content, prohibited text in images, undesirable scenes, special logos, and QR codes.

  • Image Moderation 2.0: Common baseline moderation, which moderates pornography (including pornographic text in images), suggestive content, terrorist content (including terrorism text in images), prohibited content (including prohibited text in images), special flags, undesirable scenes, abusive content, and special elements.

Comparison item: Supported labels for risk moderation

  • Image Moderation 1.0: 40+. For more information about returned labels, see label.

  • Image Moderation 2.0: 100+. For more information about returned labels, see Descriptions of risk labels.

Comparison item: Console

  • Image Moderation 1.0: Supports settings of moderation items and settings of custom image libraries.

  • Image Moderation 2.0: Supports settings of moderation items and settings of custom image libraries.

Comparison item: Billing

  • Image Moderation 1.0: You are charged based on risk scenes. Fees = Number of moderated images × Number of risk scenes. You are separately charged for each of the following risk scenes: pornography, terrorist content, prohibited text in images, undesirable scenes, special logos, and QR codes.

  • Image Moderation 2.0: You are charged by service. Fees = Number of moderated images × Number of services. You are separately charged for each of the following services: common baseline moderation.
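The difference between the two billing formulas can be seen with a quick calculation. This sketch only counts billable units for the same workload; the number of enabled risk scenes is an example, and per-unit prices are covered in the billing section below:

```python
# Billable units produced by each billing model for the same workload.
images = 10_000

# Image Moderation 1.0: charged per enabled risk scene.
risk_scenes = ["pornography", "terrorist content", "QR codes"]  # example: 3 scenes
units_v1 = images * len(risk_scenes)

# Image Moderation 2.0: charged per service called.
services = ["baselineCheck_global"]  # common baseline moderation only
units_v2 = images * len(services)

print(units_v1)  # 30000 billable moderations under 1.0
print(units_v2)  # 10000 billable moderations under 2.0
```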

Service description

The following describes the services supported by Image Moderation 2.0.

Service: Common baseline moderation (baselineCheck_global)

Content to be moderated: Illegal content in images, such as pornography, suggestive content, terrorist content, prohibited content, special logos, undesirable content, abusive content, and special elements. Both graphic content and text content in images are moderated. Supported text languages: Chinese, English, French, German, Indonesian, Malay, Portuguese, Spanish, Thai, Vietnamese, Japanese, Arabic, Filipino, Hindi, Turkish, Russian, Italian, and Dutch. For more information about the items that can be moderated, see the Content Moderation console.

Use scenario: Detects prohibited or inappropriate content in images. We recommend that you moderate all images that are accessible over the Internet.
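As a rough illustration, a request to the common baseline moderation service names the service and passes the image location in service parameters. The field names below (`Service`, `ServiceParameters`, `imageUrl`, `dataId`) and the JSON-encoded parameter string are assumptions based on common Alibaba Cloud API conventions, not the confirmed request schema; check the API reference before use:

```python
import json

def build_request(image_url, data_id):
    """Build an illustrative request body for baselineCheck_global.

    Field names are assumptions, not the confirmed API schema.
    """
    return {
        "Service": "baselineCheck_global",
        "ServiceParameters": json.dumps({
            "imageUrl": image_url,   # publicly accessible image URL
            "dataId": data_id,       # your own identifier for this image
        }),
    }

req = build_request("https://example.com/photo.jpg", "img-0001")
print(req["Service"])  # -> baselineCheck_global
```

In practice this body would be sent through an Alibaba Cloud SDK client, which also handles signing and endpoint selection.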

2. Billing method

Pay-as-you-go

After you activate the Image Moderation 2.0 service, the default billing method is pay-as-you-go. You are not charged if you do not call the service. The following billing rules apply to Image Moderation 2.0.

Moderation type: Common image moderation (image_standard)

Supported business scenario (service): Common baseline moderation (baselineCheck_global)

Unit price: USD 0.6 per thousand calls

Note

You are charged every time you call the common baseline moderation service.

Note

The pay-as-you-go fees for Image Moderation 2.0 are billed once every 24 hours. In the billing details, the moderationType field corresponds to the moderation type in the preceding billing rules. You can view the fees on the Bill Details page.
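For example, at the listed unit price of USD 0.6 per thousand calls, moderating 500,000 images through the common baseline moderation service works out as follows (the call volume is an example):

```python
calls = 500_000                 # images moderated in the billing period
unit_price_per_thousand = 0.6   # USD, common baseline moderation

fee = calls / 1000 * unit_price_per_thousand
print(f"USD {fee:.2f}")  # -> USD 300.00
```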

3. Usage notes

Description

You can call the Image Moderation 2.0 service by using the SDKs or the Image Moderation 2.0 API.

Operations in the console

  • We recommend that you modify the image moderation policy in the Content Moderation console before you use the Image Moderation 2.0 service for the first time.

  • You can modify the image moderation scope, configure differentiated policies for different services, view call results, and query service usage in the console.

For more information, see Console guide.