Content Moderation: Introduction and billing instructions for enhanced image moderation

Last Updated: Dec 05, 2025

This topic introduces the features and billing of Image Moderation 2.0.

Introduction to Image Moderation 2.0

Features

The Image Moderation 2.0 API identifies whether images contain content or elements that violate regulations on internet content distribution, disrupt platform content order, or degrade user experience. It supports 40+ content risk labels and 40+ risk control items. Based on the rich risk labels and confidence scores returned by the API, you can establish further review or governance measures for specific image content according to industry regulations or your platform's content governance rules.
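
As a rough illustration of how you might act on the returned risk labels and confidence scores, the minimal Python sketch below applies per-label confidence thresholds to decide whether an image needs manual review. The label names, threshold values, and result structure are illustrative assumptions, not the actual API response schema.

```python
# Minimal sketch of a label/confidence-based review policy.
# Label names, thresholds, and the result structure are illustrative
# assumptions, not the actual Image Moderation 2.0 response schema.
REVIEW_THRESHOLDS = {
    "pornographic": 70,  # route to manual review at or above this confidence
    "sexy": 85,
    "violent": 60,
}

def decide(results):
    """results: list of {"label": str, "confidence": float} entries."""
    for item in results:
        threshold = REVIEW_THRESHOLDS.get(item["label"])
        if threshold is not None and item["confidence"] >= threshold:
            return "manual_review"
    return "pass"

print(decide([{"label": "sexy", "confidence": 91.2}]))  # -> manual_review
```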

Version comparison

Image Moderation 2.0 provides more risk types and richer risk labels compared to Image Moderation 1.0, and supports more comprehensive and flexible console configuration features.

Default QPS
  • Image Moderation 2.0: 100
  • Image Moderation 1.0: 50

Supported risk detection scope
  • Image Moderation 2.0:
      • Baseline Check: Simultaneously detects pornography (including text-image pornography), sexy content, terrorist content (including text-image terrorist content), prohibited content (including text-image prohibited content), flags, inappropriate content, abuse, and special elements.
      • Image Moderation for Large and Small Model Integration: Comprehensively applies image moderation large models and expert model capabilities to simultaneously detect pornography, sexy content, politically sensitive content, terrorist content, prohibited content, religious content, advertising traffic diversion, inappropriate content, and other risk content.
      • AIGC image detection: Detects whether the image is generated by AIGC.
  • Image Moderation 1.0: Pornography, politically sensitive and terrorist content, text violations, inappropriate scenes, special logos, QR codes

Supported risk detection labels
  • Image Moderation 2.0: 100+
    Note: For detailed return labels, see Risk label interpretation table.
  • Image Moderation 1.0: 40+
    Note: For detailed return labels, see label.

Console
  • Image Moderation 2.0:
      1. Supports detection item settings
      2. Supports custom image library settings
      3. Supports custom dictionary settings
      4. Supports service copying (for multiple business requirements)
      5. Supports 30-day result query
    Note: For more feature introductions, see Console operation guide.
  • Image Moderation 1.0:
      1. Supports detection item settings
      2. Supports custom image library settings
      3. Supports 7-day result query

Supported image formats
  • Image Moderation 2.0: PNG, JPG, JPEG, BMP, WEBP, TIFF, SVG, GIF, ICO, HEIC
  • Image Moderation 1.0: PNG, JPG, JPEG, BMP, GIF, WEBP

Billing
  • Image Moderation 2.0: Billed by service, with each service detecting multiple risks simultaneously.
    Cost = Number of images × Number of services
    Billed separately for the following services:
      • Baseline Check
      • Image Moderation for Large and Small Model Integration
      • AIGC image detection
  • Image Moderation 1.0: Billed by risk scenario (scene).
    Cost = Number of images × Number of risk scenarios
    Billed separately for the following risk scenarios:
      • Pornography detection
      • Politically sensitive and terrorist content
      • Text violations
      • Inappropriate scenes
      • Special logos
      • QR codes
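
To make the two billing formulas above concrete, the following minimal Python sketch counts billable calls for one batch of images under each version. The image count and the chosen services and risk scenarios are made-up inputs for illustration only.

```python
# Billable calls under each version:
#   Image Moderation 2.0: number of images x number of services
#   Image Moderation 1.0: number of images x number of risk scenarios
images = 1_000

# 2.0 example: run Baseline Check and AIGC image detection on every image.
services_2_0 = ["baselineCheck_global", "aigcDetector_global"]
billable_2_0 = images * len(services_2_0)    # 2,000 billable calls

# 1.0 example: run pornography detection and QR code detection on every image.
scenarios_1_0 = ["pornography detection", "QR codes"]
billable_1_0 = images * len(scenarios_1_0)   # 2,000 billable calls

print(billable_2_0, billable_1_0)
```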

Service description

All services supported by Image Moderation 2.0 are as follows:

Common scenario
  • Service: Baseline Check (baselineCheck_global)
  • Detection content: Detects pornography, sexy content, terrorist content, prohibited content, flags, inappropriate content, abuse, special elements, and other risk content in images, covering both the image content and any text in the image (supports Chinese, English, French, German, Indonesian, Malay, Portuguese, Spanish, Thai, Vietnamese, Japanese, Arabic, Filipino, Hindi, Turkish, Russian, Italian, and Dutch). For detailed detectable items, see the Content Moderation Console.
  • Applicable scenarios: Detects whether images contain content that is illegal or inappropriate for distribution. It is recommended to run this detection on all images that are exposed to the open internet.

Moderation large model
  • Service: Image Moderation for Large and Small Model Integration (postImageCheckByVL_global)
  • Detection content: Combines image moderation large models and expert model capabilities to identify pornography, sexy content, politically sensitive content, terrorist content, prohibited content, religious content, flags, advertising traffic diversion, inappropriate content, abuse, and other violations in images, and can return detailed labels. For detailed detectable items, see the Content Moderation Console.
  • Applicable scenarios: For image moderation scenarios where detection quality is the top priority. It is recommended to choose this service when high effectiveness must be guaranteed. For more information, see Large model-based image moderation enhanced service.

AIGC scenario
  • Service: AIGC image detection (aigcDetector_global)
  • Detection content: Detects whether the input image in a request is suspected to be generated by AIGC.
  • Applicable scenarios: Determines whether images are generated by AIGC, across various scenarios. It is recommended to use this service when you need to identify the source of images. For more information, see Image Moderation Enhanced 2.0 AIGC scenario detection service.
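
The service codes in parentheses above are what you pass when calling the Image Moderation 2.0 API. The following minimal Python sketch only assembles a request payload per service for one image; the parameter names (Service, ServiceParameters, imageUrl, dataId) are assumptions for illustration and should be confirmed against the API reference, and a real call must be sent through the SDK or a signed HTTPS request.

```python
import json

# Minimal sketch of a per-service request payload for one image.
# Parameter names (Service, ServiceParameters, imageUrl, dataId) are
# assumptions; confirm them against the Image Moderation 2.0 API reference.
def build_request(service: str, image_url: str, data_id: str) -> dict:
    return {
        "Service": service,  # e.g. "baselineCheck_global"
        "ServiceParameters": json.dumps({
            "imageUrl": image_url,
            "dataId": data_id,  # your own identifier for tracing results
        }),
    }

# One request per service you want to run (and be billed for) on this image.
for service in ("baselineCheck_global", "aigcDetector_global"):
    print(build_request(service, "https://example.com/sample.jpg", "img-0001"))
```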

Billing instructions

Pay-as-you-go

When you activate the Image Moderation 2.0 service, the default billing method is pay-as-you-go. You are not charged if you do not call the service. The billing for the Image Moderation 2.0 API is as follows.

Moderation type: Standard image moderation (image_standard)
  • Supported business scenarios (services):
      • General baseline detection: baselineCheck_global
      • AIGC image generation determination: aigcDetector_global
  • Unit price: 0.6 USD/thousand calls
    Note: Each call to a standard image moderation service is billed once, based on actual call volume. For example, 100 calls to the General baseline detection service cost 0.06 USD.

Moderation type: Advanced image moderation (image_advanced)
  • Supported business scenarios (services):
      • Large and small model fusion image moderation service: postImageCheckByVL_global
  • Unit price: 1.2 USD/thousand calls
    Note: Each call to the Large and small model fusion image moderation service is billed once, based on actual call volume. For example, 100 calls to this service cost 0.12 USD.

Note

Pay-as-you-go usage of Content Moderation 2.0 is metered and billed once every 24 hours. In the billing details, moderationType corresponds to the moderation type field above. You can view bill details.
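
As a quick check on the unit prices above, the following minimal Python sketch estimates a pay-as-you-go bill from call counts. The monthly call volumes are made-up inputs; only the unit prices come from the table.

```python
# Unit prices from the table above, converted to USD per call.
PRICE_PER_CALL = {
    "image_standard": 0.6 / 1000,  # baselineCheck_global, aigcDetector_global
    "image_advanced": 1.2 / 1000,  # postImageCheckByVL_global
}

# Hypothetical monthly call volumes per moderation type.
calls = {"image_standard": 120_000, "image_advanced": 30_000}

total = sum(PRICE_PER_CALL[t] * n for t, n in calls.items())
print(f"Estimated monthly cost: {total:.2f} USD")  # 72.00 + 36.00 = 108.00 USD
```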

Usage instructions

Connection instructions

Image Moderation Enhanced supports both SDK integration and native HTTPS integration.

Console operation instructions

  • It is recommended that you adjust the image moderation policy in the console before you use the service for the first time.

  • You can modify image moderation detection scope, configure differentiated policies for different businesses, configure custom image libraries and custom dictionaries, view call results, and query usage in the console.

  • You can perform visual effect testing in the console. It is recommended to perform effect testing after adjusting the image moderation policy.

For specific operations, see Console operation guide.