
Content Moderation: LLM-based image moderation service

Last Updated: Dec 01, 2025

Alibaba Cloud Content Moderation Image Moderation 2.0 introduces a Tongyi-based large model for image moderation. This service comprehensively detects non-compliant content in images, such as pornography, suggestive content, political content, terrorism, prohibited items, religious content, spam ads, and undesirable content. The service also supports returning the raw results from the large model. This topic describes how to use the large model service for image moderation.

Note

The Content Moderation image moderation service based on the large model is undergoing rapid upgrades. If you have any feedback or suggestions, you can contact your business manager.

1. Use case

Alibaba Cloud has customized and trained the Tongyi moderation large model for image content risk scenarios. This model is applied in the Content Moderation service to provide the following large model-based image moderation service:

  • Image Moderation for Large and Small Model Integration: This service integrates the capabilities of large image moderation models and expert models to comprehensively detect non-compliant content in images, such as pornography, suggestive content, politically sensitive content, terrorism and violence, prohibited items, religious content, spam and ads, and other undesirable content. This fusion of models provides more effective image moderation with stronger detection capabilities and richer labels.

2. Service description

The large model-based moderation of Image Moderation 2.0 supports the following services:

Service: Image Moderation for Large and Small Model Integration (postImageCheckByVL_global)

Detection content: This service combines the capabilities of the large image moderation model and expert models to comprehensively detect non-compliant content, such as pornography, sexually suggestive material, political information, terrorism, prohibited items, religious content, flags, promotional ads, undesirable content, and abuse. The service can return detailed labels. For a list of detectable items, see the Content Moderation console.

Note: This service is currently available only in the Singapore region. When you invoke this service, select Singapore as the region. Support for other regions is being gradually rolled out.

Use case: You can use this service for image moderation scenarios where the best possible detection result is the top priority. This service is recommended when high-quality results are required.

3. Billing

The Image Moderation 2.0 large model service supports the pay-as-you-go billing method.

Pay-as-you-go

After you enable the Image Moderation 2.0 service, the default billing method is pay-as-you-go. Fees are settled daily based on your actual usage. You are not charged if you do not invoke the service.

Moderation type: Image moderation advanced (image_advanced)

Supported business scenarios (services):

  • Image Moderation for Large and Small Model Integration: postImageCheckByVL_global

Unit price: USD 1.20 per 1,000 calls

Note: Each call to any of the services listed above is billed once, based on the actual number of calls. For example, 100 calls to the Image Moderation for Large and Small Model Integration service cost USD 0.12.

Note

Pay-as-you-go charges for Content Moderation 2.0 are settled once every 24 hours. In the bill details, the moderationType field corresponds to the moderation type. You can view your bills on the Bill Details page.
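For a quick estimate of pay-as-you-go spend at this rate, the arithmetic can be captured in a few lines of Python. The rate is the published USD 1.20 per 1,000 calls; the call counts below are hypothetical examples.

# Rough pay-as-you-go cost estimate for postImageCheckByVL_global.
# The rate comes from the billing table above; call counts are hypothetical.
PRICE_PER_1000_CALLS_USD = 1.20

def estimated_cost_usd(billable_calls: int) -> float:
    return billable_calls / 1000 * PRICE_PER_1000_CALLS_USD

print(estimated_cost_usd(100))      # 0.12, matching the example above
print(estimated_cost_usd(250_000))  # 300.0 for a hypothetical 250,000 calls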

4. User guide

Step 1: Activate the service

Go to Activate Service to activate the Image Moderation 2.0 service.

After you activate the Image Moderation 2.0 service, the default billing method is pay-as-you-go. Fees are settled daily based on your actual usage. You are not charged for the service if you do not invoke it. After you start using the API, the system automatically generates bills based on your usage. For more information, see Billing details.

Step 2: Grant permissions to a RAM user

Before you use an SDK or invoke an API operation, you must grant permissions to a RAM user. You can create an AccessKey pair for an Alibaba Cloud account or a RAM user. When you invoke an API operation, you must use an AccessKey pair to complete identity verification. For more information, see Obtain an AccessKey pair.

  1. Log on to the RAM console using your Alibaba Cloud account or as a RAM administrator.

  2. Create a RAM user. For more information, see Create a RAM user.

  3. Grant the RAM user the AliyunYundunGreenWebFullAccess system policy permission. For more information, see Grant permissions to a RAM user. After you complete the preceding configurations, you can use the RAM user to invoke the Content Moderation API.
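The SDK sketches later in this topic read the RAM user's AccessKey pair from environment variables instead of hard-coding it. The following sketch assumes the ALIBABA_CLOUD_ACCESS_KEY_ID and ALIBABA_CLOUD_ACCESS_KEY_SECRET variable names conventionally read by Alibaba Cloud SDK credential helpers; adapt the names to your own setup if needed.

import os

# Read the RAM user's AccessKey pair from environment variables so that
# credentials are never committed to source code.
access_key_id = os.environ["ALIBABA_CLOUD_ACCESS_KEY_ID"]
access_key_secret = os.environ["ALIBABA_CLOUD_ACCESS_KEY_SECRET"]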

Step 3: Install and integrate the SDK

For more information, see Image Moderation Enhanced 2.0 SDK and Integration Guide. The supported access regions are as follows:

Region: Singapore

Public endpoint: https://green-cip.ap-southeast-1.aliyuncs.com

VPC endpoint: https://green-cip-vpc.ap-southeast-1.aliyuncs.com

Supported services: postImageCheckByVL_global
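As a rough illustration of what a call against the Singapore endpoint looks like, the following Python sketch assumes the alibabacloud_green20220302 and alibabacloud_tea_openapi packages referenced in the Image Moderation Enhanced 2.0 SDK and Integration Guide; confirm the exact package, class, and method names against that guide before relying on it.

import json
import os

from alibabacloud_green20220302.client import Client
from alibabacloud_green20220302 import models as green_models
from alibabacloud_tea_openapi import models as open_api_models

# Build a client that targets the Singapore public endpoint listed above.
config = open_api_models.Config(
    access_key_id=os.environ["ALIBABA_CLOUD_ACCESS_KEY_ID"],
    access_key_secret=os.environ["ALIBABA_CLOUD_ACCESS_KEY_SECRET"],
    endpoint="green-cip.ap-southeast-1.aliyuncs.com",
    region_id="ap-southeast-1",
)
client = Client(config)

# Submit one image URL to the Image Moderation for Large and Small Model
# Integration service (postImageCheckByVL_global).
request = green_models.ImageModerationRequest(
    service="postImageCheckByVL_global",
    service_parameters=json.dumps({
        "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
        "dataId": "img0307****",
    }),
)
response = client.image_moderation(request)
print(response.body.code, response.body.msg)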

Step 4: Adjust image moderation rules (Optional)

In the Content Moderation console, you can adjust the detection rules for the large image moderation model. These adjustments include enabling or disabling detection scopes, copying a service, configuring custom image libraries, configuring custom vocabularies, querying detection records, and querying usage. For more information, see Console Operation Guide.

5. API operations

Instructions

You can invoke this API operation to create an image content moderation task. For information about how to construct an HTTP request, or to use a pre-constructed HTTP request directly, see the Integration Guide.

  • API operation: ImageModeration

  • Billing information: This is a paid API. You are charged only for requests that return an HTTP status code of 200. You are not charged for requests that return other error codes. For more information about the billing method, see Billing details.

  • Image requirements:

    • Supported image formats: PNG, JPG, JPEG, BMP, WEBP, TIFF, SVG, HEIC (the longest edge must be less than 8,192 pixels), GIF (the first frame is used), and ICO (the last image is used).

    • The image size cannot exceed 20 MB. The height or width cannot exceed 16,384 pixels, and the total number of pixels cannot exceed 250 million. We recommend that the image resolution be greater than 200 × 200 pixels because low resolution can affect the performance of the content moderation algorithm.

    • The image download time is limited to 3 seconds. If the download time exceeds 3 seconds, a download timeout error is returned.
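Rejecting obviously unsuitable files before calling the API avoids avoidable error responses. The following sketch applies the documented limits to a local file; it assumes the Pillow library is installed for reading image dimensions, which is not part of the Content Moderation SDK.

import os

from PIL import Image  # Pillow, used only to read image dimensions locally

MAX_FILE_BYTES = 20 * 1024 * 1024   # 20 MB size limit
MAX_EDGE_PIXELS = 16384             # height or width limit
MAX_TOTAL_PIXELS = 250_000_000      # total pixel limit
ALLOWED_EXTENSIONS = {
    ".png", ".jpg", ".jpeg", ".bmp", ".webp",
    ".tiff", ".svg", ".heic", ".gif", ".ico",
}

def passes_precheck(path: str) -> bool:
    """Return True if a local image appears to satisfy the documented limits."""
    if os.path.splitext(path)[1].lower() not in ALLOWED_EXTENSIONS:
        return False
    if os.path.getsize(path) > MAX_FILE_BYTES:
        return False
    # Formats such as SVG or HEIC may need extra plugins to open locally.
    with Image.open(path) as img:
        width, height = img.size
    if max(width, height) > MAX_EDGE_PIXELS or width * height > MAX_TOTAL_PIXELS:
        return False
    # Resolutions below 200 x 200 are accepted but may reduce detection quality.
    return True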

QPS limits

The queries per second (QPS) limit for a single user for this API operation is 20 calls per second. If you exceed this limit, API calls are throttled, which may affect your business, so plan your call rate accordingly. If you have a high volume of traffic or require an urgent capacity expansion, contact your business manager.
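To stay under the 20 calls/second quota proactively instead of reacting to throttled responses, a simple client-side limiter such as the following sketch can pace a batch job. It assumes a single-process, single-threaded caller; distributed callers need a shared limiter.

import time

class RateLimiter:
    """Space out calls so that no more than max_qps of them start per second."""

    def __init__(self, max_qps: int = 20):
        self.min_interval = 1.0 / max_qps
        self.last_call = 0.0

    def wait(self) -> None:
        now = time.monotonic()
        sleep_for = self.min_interval - (now - self.last_call)
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last_call = time.monotonic()

limiter = RateLimiter(max_qps=20)
# Call limiter.wait() immediately before each ImageModeration request.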

Debugging

Before integration, you can also use Alibaba Cloud OpenAPI to test the Image Moderation 2.0 API online, view sample code and SDK dependency information, and get an overview of the API usage and parameters.

Important

The online debugging feature invokes the Content Moderation API operation using your current logon account. Therefore, the number of calls is included in your billed usage.

Request parameters

For information about the common request parameters that must be included in requests, see the Access Guide.

The request body is a JSON struct that contains the following fields:

Service (String, required)

Example: postImageCheckByVL_global

The detection service. Valid values:

  • postImageCheckByVL_global: Image Moderation for Large and Small Model Integration

ServiceParameters (JSONString, required)

A set of parameters for the content detection object, provided as a JSON string. For a description of each parameter, see Table 1. ServiceParameters.

Table 1. ServiceParameters

imageUrl (String)

Required: Yes. Image Moderation 2.0 supports the following three methods for submitting images:

  • To moderate an image using its URL, pass the imageUrl parameter.

  • To moderate an image using Object Storage Service (OSS) authorization, pass the ossBucketName, ossObjectName, and ossRegionId parameters.

  • To moderate a local image, use the SDK's integrated feature for uploading local images. This method does not use your OSS storage space, and the uploaded files are stored for only 30 minutes. For code examples, see the Image Moderation Enhanced Edition 2.0 SDK and Access Guide.

Example: https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png

The URL of the object to be moderated. Make sure that the URL is accessible over the public network. The URL cannot exceed 2,048 characters in length.

Note: The URL cannot contain Chinese characters. Pass only one URL per request.

ossBucketName (String)

Example: bucket_01

The name of the authorized OSS bucket.

Note: To use the internal URL of an OSS image, you must first use your Alibaba Cloud account to go to the Cloud Resource Access Authorization page and grant the AliyunCIPScanOSSRole permission.

ossObjectName (String)

Example: 2022023/04/24/test.jpg

The name of the file in the authorized OSS bucket.

ossRegionId (String)

Example: cn-beijing

The region where the OSS bucket is located.

dataId (String, optional)

Example: img123****

The data ID that corresponds to the moderated object. The ID can contain uppercase and lowercase letters, digits, underscores (_), hyphens (-), and periods (.). It can be up to 64 characters in length and can be used to uniquely identify your business data.

infoType (String, optional)

Example: vlContent

The auxiliary information to retrieve. Valid values:

  • customImage: information about hits in custom image libraries

referer (String, optional)

Example: www.aliyun.com

The referer request header, used for scenarios such as hotlink protection. The value can be up to 256 characters in length.
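Because ServiceParameters is passed as a JSON string, it is usually easiest to build it as a dictionary and serialize it. The following sketch shows the URL-based and OSS-based submission methods side by side, reusing the example values from the table above; the resulting string is what you pass in the ServiceParameters request parameter.

import json

# Method 1: submit a publicly accessible image URL.
url_params = json.dumps({
    "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
    "dataId": "img123****",
})

# Method 2: submit an object in an authorized OSS bucket
# (requires the AliyunCIPScanOSSRole authorization described above).
oss_params = json.dumps({
    "ossBucketName": "bucket_01",
    "ossObjectName": "2022023/04/24/test.jpg",
    "ossRegionId": "cn-beijing",
    "dataId": "img123****",
})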

Response data

RequestId (String)

Example: 70ED13B0-BC22-576D-9CCF-1CC12FEAC477

The ID of the current request. This unique identifier is generated by Alibaba Cloud for the request and can be used for troubleshooting and issue tracking.

Data (Object)

The results of the image content detection. For more information, see Table 2. Data.

Code (Integer)

Example: 200

The returned status code. For more information, see Code descriptions.

Msg (String)

Example: OK

The response message for the current request.

Table 2. Data

Result (Array)

The result parameters of the image moderation, such as the risk label and confidence score. For more information, see Table 3. result.

RiskLevel (String)

Example: high

The risk level of the image, based on the label with the highest risk. Valid values:

  • high: high risk

  • medium: medium risk

  • low: low risk

  • none: no risk detected

Note: Handle high-risk content immediately. Manually review medium-risk content. Process low-risk content only when high recall is required; otherwise, handle it in the same way as content with no detected risk. You can configure risk scores in the Content Moderation console.

DataId (String)

Example: img123******

The data ID that corresponds to the moderated object.

Note: If you passed a dataId in the request parameters, the corresponding dataId is returned here.

Ext (Object)

Supplementary information for the image. For more information, see Table 4. Ext.

Table 3. result

Label (String)

Example: tm_auto

The label returned after image content moderation. Multiple labels and scores may be detected for the same image. For information about supported labels, see Risk label definitions in this topic.

Confidence (Float)

Example: 81.22

The confidence score. It ranges from 0 to 100 and is accurate to two decimal places. Some labels do not have a confidence score. For more information, see Risk label definitions.

Description (String)

Example: Fireworks content

A description of the Label field.

Important: This field provides an explanation for the Label field and may be subject to change. When processing results, we recommend that you use the Label field, not this field, to determine the action to take.

RiskLevel (String)

Example: high

The risk level of the current label, based on the configured high and low risk scores. Valid values:

  • high: high risk

  • medium: medium risk

  • low: low risk

  • none: no risk detected
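The note in Table 2 recommends different handling for each risk level. A minimal dispatcher over the Data.Result array might look like the following sketch; block and queue_for_review are hypothetical placeholders for your own business logic, not part of the API.

def block(label: str) -> None:
    """Hypothetical helper: reject or remove the image."""
    print(f"blocked: {label}")

def queue_for_review(label: str) -> None:
    """Hypothetical helper: send the image to manual review."""
    print(f"queued for review: {label}")

def handle_moderation_data(data: dict, high_recall: bool = False) -> None:
    """Apply the recommended action for each label returned in Data.Result."""
    for item in data.get("Result", []):
        label = item.get("Label")
        risk = item.get("RiskLevel")
        if risk == "high":
            block(label)
        elif risk == "medium":
            queue_for_review(label)
        elif risk == "low" and high_recall:
            queue_for_review(label)
        # "low" without high recall and "none" are treated as no detected risk.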

Auxiliary information in the response

Table 4. Ext

CustomImage (JSONArray)

If a custom image library is hit, information about the hit custom image library is returned. For more information, see Table 5. CustomImage.

Table 5. CustomImage

LibId (String)

Example: lib0001

The ID of the custom image library that was hit.

LibName (String)

Example: Custom Image Library A

The name of the custom image library that was hit.

ImageId (String)

Example: 20240307

The ID of the custom image that was hit.

Examples

Request example

{
    "Service": "postImageCheckByVL_global",
    "ServiceParameters": {
        "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
        "dataId": "img0307****"
    }
}

Response examples

Note

The request and response examples in this document are formatted for readability. The actual responses do not include line breaks or indentation.
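Based on the response fields documented in this topic, a successful response has roughly the following shape. The values below are illustrative placeholders only; the actual labels, scores, and IDs depend on the moderated image.

{
    "Code": 200,
    "Msg": "OK",
    "RequestId": "70ED13B0-BC22-576D-9CCF-1CC12FEAC477",
    "Data": {
        "DataId": "img0307****",
        "RiskLevel": "high",
        "Result": [
            {
                "Label": "violent_explosion",
                "Confidence": 81.22,
                "Description": "Fireworks content",
                "RiskLevel": "high"
            }
        ]
    }
}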

Risk label definitions

The following are the risk label values, their corresponding score ranges, and their meanings. You can enable or disable each risk label in the console. Some risk labels also provide configuration options for a more granular detection scope. For more information, see the Console User Guide.

Note

We recommend that you store the risk labels and confidence scores returned by the system for a certain period. This data can be used as a reference for future content administration. You can set priorities for manual review or annotation and implement tiered and categorized content administration measures based on the risk labels.

Table 6. Supported labels for the Image Moderation for Large and Small Model Integration (postImageCheckByVL_global)

For each of the following label values (label), the confidence score (confidence) is a value from 0 to 100; a higher score indicates a higher confidence level.

  • pornographic_adultContent: The image may contain adult pornographic content.
  • pornographic_cartoon: The image may contain pornographic cartoon content.
  • pornographic_adultToys: The image may contain adult toys.
  • pornographic_artwork: The image may contain pornographic artwork.
  • pornographic_underage: The image may contain child pornography.
  • pornographic_adultContent_tii: The text in the image may contain pornographic content.
  • pornographic_suggestive_tii: The text in the image is classified as vulgar.
  • pornographic_o_tii: The text in the image contains LGBT-related content.
  • pornographic_organs_tii: The text in the image describes sexual organs.
  • pornographic_adultToys_tii: The text in the image is related to adult toys.
  • sexual_suggestiveContent: The image may contain vulgar or sexually suggestive content.
  • sexual_femaleUnderwear: The image may contain female underwear or swimwear.
  • sexual_cleavage: The image may contain sexually suggestive female cleavage.
  • sexual_maleTopless: The image contains topless males.
  • sexual_cartoon: The image may contain sexually suggestive cartoon content.
  • sexual_femaleShoulder: The image may contain sexually suggestive content featuring female shoulders.
  • sexual_femaleLeg: The image may contain sexually suggestive content featuring female legs.
  • sexual_pregnancy: The image may contain maternity photos or breastfeeding content.
  • sexual_feet: The image may contain sexually suggestive content featuring feet.
  • sexual_kiss: The image may depict kissing.
  • sexual_intimacy: The image may contain intimate behavior.
  • sexual_intimacyCartoon: The image may contain intimate scenes in cartoons or anime.
  • political_historicalNihility: The image may contain content related to historical nihilism or sensitive historical events.
  • political_historicalNihility_tii: The text in the image may contain historical nihilism.
  • political_politicalFigure_1: The image may contain current or former political leaders.
  • political_politicalFigure_2: The image may contain family members of political leaders.
  • political_politicalFigure_3: The image may contain provincial or municipal government officials.
  • political_politicalFigure_4: The image may contain foreign leaders and their family members.
  • political_politicalFigure_name_tii: The text in the image contains names of political leaders.
  • political_prohibitedPerson_1: The image may contain disgraced national officials.
  • political_prohibitedPerson_2: The image may contain disgraced officials at the provincial or municipal level.
  • political_prohibitedPerson_tii: The text in the image may contain names of disgraced officials.
  • political_taintedCelebrity: The image may contain public figures associated with scandals or significant negative press.
  • political_taintedCelebrity_tii: The text in the image may contain the names of celebrities involved in scandals.
  • political_CNFlag: The image may contain the Chinese national flag.
  • political_CNMap: The image may contain a map of China.
  • political_logo: The image may contain logos of banned media outlets.
  • political_outfit: The image may contain military uniforms, police uniforms, or combat attire.
  • political_badge: The image contains national or party emblems.
  • political_racism_tii: The text in the image may contain special expressions. For more information, see the Content Moderation console (China) or the Content Moderation console (International).
  • violent_explosion: The image may contain elements of explosions or fireworks.
  • violent_armedForces: The image may contain terrorist organizations.
  • violent_burning: The image may contain burning scenes.
  • violent_weapon: The image may contain military equipment.
  • violent_crowding: The image may contain a crowd gathering.
  • violent_gun: The image contains guns.
  • violent_knives: The image may contain knives.
  • violent_horrific: The image may contain violent or horrific content.
  • violent_nazi: The image may contain Nazi-related content.
  • violent_bloody: The image may contain gory scenes.
  • violent_extremistGroups_tii: The text in the image contains content related to violent extremist organizations.
  • violent_extremistIncident_tii: The text in the image is related to terrorist incidents.
  • violence_weapons_tii: The text in the image describes guns, ammunition, or weapons.
  • violent_ACU: The image may contain combat uniforms.
  • contraband_drug: The image may contain drugs or medications.
  • contraband_drug_tii: The text in the image may contain content related to illegal drugs.
  • contraband_gamble: The image may contain gambling-related content.
  • contraband_gamble_tii: The text in the image may be related to gambling.
  • contraband_certificate_tii: The text in the image may contain advertisements for obtaining certificates or offers to cash out.
  • religion_flag: The image may contain religious flags or elements.
  • religion_clothing: The image may contain specific clothing or symbols. For more information, see the Content Moderation console.
  • religion_logo: The image may contain religious logos.
  • religion_taboo1_tii
  • religion_taboo2_tii
  • flag_country: The image may contain flag-related content.
  • pt_logotoSocialNetwork: The image contains watermarks from popular social media platforms.
  • QR code: The image contains a QR code.
  • pt_logo: The image may contain logos.
  • pt_toDirectContact_tii: The text in the image contains contact information intended to direct users to external communication channels.
  • pt_custom_01: Custom tag 01.
  • pt_custom_02: Custom tag 02.
  • inappropriate_smoking: The image may contain smoking-related content.
  • inappropriate_drinking: The image may contain alcohol-related content.
  • inappropriate_tattoo: The image contains inappropriate tattoos.
  • inappropriate_middleFinger: The image may contain the middle finger gesture.
  • inappropriate_foodWasting: The image contains content related to food waste.
  • profanity_oral_tii: The text in the image contains profanity or vulgar slang.
  • profanity_offensive_tii: The text in the image contains profanity or highly offensive language.

You can also configure a custom image library for each risk label. If a detected image is highly similar to an image in your custom library, the system returns the corresponding label with the _lib suffix. For example, a custom library match for the violent_explosion label returns violent_explosion_lib in the label parameter, and the corresponding confidence parameter indicates the degree of similarity.

If the system finds no risks in an image, or if the image is highly similar to an image that you have exempted from review, the system returns the label values and confidence scores described in the following list.

  • nonLabel: The confidence field is not returned. No risk was detected in the image, or you have disabled all check items. For more information, see the Content Moderation console.
  • nonLabel_lib: The confidence score ranges from 0 to 100; a higher score indicates a higher confidence level. The image is highly similar to an image that you have exempted from review. For more information, see the Content Moderation console.

Code descriptions

This section describes the response codes returned by the API. Only requests that return a 200 status code are billed.

  • 200: The request is successful.
  • 400: A request parameter is empty.
  • 401: A request parameter is invalid.
  • 402: The length of a request parameter does not meet API requirements. Check and modify the parameter.
  • 403: The request exceeds the queries per second (QPS) limit. Check and adjust the concurrency.
  • 404: An error occurred while downloading the input image. Check the image or retry the request.
  • 405: The download of the input image timed out. This may be because the image is inaccessible. Verify that the image is accessible and then retry the request.
  • 406: The input image is too large. Resize the image and retry the request.
  • 407: The image format is not supported. Use a supported format and retry the request.
  • 408: The account does not have permission to call the API. This can happen if the service is not activated for the account, the account has an overdue payment, or the account is not authorized to access the API.
  • 500: A system error occurred.
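When the API is wrapped in a batch pipeline, it helps to separate codes that are worth retrying from codes that require fixing the request or account first. The following sketch groups the documented codes that way; the retry policy itself (attempt count, backoff) is an assumption rather than part of the API contract.

import time

RETRYABLE_CODES = {403, 404, 405, 500}            # throttling, download issues, system errors
PERMANENT_CODES = {400, 401, 402, 406, 407, 408}  # fix the request or account before retrying

def call_with_retry(send_request, max_attempts: int = 3):
    """send_request is a caller-supplied function that returns the parsed response body as a dict."""
    for attempt in range(1, max_attempts + 1):
        response = send_request()
        code = response.get("Code")
        if code == 200:
            return response                  # only responses with code 200 are billed
        if code in PERMANENT_CODES or attempt == max_attempts:
            raise RuntimeError(f"image moderation failed with code {code}")
        time.sleep(2 ** attempt)             # simple exponential backoff before the next attempt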