
Content Moderation: Synchronous moderation API for Image Moderation 2.0

Last Updated: Dec 25, 2025

API description

You can use the Image Moderation 2.0 API to detect whether an image contains content that violates regulations, disrupts platform order, or negatively affects the user experience. The API supports more than 40 content risk labels and more than 40 risk control items. Based on your business scenario, your platform's content administration rules, and the risk labels and confidence scores returned by the API, you can take further moderation or administration measures for specific image content. For more information, see Introduction to Image Moderation 2.0 and Billing.

Connection guide

  1. Create an Alibaba Cloud account. Register an account and follow the on-screen instructions.

  2. Activate the pay-as-you-go Content Moderation service. Activation is free of charge; after you use the service, you are automatically charged based on your usage. For more information, see Billing details.

  3. Create an AccessKey pair. Use RAM to create an AccessKey pair. If you use the AccessKey pair of a RAM user, you must use your Alibaba Cloud account to grant the AliyunYundunGreenWebFullAccess permission to the RAM user. For more information, see RAM authorization.

  4. Integrate the API. We recommend that you use an SDK to call the API. For more information, see Image Moderation 2.0 SDKs and usage guide.

Usage notes

You can call the Image Moderation 2.0 API to create an image moderation task. For more information about how to construct an HTTP request, see Make HTTPS calls. You can also use an SDK to make requests. For more information, see Image Moderation 2.0 SDKs and usage guide.

  • API Operation: ImageModeration

  • Supported regions and endpoints (for a programmatic endpoint lookup, see the sketch after this list):

    | Region | Public endpoint | VPC endpoint | Supported services |
    | --- | --- | --- | --- |
    | Singapore | https://green-cip.ap-southeast-1.aliyuncs.com | https://green-cip-vpc.ap-southeast-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
    | UK (London) | https://green-cip.eu-west-1.aliyuncs.com | Not available | |
    | US (Virginia) | https://green-cip.us-east-1.aliyuncs.com | https://green-cip-vpc.us-east-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
    | US (Silicon Valley) | https://green-cip.us-west-1.aliyuncs.com | Not available | |
    | Germany (Frankfurt) | https://green-cip.eu-central-1.aliyuncs.com | Not available | |

    Note: The UK (London) region reuses the console configurations of the Singapore region. The US (Silicon Valley) and Germany (Frankfurt) regions reuse the console configurations of the US (Virginia) region.

  • Billing:

    This is a billable API operation. You are charged only for requests that return an HTTP status code of 200. Requests that return other error codes are not billed. For more information about billing methods, see Billing details.

  • Image Requirements:

    • The following image formats are supported: PNG, JPG, JPEG, BMP, WEBP, TIFF, SVG, HEIF (the longest edge must be less than 8,192 px), GIF (only the first frame is used), and ICO (only the last image is used).

    • An image cannot exceed 20 MB in size. The height or width cannot exceed 16,384 px, and the total number of pixels cannot exceed 167 million. We recommend that the image resolution be greater than 200 × 200 px. A low resolution may affect the performance of the moderation algorithm.

    • The image download time is limited to 3 seconds. If the download takes longer than 3 seconds, a timeout error is returned.
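Endpoint selection can be table-driven on the client side. The following is a minimal sketch (Python) built from the endpoint table above; the region IDs are the ones embedded in the endpoint host names, and the helper name is our own:

```python
# Public endpoints from the table above, keyed by region ID.
PUBLIC_ENDPOINTS = {
    "ap-southeast-1": "green-cip.ap-southeast-1.aliyuncs.com",  # Singapore
    "eu-west-1": "green-cip.eu-west-1.aliyuncs.com",            # UK (London)
    "us-east-1": "green-cip.us-east-1.aliyuncs.com",            # US (Virginia)
    "us-west-1": "green-cip.us-west-1.aliyuncs.com",            # US (Silicon Valley)
    "eu-central-1": "green-cip.eu-central-1.aliyuncs.com",      # Germany (Frankfurt)
}

# VPC endpoints exist only for Singapore and US (Virginia).
VPC_ENDPOINTS = {
    "ap-southeast-1": "green-cip-vpc.ap-southeast-1.aliyuncs.com",
    "us-east-1": "green-cip-vpc.us-east-1.aliyuncs.com",
}

def endpoint_for(region_id: str, in_vpc: bool = False) -> str:
    """Prefer the VPC endpoint when the caller runs in the same-region VPC."""
    if in_vpc and region_id in VPC_ENDPOINTS:
        return VPC_ENDPOINTS[region_id]
    return PUBLIC_ENDPOINTS[region_id]

print(endpoint_for("ap-southeast-1", in_vpc=True))
```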

QPS limit

The queries per second (QPS) limit for a single user is 100 calls/second. If you exceed this limit, API calls are throttled, which may affect your business. If your business requires a higher QPS or you have an urgent scale-out need, contact your business manager.
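Staying below the limit on the client side avoids throttled calls. The following is an illustrative sketch (Python); the 100 calls/second figure comes from this section, while the limiter itself is a generic pattern, not part of the API:

```python
import threading
import time

class RateLimiter:
    """Naive limiter that spaces calls at least 1/max_qps seconds apart."""

    def __init__(self, max_qps: float = 100.0):
        self.min_interval = 1.0 / max_qps
        self._lock = threading.Lock()
        self._next_allowed = 0.0

    def wait(self) -> None:
        with self._lock:
            now = time.monotonic()
            delay = self._next_allowed - now
            self._next_allowed = max(now, self._next_allowed) + self.min_interval
        if delay > 0:
            time.sleep(delay)

limiter = RateLimiter(max_qps=100)
limiter.wait()  # call before each ImageModeration request
```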

Debug

Before you integrate, you can use Alibaba Cloud OpenAPI Explorer to debug the Image Moderation 2.0 API online. OpenAPI Explorer also shows sample code and SDK dependency information, which gives you an overview of how to use the API and its parameters.

Important

The online debugging feature calls the Content Moderation API based on the currently logged-on account. Therefore, the number of calls is included in the billable usage of the account.

Request parameters

For more information about the common request parameters that must be included in a request, see Common parameters.

The request body is a JSON struct that contains the following fields:

| Name | Type | Required | Example | Description |
| --- | --- | --- | --- | --- |
| Service | String | Yes | baselineCheck_global | The moderation service to use. Valid values: baselineCheck_global (baseline check) and aigcDetector_global (AIGC image detection). For the differences between the services, see Service description. For the AIGC-dedicated service, see AIGC scenario detection service. The international version can be used only in regions outside China. |
| ServiceParameters | JSONString | Yes | | The parameters that describe the moderation object, passed as a JSON string. For the fields of this string, see Table 1. ServiceParameters. |
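Note that ServiceParameters is a JSON string, not a nested object, so it must be serialized before the request is sent. A minimal sketch (Python; the URL and dataId are placeholders):

```python
import json

# ServiceParameters must be serialized into a JSON *string*.
request_body = {
    "Service": "baselineCheck_global",
    "ServiceParameters": json.dumps({
        "imageUrl": "https://example.com/to-check.png",  # placeholder URL
        "dataId": "img123",
    }),
}
print(request_body["ServiceParameters"])  # a string, not a dict
```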

Table 1. ServiceParameters

Image Moderation 2.0 supports three ways to pass an image. Select one of the following (see the sketch after this table):

  • Pass the image URL for moderation by setting imageUrl.

  • Use OSS authorization for moderation by passing ossBucketName, ossObjectName, and ossRegionId together.

  • Upload a local image for moderation. This method does not use your OSS storage space, and the file is stored for only 30 minutes. The SDKs integrate the local image upload feature. For code examples, see Image Moderation 2.0 SDKs and usage guide.

| Name | Type | Required | Example | Description |
| --- | --- | --- | --- | --- |
| imageUrl | String | Yes, if you pass the image by URL | https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png | The URL of the object to be moderated. Make sure that the URL is accessible over the public network and is no longer than 2,048 characters. Do not include Chinese characters in the URL, and submit only one URL per request. If downloads time out, use OSS authorization instead. |
| ossBucketName | String | Yes, if you use OSS authorization | bucket_01 | The name of the authorized OSS bucket. Before you use OSS authorization, you must use your Alibaba Cloud account to go to the Cloud Resource Access Authorization page and grant the AliyunCIPScanOSSRole role. |
| ossObjectName | String | Yes, if you use OSS authorization | 2023/04/24/test.jpg | The name of the file in the authorized OSS bucket. Pass the original object name from OSS; you cannot append image processing parameters. To use image processing parameters, pass the image by imageUrl instead. If the object name contains Chinese characters or spaces, pass it as is; it does not need to be URL-encoded. |
| ossRegionId | String | Yes, if you use OSS authorization | cn-beijing | The region where the OSS bucket is located. |
| dataId | String | No | img123**** | The data ID of the object to be moderated. It can contain uppercase and lowercase letters, digits, underscores (_), hyphens (-), and periods (.), can be up to 64 characters long, and can be used to uniquely identify your business data. |
| referer | String | No | www.aliyun.com | The Referer request header, used for scenarios such as hotlink protection. The value cannot exceed 256 characters. |
| infoType | String | Yes | customImage,textInImage | The auxiliary information to return. Valid values: customImage (hits in custom image libraries), textInImage (text detected in the image), publicFigure (identified figures), and logoData (identified logos). Separate multiple values with commas; for example, "customImage,textInImage" returns both custom image library hits and the text in the image. Public figure and logo information is returned only by advanced image moderation services. For more information, see Service description. |
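The three input methods map to mutually exclusive field sets inside ServiceParameters. A sketch of the two URL/OSS variants (all values are placeholders; the local-upload variant is handled by the SDKs):

```python
# Variant 1: pass the image by public URL.
params_by_url = {
    "imageUrl": "https://example.com/img.png",
    "infoType": "customImage,textInImage",  # auxiliary info to return
    "dataId": "img-001",
}

# Variant 2: OSS authorization; the three oss* fields must be passed together.
params_by_oss = {
    "ossBucketName": "bucket_01",
    "ossObjectName": "2023/04/24/test.jpg",
    "ossRegionId": "cn-beijing",
    "dataId": "img-002",
}
```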

Response data

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| RequestId | String | 70ED13B0-BC22-576D-9CCF-1CC12FEAC477 | The ID of the request. Alibaba Cloud generates a unique ID for each request, which can be used for troubleshooting and issue tracking. |
| Data | Object | | The result of the image moderation. For more information, see Table 2. Data. |
| Code | Integer | 200 | The returned status code. For more information, see Code descriptions. |
| Msg | String | OK | The response message for the request. |

Table 2. Data

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Result | Array | | The moderation results, such as risk labels and confidence scores. For more information, see Table 3. result. |
| RiskLevel | String | high | The risk level of the image, based on the label with the highest risk. Valid values: high, medium, low, and none (no risk detected). We recommend that you act immediately on high-risk content and manually review medium-risk content. Process low-risk content only if you have high recall requirements; otherwise, treat it the same as content with no detected risk (see the sketch after this table). You can configure risk scores in the Content Moderation console. |
| DataId | String | img123****** | The data ID of the moderated object. If you passed a dataId in the request, the same dataId is returned here. |
| Ext | Object | | Auxiliary reference information for the image. For more information, see Table 4. Ext. |
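The tiered handling recommended for RiskLevel can be expressed directly in code. A minimal sketch (Python); the action names are placeholders for your own business logic:

```python
def handle_moderation(data: dict) -> str:
    """Map the returned RiskLevel to a follow-up action (illustrative only)."""
    risk = data.get("RiskLevel", "none")
    if risk == "high":
        return "block"          # take immediate action
    if risk == "medium":
        return "manual_review"  # queue for human review
    if risk == "low":
        return "sample_review"  # only if you need high recall
    return "pass"               # none: no risk detected

print(handle_moderation({"RiskLevel": "high"}))  # -> block
```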

Table 3. result

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Label | String | violent_explosion | The label returned for the moderated image. Multiple labels and scores may be returned for a single image. For the supported labels, see Risk label definitions. |
| Confidence | Float | 81.22 | The confidence score. Valid values: 0 to 100, accurate to two decimal places. Some labels do not have a confidence score. For more information, see Risk label definitions. |
| Description | String | Fireworks content | A human-readable description of the Label field. Important: this description explains the label and may change. Base your processing logic on the Label field, not on this description. |
| RiskLevel | String | high | The risk level of the current label, based on the configured high and low risk scores. Valid values: high, medium, low, and none. |

Auxiliary information returned

Table 4. Ext

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| CustomImage | JSONArray | | Information about the custom image libraries that were hit, returned when a custom image library is matched. For more information, see Table 5. CustomImage. |
| TextInImage | Object | | Information about the text detected in the image. For more information, see Table 6. TextInImage. |
| PublicFigure | JSONArray | | If the image contains a specific figure, the identifier of the figure. For more information, see Table 9. PublicFigure. |
| LogoData | JSONArray | | Information about the logos that were hit. For more information, see Table 10. LogoData. |

Table 5. CustomImage

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| LibId | String | lib0001 | The ID of the custom image library that was hit. |
| LibName | String | Custom Image Library A | The name of the custom image library that was hit. |
| ImageId | String | 20240307 | The ID of the custom image that was hit. |

Table 6. TextInImage

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| OcrResult | JSONArray | | Information about each line of text recognized in the image. For more information, see Table 7. OcrResult. |
| RiskWord | StringArray | ["risk_word_1", "risk_word_2"] | The risk fragments that were hit in the text. Returned when a label of the tii type is hit. |
| CustomText | JSONArray | | Information about the custom term libraries that were hit, returned when a custom term library is matched. For more information, see Table 8. CustomText. |

Table 7. OcrResult

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Text | String | Identified text line 1 | The content of a text line identified in the image. |

Table 8. CustomText

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| LibId | String | test20240307 | The ID of the custom keyword library that was hit. |
| LibName | String | Custom Keyword Library A | The name of the custom keyword library that was hit. |
| KeyWords | String | Keyword 1 | The custom keyword that was hit. |

Table 9. PublicFigure

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| FigureName | String | John Doe | The name of the identified person. |
| FigureId | String | xxx001 | The code of the identified person. |
| Location | JSONArray | | The location of the figure in the image. For more information, see Table 12. Location. |

Note: For some individuals only a code is returned; for others, only the name. We recommend that you read FigureName first and fall back to FigureId if the name is empty.

Table 10. LogoData

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Logo | JSONArray | | Logo information. For more information, see Table 11. Logo. |
| Location | Object | | The location of the logo in the image. For more information, see Table 12. Location. |

Table 11. Logo

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| Name | String | DingTalk | The logo name. |
| Label | String | logo_sns | The label that was hit. |
| Confidence | Float | 88.18 | The confidence score. Valid values: 0 to 100. |

Table 12. Location

| Name | Type | Example | Description |
| --- | --- | --- | --- |
| X | Float | 41 | The horizontal distance, in pixels, from the upper-left corner of the image to the upper-left corner of the detected area. |
| Y | Float | 84 | The vertical distance, in pixels, from the upper-left corner of the image to the upper-left corner of the detected area. |
| W | Float | 83 | The width of the detected area, in pixels. |
| H | Float | 26 | The height of the detected area, in pixels. |
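To act on a returned Location, you can cut the reported region out of the original image. A small sketch using the Pillow library (our choice for illustration, not required by the API); the field names match Table 12:

```python
from PIL import Image

def crop_location(image_path: str, loc: dict) -> Image.Image:
    """Crop the area described by a Location object (X, Y, W, H in pixels)."""
    img = Image.open(image_path)
    x, y = int(loc["X"]), int(loc["Y"])
    return img.crop((x, y, x + int(loc["W"]), y + int(loc["H"])))

region = crop_location("input.png", {"X": 41, "Y": 84, "W": 83, "H": 26})
region.save("detected_region.png")
```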

Examples

Request example

{
    "Service": "baselineCheck_global",
    "ServiceParameters": {
        "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
        "dataId": "img123****"
    }
}
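For reference, the request above can be sent with the Python SDK for the 2022-03-02 version of the API. The following is a hedged sketch: the package and class names (alibabacloud_green20220302, ImageModerationRequest) reflect the SDK at the time of writing, and the credentials and endpoint are placeholders; verify the details against Image Moderation 2.0 SDKs and usage guide:

```python
import json

from alibabacloud_green20220302 import models
from alibabacloud_green20220302.client import Client
from alibabacloud_tea_openapi import models as open_api_models

# Placeholders: load real credentials from a secure store, not source code.
config = open_api_models.Config(
    access_key_id="<your-access-key-id>",
    access_key_secret="<your-access-key-secret>",
    endpoint="green-cip.ap-southeast-1.aliyuncs.com",  # Singapore public endpoint
)
client = Client(config)

request = models.ImageModerationRequest(
    service="baselineCheck_global",
    # ServiceParameters is sent as a JSON string.
    service_parameters=json.dumps({
        "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
        "dataId": "img123",
    }),
)
response = client.image_moderation(request)
print(response.body.code, response.body.msg)
```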

Response example

  • If the system detects risky content, the following response is returned:

    {
        "Msg": "OK",
        "Code": 200,
        "Data": {
            "DataId": "img123****",
            "Result": [
                {
                    "Label": "pornographic_adultContent",
                    "Confidence": 81,
                    "Description": "Adult pornographic content"
                },
                {
                    "Label": "sexual_partialNudity",
                    "Confidence": 98,
                    "Description": "Partial nudity or sexy"
                },
                {
                    "Label": "violent_explosion",
                    "Confidence": 70,
                    "Description": "Fireworks content"
                },
                {
                    "Label": "violent_explosion_lib",
                    "Confidence": 81,
                    "Description": "Fireworks content_Hit custom library"
                }
            ],
            "RiskLevel": "high"
        },
        "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
    }
  • If the system does not detect any risky content, the following response is returned:

    {
        "Msg": "OK",
        "Code": 200,
        "Data": {
            "DataId": "img123****",
            "Result": [
                {
                    "Label": "nonLabel",
                    "Description": "No risk detected"
                }
            ],
            "RiskLevel": "none"
        },
        "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
    }
  • If the system detects that the image you submitted matches an image in your configured allowlist, the following response is returned:

    {
        "Msg": "OK",
        "Code": 200,
        "Data": {
            "DataId": "img123****",
            "Result": [
                {
                    "Label": "nonLabel_lib",
                    "Confidence": 83,
                    "Description": "Hit allowlist"
                }
            ],
            "RiskLevel": "none"
        },
        "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
    }
  • Auxiliary information response examples:

    • When a custom image library is matched, the following response is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "CustomImage": [
                    {
                        "ImageId": "12345",
                        "LibId": "TEST20240307",
                        "LibName": "Risk Image Library A"
                    }
                ]
            },
            "Result": [
                {
                    "Confidence": 100.0,
                    "Label": "pornographic_adultContent_lib",
                    "Description": "Adult pornographic content_Hit custom library"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "5F572704-4C03-51DF-8957-D77BF6E7444E"
    }
    • When a custom keyword library is matched, the following response is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "TextInImage": {
                    "CustomText": [
                        {
                            "KeyWords": "Custom Keyword 1",
                            "LibId": "TEST20240307",
                            "LibName": "Text Blacklist A"
                        }
                    ],
                    "OcrResult": [
                        {
                            "Text": "Text line 1"
                        },
                        {
                            "Text": "Text line 2"
                        },
                        {
                            "Text": "Text line 3 with custom keyword"
                        }
                    ],
                    "RiskWord": null
                }
            },
            "Result": [
                {
                    "Confidence": 99.0,
                    "Label": "pornographic_adultContent_tii_lib",
                    "Description": "Text contains pornographic content_Hit custom library"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
    • When a text violation in an image is detected, the following response is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "TextInImage": {
                    "CustomText": null,
                    "OcrResult": [
                        {
                            "Text": "Text line 1"
                        },
                        {
                            "Text": "Text line 2"
                        },
                        {
                            "Text": "Text line 3 with risk content"
                        }
                    ],
                    "RiskWord": [
                        "Risk Word 1"
                    ]
                }
            },
            "Result": [
                {
                    "Confidence": 89.15,
                    "Label": "political_politicalFigure_name_tii",
                    "Description": "Text contains leader's name"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
    • When logo information is detected, the following response is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "LogoData": [
                    {
                        "Location": {
                            "H": 44,
                            "W": 100,
                            "X": 45,
                            "Y": 30
                        },
                        "Logo": [
                            {
                                "Confidence": 96.15,
                                "Label": "pt_logotoSocialNetwork",
                                "Name": "CCTV"
                            }
                        ]
                    }
                ]
            },
            "Result": [
                {
                    "Confidence": 96.15,
                    "Label": "pt_logotoSocialNetwork",
                    "Description": "Social platform logo"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
    • When person information is detected, the following response is returned:

    {
        "Code": 200,
        "Data": {
            "DataId": "",
            "Ext": {
                "PublicFigure": [
                    {
                        "FigureId": null,
                        "FigureName": "Yang San",
                        "Location": [
                            {
                                "H": 520,
                                "W": 13,
                                "X": 14,
                                "Y": 999
                            }
                        ]
                    }
                ]
            },
            "Result": [
                {
                    "Confidence": 92.05,
                    "Label": "political_politicalFigure_3",
                    "Description": "Provincial and municipal government personnel"
                }
            ],
            "RiskLevel": "high"
        },
        "Msg": "success",
        "RequestId": "TESTZGL-0307-2024-0728-FOREVER"
    }
Note

The request and response examples in this document are formatted for readability. The actual results are not formatted with line breaks or indentation.
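When you consume responses like the ones above, iterate over Data.Result and, if you requested auxiliary information, read the Ext blocks. A hedged sketch (Python) that walks the fields documented in Tables 2 through 7; note that Confidence can be absent and RiskWord can be null:

```python
def summarize(body: dict) -> None:
    """Print labels, scores, and any OCR or risk-word info from a response body."""
    data = body.get("Data", {})
    for item in data.get("Result", []):
        print(item.get("Label"), item.get("Confidence"))  # Confidence may be absent
    text_info = (data.get("Ext") or {}).get("TextInImage") or {}
    for line in text_info.get("OcrResult") or []:
        print("OCR:", line.get("Text"))
    for word in text_info.get("RiskWord") or []:  # RiskWord can be null
        print("risk word:", word)

summarize({
    "Data": {
        "Result": [{"Label": "violent_explosion", "Confidence": 70}],
        "Ext": {"TextInImage": {"OcrResult": [{"Text": "line 1"}], "RiskWord": None}},
    }
})
```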

Risk label definitions

The following describes the risk label values, their corresponding score ranges, and their meanings. You can enable or disable each risk label in the console. For some risk labels, you can also configure a more granular detection scope. For more information, see the Console User Guide. The labels that are supported by each image service are listed below.

| Scenario | Service and supported labels |
| --- | --- |
| General scenarios | Labels supported by the general baseline check (baselineCheck_global); see Table 13 below |
| AIGC scenarios | Labels supported by AI-generated image detection (aigcDetector_global) |

For the labels returned when no risk is detected or an image in your allowlist is matched, see Table 14 at the end of this section.

Note

We recommend that you store the risk labels and confidence scores returned by the system for a certain period. This lets you reference them for subsequent content governance. You can set priorities for manual review or annotation, and implement layered and categorized content governance measures based on the risk labels.

Table 13. Labels supported by the general baseline check (baselineCheck_global)

For every label in this table, the confidence score (confidence) ranges from 0 to 100; a higher score indicates a higher confidence level.

| Label value | Description |
| --- | --- |
| pornographic_adultContent | The image may contain adult or pornographic content. |
| pornographic_cartoon | The image may contain pornographic cartoon content. |
| pornographic_adultToys | The image may contain adult toy content. |
| pornographic_art | The image may contain pornographic artwork. |
| pornographic_adultContent_tii | The text in the image may contain pornographic content. |
| pornographic_suggestive_tii | The text in the image may contain vulgar content. |
| pornographic_o_tii | The text in the image may contain inappropriate content. For more information, see the Content Moderation console. |
| pornographic_organs_tii | The text in the image may describe sexual organs. |
| pornographic_adultToys_tii | The text in the image may contain content about adult toys. |
| sexual_suggestiveContent | The image may contain vulgar or sexually suggestive content. |
| sexual_femaleUnderwear | The image may contain underwear or swimwear. |
| sexual_cleavage | The image may feature female cleavage. |
| sexual_maleTopless | The image may show shirtless men. |
| sexual_cartoon | The image may contain sexually suggestive animated content. |
| sexual_shoulder | The image may show sexually suggestive shoulders. |
| sexual_femaleLeg | The image may show sexually suggestive legs. |
| sexual_pregnancy | The image may contain pregnancy photos or breastfeeding. |
| sexual_feet | The image may show sexually suggestive feet. |
| sexual_kiss | The image may contain kissing. |
| sexual_intimacy | The image may contain intimate behavior. |
| sexual_intimacyCartoon | The image may contain intimate actions in cartoons or anime. |
| violent_explosion | The image may contain content related to smoke or fire. For more information, see the Content Moderation console. |
| violent_burning | The image may contain burning content. |
| violent_armedForces | The image may contain content related to a terrorist organization. |
| violent_weapon | The image may contain military equipment. |
| violent_crowding | The image may show a crowd gathering. |
| violent_gun | The image may contain guns. |
| violent_knives | The image may contain knives. |
| violent_horrific | The image may contain horrific content. |
| violent_nazi | The image may contain Nazi-related content. |
| violent_bloody | The image may contain bloody content. |
| violent_extremistGroups_tii | The text in the image may contain content about extremist groups. |
| violent_extremistIncident_tii | The text in the image may contain content about extremist incidents. |
| violence_weapons_tii | The text in the image may describe guns and knives. |
| violent_ACU | The image may contain combat uniforms. |
| contraband_drug | The image may contain drug-related content. |
| contraband_drug_tii | The text in the image may describe illegal drugs. |
| contraband_gamble | The image may contain gambling-related content. |
| contraband_gamble_tii | The text in the image may describe gambling. |
| inappropriate_smoking | The image may contain smoking-related content. |
| inappropriate_drinking | The image may contain alcohol-related content. |
| inappropriate_tattoo | The image may contain tattoos. |
| inappropriate_middleFinger | The image may show a middle-finger gesture. |
| inappropriate_foodWasting | The image may contain content about wasting food. |
| profanity_Offensive_tii | The text in the image may contain severe profanity, verbal attacks, or offensive content. |
| profanity_Oral_tii | The text in the image may contain colloquial profanity. |
| religion_clothing | The image may contain special logos and elements. For more information, see the Content Moderation console. |
| religion_logo | The image may contain special logos and elements. For more information, see the Content Moderation console. |
| religion_flag | The image may contain special logos and elements. For more information, see the Content Moderation console. |
| religion_taboo1_tii | The text in the image may contain prohibited content. For more information, see the Content Moderation console. |
| religion_taboo2_tii | The text in the image may contain prohibited content. For more information, see the Content Moderation console. |
| flag_country | The image may contain flag-related content. |
| political_historicalNihility | The image may contain specific content. For more information, see the Content Moderation console. |
| political_historicalNihility_tii | The text in the image may contain specific content. For more information, see the Content Moderation console. |
| political_politicalFigure_1 | The image may contain specific content. For more information, see the Content Moderation console. |
| political_politicalFigure_2 | The image may contain specific content. For more information, see the Content Moderation console. |
| political_politicalFigure_3 | The image may contain specific content. For more information, see the Content Moderation console. |
| political_politicalFigure_4 | The image may contain specific content. For more information, see the Content Moderation console. |
| political_politicalFigure_name_tii | The text in the image may contain specific content. For more information, see the Content Moderation console. |
| political_prohibitedPerson_1 | The image may contain specific content. For more information, see the Content Moderation console. |
| political_prohibitedPerson_2 | The image may contain specific content. For more information, see the Content Moderation console. |
| political_prohibitedPerson_tii | The text in the image may contain specific content. For more information, see the Content Moderation console. |
| political_taintedCelebrity | The image may contain specific content. For more information, see the Content Moderation console. |
| political_taintedCelebrity_tii | The text in the image may contain specific content. For more information, see the Content Moderation console. |
| political_CNFlag | The image may contain specific content. For more information, see the Content Moderation console. |
| political_CNMap | The image may contain specific content. For more information, see the Content Moderation console. |
| political_logo | The image may contain specific content. For more information, see the Content Moderation console. |
| political_outfit | The image may contain specific content. For more information, see the Content Moderation console. |
| political_badge | The image may contain specific content. For more information, see the Content Moderation console. |
| pt_logo | The image may contain a logo. |
| QRCode | The image may contain a QR code. |
| pt_custom_01 | Custom label 01. |
| pt_custom_02 | Custom label 02. |

Note

tii is an abbreviation of "text in image". A label that ends in _tii indicates that a text violation was detected in the image.

In addition, you can configure custom image libraries for the risk labels above. If a moderated image is highly similar to an image in a custom library, the system returns the corresponding risk label with the suffix _lib appended, in the format OriginalRiskLabel_lib. For example, if you configure a custom image library for violent_explosion and a moderated image closely matches an image in that library, the system returns violent_explosion_lib in the Label parameter, and the Confidence parameter indicates the similarity as a score (see the sketch below).
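If your handling logic keys off the base label, the _lib suffix can be stripped programmatically. A small sketch:

```python
def split_lib_suffix(label: str) -> tuple[str, bool]:
    """Return the base risk label and whether it came from a custom library."""
    if label.endswith("_lib"):
        return label[: -len("_lib")], True
    return label, False

print(split_lib_suffix("violent_explosion_lib"))  # ('violent_explosion', True)
```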

Table 14. Labels returned when no risk is detected or an allowlist image is matched

If the system detects no anomalies in the submitted image, or if the image is highly similar to an image in your configured allowlist, the label and confidence score in the following table are returned.

| Label (label) | Confidence score range (confidence) | Description |
| --- | --- | --- |
| nonLabel | Not returned. | No risk was detected in the image, or you have disabled all moderation items. For more information, see the Content Moderation console. |
| nonLabel_lib | 0 to 100. A higher score indicates a higher confidence level. | The image is highly similar to an image in your configured allowlist. For more information, see the Content Moderation console. |

Code descriptions

The following table describes the codes returned by the API. You are charged only for requests that return a status code of 200; requests that return other codes are not billed.

| Code | Description |
| --- | --- |
| 200 | The request was successful. |
| 400 | A request parameter is empty. |
| 401 | A request parameter is invalid. |
| 402 | The length of a request parameter does not meet the API requirements. Check and modify the parameter. |
| 403 | The request exceeds the QPS limit. Check and reduce the call concurrency. |
| 404 | An error occurred while downloading the image. Check the image URL or retry. |
| 405 | The image download timed out, possibly because the image is inaccessible. Check the image and retry. |
| 406 | The image is too large. Adjust the image size and retry. |
| 407 | The image format is not supported. Convert the image to a supported format and retry. |
| 408 | The account does not have permission to call this API. The service may not be activated, the account may have overdue payments, or the calling RAM user may not be authorized. |
| 500 | A system exception occurred. |
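Because only code 200 is billed and successful, callers usually separate transient failures from permanent ones. A sketch of one reasonable classification; the grouping is our suggestion, not part of the API contract:

```python
# Conditions possibly worth retrying with backoff: QPS throttling (403),
# download errors and timeouts (404, 405), and system exceptions (500).
RETRYABLE = {403, 404, 405, 500}

def should_retry(code: int) -> bool:
    """Codes 400-402 and 406-408 indicate request problems; fix, don't retry."""
    return code in RETRYABLE

for code in (200, 402, 403, 500):
    print(code, "retry" if should_retry(code) else "no retry")
```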