API description
You can use the Image Moderation 2.0 API to detect whether an image contains content that violates regulations, disrupts platform order, or negatively affects the user experience. The API supports more than 40 content risk labels and more than 40 risk control items. Using Image Moderation 2.0 of Content Moderation, you can implement further moderation or administration measures for specific image content based on your business scenario, platform content administration rules, and the risk labels and confidence scores returned by the API. For more information, see Introduction to Image Moderation 2.0 and Billing.
Connection guide
Create an Alibaba Cloud account. Register now and follow the on-screen instructions.
Activate the pay-as-you-go Content Moderation service. Make sure that you have activated the service. You are not charged for activating the service. After you use the service, you are automatically charged based on your usage. For more information, see Billing details.
Create an AccessKey. Make sure that you have used RAM to create an AccessKey. If you use the AccessKey of a RAM user, you must use your Alibaba Cloud account to grant the AliyunYundunGreenWebFullAccess permission to the RAM user. For more information, see RAM authorization.
Developer integration. We recommend that you use SDKs to call the API. For more information, see Image Moderation 2.0 SDKs and usage guide.
Usage notes
You can call the Image Moderation 2.0 API to create an image moderation task. For more information about how to construct an HTTP request, see Make HTTPS calls. You can also use an SDK to make requests. For more information, see Image Moderation 2.0 SDKs and usage guide.
API Operation: ImageModeration
Supported Regions and Endpoints:
Region | Public endpoint | VPC endpoint | Supported services |
Singapore | https://green-cip.ap-southeast-1.aliyuncs.com | https://green-cip-vpc.ap-southeast-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
UK (London) | https://green-cip.eu-west-1.aliyuncs.com | Not available | |
US (Virginia) | https://green-cip.us-east-1.aliyuncs.com | https://green-cip-vpc.us-east-1.aliyuncs.com | baselineCheck_global, aigcDetector_global |
US (Silicon Valley) | https://green-cip.us-west-1.aliyuncs.com | Not available | |
Germany (Frankfurt) | https://green-cip.eu-central-1.aliyuncs.com | Not available | |
Note: The UK (London) region reuses the console configurations of the Singapore region. The US (Silicon Valley) and Germany (Frankfurt) regions reuse the console configurations of the US (Virginia) region.
Billing:
This is a billable API operation. You are charged only for requests that return an HTTP status code of 200. Requests that return other error codes are not billed. For more information about billing methods, see Billing details.
Image Requirements:
The following image formats are supported: PNG, JPG, JPEG, BMP, WEBP, TIFF, SVG, HEIF (the longest edge must be less than 8,192 px), GIF (only the first frame is used), and ICO (only the last image is used).
An image cannot exceed 20 MB in size. The height or width cannot exceed 16,384 px, and the total number of pixels cannot exceed 167 million. We recommend that the image resolution be greater than 200 × 200 px. A low resolution may affect the performance of the moderation algorithm.
The image download time is limited to 3 seconds. If the download takes longer than 3 seconds, a timeout error is returned.
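The limits above can be checked on the client before a request is submitted, which avoids paying a round trip for an image that will be rejected. The following is a minimal sketch; the helper name and return shape are illustrative, not part of the API, and the thresholds are taken from the requirements listed above.

```python
# Client-side pre-check of the documented image limits before calling
# ImageModeration. The helper name and return shape are illustrative.

MAX_BYTES = 20 * 1024 * 1024       # 20 MB maximum file size
MAX_EDGE_PX = 16_384               # maximum height or width
MAX_PIXELS = 167_000_000           # maximum total pixel count
MIN_RECOMMENDED_EDGE_PX = 200      # below this, accuracy may degrade

def precheck_image(size_bytes: int, width: int, height: int) -> list[str]:
    """Return a list of problems; an empty list means the image passes."""
    problems = []
    if size_bytes > MAX_BYTES:
        problems.append("file exceeds 20 MB")
    if max(width, height) > MAX_EDGE_PX:
        problems.append("height or width exceeds 16,384 px")
    if width * height > MAX_PIXELS:
        problems.append("total pixels exceed 167 million")
    if min(width, height) < MIN_RECOMMENDED_EDGE_PX:
        problems.append("resolution below the recommended 200 x 200 px")
    return problems

print(precheck_image(5 * 1024 * 1024, 1920, 1080))   # []
print(precheck_image(25 * 1024 * 1024, 100, 100))
```

A pre-check like this cannot replace server-side validation (for example, it cannot detect an unsupported format byte stream), but it catches the common size violations cheaply.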
QPS limit
The queries per second (QPS) limit for a single user is 100 calls/second. If you exceed this limit, API calls are throttled, which may affect your business. If your business requires a higher QPS or you have an urgent scale-out need, contact your business manager.
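To stay under the 100 calls/second account-level limit, a client can throttle itself. The following sketch shows one common approach (a fixed minimum interval between calls); it is illustrative and not part of any SDK.

```python
# A minimal client-side rate limiter to stay under the account-level
# QPS limit. This enforces a fixed minimum interval between calls,
# which is one simple throttling strategy; it is not part of the SDK.
import time

class RateLimiter:
    def __init__(self, max_qps: float):
        self.min_interval = 1.0 / max_qps
        self.last_call = 0.0

    def wait(self) -> None:
        """Sleep just long enough to keep the call rate under max_qps."""
        now = time.monotonic()
        elapsed = now - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

limiter = RateLimiter(max_qps=100)
start = time.monotonic()
for _ in range(5):
    limiter.wait()          # call the ImageModeration API here
elapsed = time.monotonic() - start
print(f"5 calls took at least {elapsed:.3f}s")
```

For multi-process or multi-host deployments, a shared limiter (for example, one backed by a central counter) is needed, because the QPS limit applies per account, not per client.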
Debug
Before you integrate the API, you can use Alibaba Cloud OpenAPI Explorer to debug the Image Moderation 2.0 API online. You can view sample code for calls and SDK dependency information to get an overview of how to use the API and its parameters.
The online debugging feature calls the Content Moderation API based on the currently logged-on account. Therefore, the number of calls is included in the billable usage of the account.
Request parameters
For more information about the common request parameters that must be included in a request, see Common parameters.
The request body is a JSON struct that contains the following fields:
Name | Type | Required | Example | Description |
Service | String | Yes | baselineCheck_global | The moderation service supported by Image Moderation 2.0. Valid values:
Note For the differences between services, see Service Description. For the AIGC-dedicated service, see AIGC Scenario Detection Service. The international version can be used only in regions outside China. |
ServiceParameters | JSONString | Yes | | The parameters related to the content moderation object. The value is a JSON string. For more information about each field, see ServiceParameters. |
Table 1. ServiceParameters
Name | Type | Required | Example | Description |
imageUrl | String | Yes. Image Moderation 2.0 supports three ways to pass images. Select one of the following: | https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png | The URL of the object to be moderated. Make sure that the URL is accessible over the public network and that its length does not exceed 2,048 characters. |
ossBucketName | String | | bucket_01 | The name of the authorized OSS bucket. Note: Before you use the VPC endpoint of an OSS image, you must use your Alibaba Cloud account to go to the Cloud Resource Access Authorization page and grant the AliyunCIPScanOSSRole role. |
ossObjectName | String | | 2022023/04/24/test.jpg | The name of the file in the authorized OSS bucket. Note: 1. Pass the original filename from OSS; you cannot append image processing parameters. To use image processing parameters, pass the image by imageUrl instead. 2. If a filename contains Chinese characters or spaces, pass it as is. It does not need to be URL-encoded. |
ossRegionId | String | | cn-beijing | The region where the OSS bucket is located. |
dataId | String | No | img123**** | The data ID of the object to be moderated. It can consist of uppercase and lowercase letters, digits, underscores (_), hyphens (-), and periods (.). It can be up to 64 characters long and can be used to uniquely identify your business data. |
referer | String | No | www.aliyun.com | The referer request header, used for scenarios such as hotlink protection. The length cannot exceed 256 characters. |
infoType | String | Yes | customImage,textInImage | The auxiliary information to return. Valid values: You can specify multiple values, separated by commas. For example, "customImage,textInImage" indicates that both information about custom image library hits and the text in the image are returned. Note: Public figure information and logo information can be returned only by an advanced image moderation service. For more information, see Service description. |
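Note that ServiceParameters is typed as a JSON string, not a nested object, so it must be serialized separately before it is placed in the outer request body. The following sketch assembles such a body with the standard library; the variable names are illustrative.

```python
# Assembling the request body described above. ServiceParameters is a
# JSON *string*, so the inner object is serialized with json.dumps
# before being placed in the outer struct.
import json

service_parameters = {
    "imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
    "dataId": "img123****",
    "infoType": "customImage,textInImage",  # multiple values, comma-separated
}

request_body = {
    "Service": "baselineCheck_global",
    "ServiceParameters": json.dumps(service_parameters),
}

print(type(request_body["ServiceParameters"]).__name__)  # str
print(request_body["Service"])                           # baselineCheck_global
```

Passing the inner object without serializing it is a common integration mistake; the request example later in this topic shows the parameters pretty-printed as an object only for readability.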
Response data
Name | Type | Example | Description |
RequestId | String | 70ED13B0-BC22-576D-9CCF-1CC12FEAC477 | The ID of this request. Alibaba Cloud generates a unique ID for each request, which can be used for troubleshooting and issue tracking. |
Data | Object | | The result of image moderation. For more information, see Data. |
Code | Integer | 200 | The returned HTTP status code. For more information, see Response codes. |
Msg | String | OK | The response message for this request. |
Table 2. Data
Name | Type | Example | Description |
Result | Array | | The results of image moderation, such as risk labels and confidence scores. For more information, see result. |
RiskLevel | String | high | The risk level of the image, returned based on the label with the highest risk. Valid values: Note: We recommend that you take immediate action on high-risk content and manually review medium-risk content. Process low-risk content only when you have high recall requirements; otherwise, you can treat it the same as content for which no risk is detected. You can configure risk score thresholds in the Content Moderation console. |
DataId | String | img123****** | The data ID of the moderated object. Note If you passed a dataId in the request, the corresponding dataId is returned here. |
Ext | Object | | Auxiliary reference information for the image. For more information, see Auxiliary information. |
Table 3. result
Name | Type | Example | Description |
Label | String | violent_explosion | The label returned after image content moderation. Multiple labels and scores may be returned for a single image. For supported labels, see: |
Confidence | Float | 81.22 | The confidence score. Valid values: 0 to 100. The value is accurate to two decimal places. Some labels do not have a confidence score. For more information, see Descriptions of risk labels. |
Description | String | Fireworks content | A description of the Label field. Important: This field is an explanation of the Label field and may change without notice. When you process results, base your logic on the Label field, not on this field. |
RiskLevel | String | high | The risk level of the current label, returned based on the configured high and low risk score thresholds. Valid values: |
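Putting the response fields together, a caller typically routes each image by the top-level RiskLevel, following the recommendation above: block high-risk content, queue medium-risk content for manual review, and pass the rest. The sketch below parses a response shaped like Tables 2 and 3; the action names are hypothetical.

```python
# Routing a moderation response by RiskLevel. The response dict mirrors
# the documented fields; the downstream action names are illustrative.
import json

response = json.loads("""
{"Msg": "OK", "Code": 200,
 "Data": {"DataId": "img123****",
          "Result": [{"Label": "violent_explosion", "Confidence": 70.0,
                      "Description": "Fireworks content"}],
          "RiskLevel": "high"},
 "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"}
""")

def route(data: dict) -> str:
    """Map RiskLevel to a hypothetical downstream action."""
    actions = {"high": "block", "medium": "manual_review",
               "low": "pass", "none": "pass"}
    return actions.get(data.get("RiskLevel", "none"), "manual_review")

data = response["Data"]
print(route(data))  # block
for item in data["Result"]:
    # Base decisions on Label, not Description (Description may change).
    print(item["Label"], item.get("Confidence"))
```

Unknown or future RiskLevel values fall back to manual review here, which is a conservative default rather than documented behavior.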
Examples
Request example
{
"Service": "baselineCheck_global",
"ServiceParameters": {
"imageUrl": "https://img.alicdn.com/tfs/TB1U4r9AeH2gK0jSZJnXXaT1FXa-2880-480.png",
"dataId": "img123****"
}
}
Response example
If the system detects risky content, the following response is returned:
{
    "Msg": "OK",
    "Code": 200,
    "Data": {
        "DataId": "img123****",
        "Result": [
            {
                "Label": "pornographic_adultContent",
                "Confidence": 81,
                "Description": "Adult pornographic content"
            },
            {
                "Label": "sexual_partialNudity",
                "Confidence": 98,
                "Description": "Partial nudity or sexy"
            },
            {
                "Label": "violent_explosion",
                "Confidence": 70,
                "Description": "Fireworks content"
            },
            {
                "Label": "violent_explosion_lib",
                "Confidence": 81,
                "Description": "Fireworks content_Hit custom library"
            }
        ],
        "RiskLevel": "high"
    },
    "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
If the system does not detect any risky content, the following response is returned:
{
    "Msg": "OK",
    "Code": 200,
    "Data": {
        "DataId": "img123****",
        "Result": [
            {
                "Label": "nonLabel",
                "Description": "No risk detected"
            }
        ],
        "RiskLevel": "none"
    },
    "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
If the system detects that the image you submitted matches an image in your configured allowlist, the following response is returned:
{
    "Msg": "OK",
    "Code": 200,
    "Data": {
        "DataId": "img123****",
        "Result": [
            {
                "Label": "nonLabel_lib",
                "Confidence": 83,
                "Description": "Hit allowlist"
            }
        ],
        "RiskLevel": "none"
    },
    "RequestId": "ABCD1234-1234-1234-1234-1234XYZ"
}
The request and response examples in this document are formatted for readability. The actual results are not formatted with line breaks or indentation.
Risk label definitions
The following describes the risk label values, their corresponding score ranges, and their meanings. You can enable or disable each risk label in the console. For some risk labels, you can also configure a more granular detection scope. For more information, see the Console User Guide. The labels that are supported by each image service are listed below.
Scenario | Service and labels |
General scenarios | Supported labels for general baseline check (baselineCheck_global) |
AIGC scenarios | Supported labels for AI-generated image detection (aigcDetector_global) |
For labels returned when there is no risk or the review-free gallery is matched, see Supported labels when there is no risk or the review-free gallery is matched.
We recommend that you store the risk labels and confidence scores returned by the system for a certain period. This lets you reference them for subsequent content governance. You can set priorities for manual review or annotation, and implement layered and categorized content governance measures based on the risk labels.
Table 4. Labels supported by general baseline check (baselineCheck_global)
Label (label) | Confidence score range (confidence) | Description |
pornographic_adultContent | 0 to 100. A higher score indicates a higher confidence level. | The image may contain adult or pornographic content. |
pornographic_cartoon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain pornographic cartoon content. |
pornographic_adultToys | 0 to 100. A higher score indicates a higher confidence level. | The image may contain adult toy content. |
pornographic_art | 0 to 100. A higher score indicates a higher confidence level. | The image may contain pornographic artwork. |
pornographic_adultContent_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain pornographic content. |
pornographic_suggestive_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain vulgar content. |
pornographic_o_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain inappropriate content. For more information, see the Content Moderation console. |
pornographic_organs_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe sexual organs. |
pornographic_adultToys_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain content about adult toys. |
sexual_suggestiveContent | 0 to 100. A higher score indicates a higher confidence level. | The image may contain vulgar or sexually suggestive content. |
sexual_femaleUnderwear | 0 to 100. A higher score indicates a higher confidence level. | The image may contain underwear or swimwear. |
sexual_cleavage | 0 to 100. A higher score indicates a higher confidence level. | The image may feature female cleavage. |
sexual_maleTopless | 0 to 100. A higher score indicates a higher confidence level. | The image may show shirtless men. |
sexual_cartoon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain sexually suggestive animated content. |
sexual_shoulder | 0 to 100. A higher score indicates a higher confidence level. | The image may show sexually suggestive shoulders. |
sexual_femaleLeg | 0 to 100. A higher score indicates a higher confidence level. | The image may show sexually suggestive legs. |
sexual_pregnancy | 0 to 100. A higher score indicates a higher confidence level. | The image may contain pregnancy photos or breastfeeding. |
sexual_feet | 0 to 100. A higher score indicates a higher confidence level. | The image may show sexually suggestive feet. |
sexual_kiss | 0 to 100. A higher score indicates a higher confidence level. | The image may contain kissing. |
sexual_intimacy | 0 to 100. A higher score indicates a higher confidence level. | The image may contain intimate behavior. |
sexual_intimacyCartoon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain intimate actions in cartoons or anime. |
violent_explosion | 0 to 100. A higher score indicates a higher confidence level. | The image may contain content related to smoke or fire. For more information, see the Content Moderation console. |
violent_burning | 0 to 100. A higher score indicates a higher confidence level. | The image may contain burning content. |
violent_armedForces | 0 to 100. A higher score indicates a higher confidence level. | The image may contain content related to a terrorist organization. |
violent_weapon | 0 to 100. A higher score indicates a higher confidence level. | The image may contain military equipment. |
violent_crowding | 0 to 100. A higher score indicates a higher confidence level. | The image may show a crowd gathering. |
violent_gun | 0 to 100. A higher score indicates a higher confidence level. | The image may contain guns. |
violent_knives | 0 to 100. A higher score indicates a higher confidence level. | The image may contain knives. |
violent_horrific | 0 to 100. A higher score indicates a higher confidence level. | The image may contain horrific content. |
violent_nazi | 0 to 100. A higher score indicates a higher confidence level. | The image may contain Nazi-related content. |
violent_bloody | 0 to 100. A higher score indicates a higher confidence level. | The image may contain bloody content. |
violent_extremistGroups_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain content about extremist groups. |
violent_extremistIncident_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain content about extremist incidents. |
violence_weapons_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe guns and knives. |
violent_ACU | 0 to 100. A higher score indicates a higher confidence level. | The image may contain combat uniforms. |
contraband_drug | 0 to 100. A higher score indicates a higher confidence level. | The image may contain drug-related content. |
contraband_drug_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe illegal drugs. |
contraband_gamble | 0 to 100. A higher score indicates a higher confidence level. | The image may contain gambling-related content. |
contraband_gamble_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may describe gambling. |
inappropriate_smoking | 0 to 100. A higher score indicates a higher confidence level. | The image may contain smoking-related content. |
inappropriate_drinking | 0 to 100. A higher score indicates a higher confidence level. | The image may contain alcohol-related content. |
inappropriate_tattoo | 0 to 100. A higher score indicates a higher confidence level. | The image may contain tattoos. |
inappropriate_middleFinger | 0 to 100. A higher score indicates a higher confidence level. | The image may show a middle finger gesture. |
inappropriate_foodWasting | 0 to 100. A higher score indicates a higher confidence level. | The image may contain content about wasting food. |
profanity_Offensive_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain severe profanity, verbal attacks, or offensive content. |
profanity_Oral_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain colloquial profanity. |
religion_clothing | 0 to 100. A higher score indicates a higher confidence level. | The image may contain special logos and elements. For more information, see the Content Moderation console. |
religion_logo | 0 to 100. A higher score indicates a higher confidence level. | |
religion_flag | 0 to 100. A higher score indicates a higher confidence level. | |
religion_taboo1_tii | 0 to 100. A higher score indicates a higher confidence level. | The text in the image may contain prohibited content. For more information, see the Content Moderation console. |
religion_taboo2_tii | 0 to 100. A higher score indicates a higher confidence level. | |
flag_country | 0 to 100. A higher score indicates a higher confidence level. | The image may contain flag-related content. |
political_historicalNihility | 0 to 100. A higher score indicates a higher confidence level. | The image may contain specific content. For more information, see the Content Moderation console. |
political_historicalNihility_tii | 0 to 100. A higher score indicates a higher confidence level. | |
political_politicalFigure_1 | 0 to 100. A higher score indicates a higher confidence level. | |
political_politicalFigure_2 | 0 to 100. A higher score indicates a higher confidence level. | |
political_politicalFigure_3 | 0 to 100. A higher score indicates a higher confidence level. | |
political_politicalFigure_4 | 0 to 100. A higher score indicates a higher confidence level. | |
political_politicalFigure_name_tii | 0 to 100. A higher score indicates a higher confidence level. | |
political_prohibitedPerson_1 | 0 to 100. A higher score indicates a higher confidence level. | |
political_prohibitedPerson_2 | 0 to 100. A higher score indicates a higher confidence level. | |
political_prohibitedPerson_tii | 0 to 100. A higher score indicates a higher confidence level. | |
political_taintedCelebrity | 0 to 100. A higher score indicates a higher confidence level. | |
political_taintedCelebrity_tii | 0 to 100. A higher score indicates a higher confidence level. | |
political_CNFlag | 0 to 100. A higher score indicates a higher confidence level. | |
political_CNMap | 0 to 100. A higher score indicates a higher confidence level. | |
political_logo | 0 to 100. A higher score indicates a higher confidence level. | |
political_outfit | 0 to 100. A higher score indicates a higher confidence level. | |
political_badge | 0 to 100. A higher score indicates a higher confidence level. | |
pt_logo | 0 to 100. A higher score indicates a higher confidence level. | The image may contain a logo. |
QRCode | 0 to 100. A higher score indicates a higher confidence level. | The image may contain a QR code. |
pt_custom_01 | 0 to 100. A higher score indicates a higher confidence level. | Custom label 01. |
pt_custom_02 | 0 to 100. A higher score indicates a higher confidence level. | Custom label 02. |
tii is an abbreviation for "text in image". A label ending in `tii` indicates that a text violation was detected in the image.
In addition, you can configure custom image libraries for each of the risk labels above. If a moderated image has a high similarity to an image in a custom library, the system returns the corresponding risk label. To distinguish them, the label value is formatted as OriginalRiskLabel_lib. For example, if you configure a custom image library for "violent_explosion", and a moderated image matches any image in that library with high similarity, the system returns violent_explosion_lib in the label parameter. The corresponding confidence parameter will represent the degree of similarity as a score.
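Because custom-library hits simply append "_lib" to the original risk label, both kinds of hits can share one handling path by stripping the suffix. A small illustrative helper:

```python
# Hit labels from custom image libraries carry a "_lib" suffix
# (e.g. violent_explosion_lib). This helper recovers the base risk
# label so algorithm hits and custom-library hits can be handled
# by the same logic. Illustrative, not part of the API.
def base_label(label: str) -> str:
    """Strip a trailing '_lib' marker added for custom-library hits."""
    return label[:-4] if label.endswith("_lib") else label

print(base_label("violent_explosion_lib"))  # violent_explosion
print(base_label("violent_explosion"))      # violent_explosion
print(base_label("nonLabel_lib"))           # nonLabel
```

Remember that for a "_lib" hit the confidence score measures similarity to the library image, not the algorithm's risk confidence, so you may still want to branch on the suffix before stripping it.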
If the system detects no anomalies in the submitted image, or if the image has a high similarity to an image in your configured allowlist, the label and confidence score are returned as shown in the following table.
Label (label) | Confidence score range (confidence) | Description |
nonLabel | This field is not present. | No risk was detected in this image, or you have disabled all moderation items. For more information, see the Content Moderation console. |
nonLabel_lib | 0 to 100. A higher score indicates a higher confidence level. | This image has a high similarity to an image in your configured allowlist. For more information, see the Content Moderation console. |
Code descriptions
The following describes the meaning of the codes returned by the API. You are charged only for requests that return an HTTP status code of 200. Requests that return other error codes are not billed.
Code | Description |
200 | The request is successful. |
400 | A request parameter is empty. |
401 | A request parameter is invalid. |
402 | The length of a request parameter does not meet the API requirements. Check and modify it. |
403 | The request exceeds the QPS limit. Check and adjust the concurrency. |
404 | An error occurred while downloading the passed image. Check or retry. |
405 | The download of the passed image timed out. This may be because the image is inaccessible. Check, adjust, and retry. |
406 | The passed image is too large. Check and adjust the image size, then retry. |
407 | The format of the passed image is not supported. Check and adjust, then retry. |
408 | The account does not have permission to call this API. This may be because the account is not activated, has overdue payments, or the calling account is not authorized. |
500 | A system exception occurred. |
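The table above suggests a natural client-side error policy: retry transient problems (download errors, timeouts, system exceptions), back off before retrying when throttled, and surface parameter or permission errors to the caller. The classification below is an interpretation of this table, not an official SDK policy.

```python
# Code-aware error handling based on the response-code table above.
# The grouping is an interpretation of the table, not an SDK policy.
RETRYABLE = {404, 405, 500}   # download error, download timeout, system exception
THROTTLED = {403}             # QPS limit exceeded: back off, then retry

def classify(code: int) -> str:
    if code == 200:
        return "ok"
    if code in THROTTLED:
        return "backoff_and_retry"
    if code in RETRYABLE:
        return "retry"
    return "fail"             # 400/401/402/406/407/408: fix the request

for code in (200, 403, 405, 401):
    print(code, classify(code))
```

Since only requests that return 200 are billed, retrying failed requests does not incur extra charges, but retries still count toward the QPS limit.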