Queries the results of a content moderation job.

Usage notes

In the content moderation results, the moderation results of the video are sorted into a timeline in ascending order by time. If the video is long, the moderation results are paginated, and the first call returns only the first page. You can call this operation repeatedly to query the full moderation results of the video.
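
The pagination described above can be sketched as a loop that keeps requesting pages until no NextPageToken is returned. The client function below is a hypothetical stand-in for the real API call (for example, through an Alibaba Cloud SDK client); it is stubbed with canned pages so the pattern can run standalone.

```python
# Hypothetical stand-in for the QueryMediaCensorJobDetail call; the
# canned pages below only illustrate the NextPageToken handshake.
def query_media_censor_job_detail(job_id, next_page_token=None):
    pages = {
        None: {"VideoTimelines": [{"Timestamp": "00:00:05.005"}],
               "NextPageToken": "page2****"},
        "page2****": {"VideoTimelines": [{"Timestamp": "00:00:10.005"}],
                      "NextPageToken": ""},
    }
    return pages[next_page_token]

def fetch_all_timelines(job_id):
    timelines, token = [], None
    while True:
        result = query_media_censor_job_detail(job_id, token)
        timelines.extend(result["VideoTimelines"])
        token = result.get("NextPageToken")
        if not token:  # an empty token means there are no more pages
            break
    return timelines

print(len(fetch_all_timelines("2288c6ca184c0e47098a5b665e2a12****")))  # 2
```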

QPS limit

You can call this operation up to 100 times per second per account. If you exceed this limit, throttling is triggered, and your business may be affected. We recommend that you take note of this limit when you call this operation. For more information, see QPS limit.
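
To stay under the 100-calls-per-second budget, a caller can add a simple client-side throttle. The sketch below is an illustrative pattern, not part of the MPS API; the class name is hypothetical.

```python
import time

# Minimal client-side throttle: enforces a minimum interval between
# calls so the account-level QPS limit is not exceeded.
class SimpleThrottle:
    def __init__(self, qps):
        self.min_interval = 1.0 / qps
        self.last_call = 0.0

    def wait(self):
        delay = self.last_call + self.min_interval - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        self.last_call = time.monotonic()

throttle = SimpleThrottle(100)  # at most 100 calls per second
start = time.monotonic()
for _ in range(5):
    throttle.wait()  # then call QueryMediaCensorJobDetail
elapsed = time.monotonic() - start
```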

Debugging

OpenAPI Explorer automatically calculates the signature value. We recommend that you call this operation in OpenAPI Explorer, which dynamically generates sample code for the operation in different SDKs.

Request parameters

Parameter Type Required Example Description
Action String Yes QueryMediaCensorJobDetail

The operation that you want to perform. Set the value to QueryMediaCensorJobDetail.

JobId String Yes 2288c6ca184c0e47098a5b665e2a12****

The ID of the content moderation job. You can obtain the job ID from the response parameters of the SubmitMediaCensorJob operation.

NextPageToken String No ae0fd49c0840e14daf0d66a75b83****

The token that is used to retrieve the next page of the query results. Leave this parameter empty the first time you call this operation for a content moderation job. The token for the next page is included in the response to the first call.

MaximumPageSize Long No 30

The maximum number of entries to return on each page.

  • Valid values: 1 to 300.
  • Default value: 30.

Response parameters

Parameter Type Example Description
RequestId String B42299E6-F71F-465F-8FE9-4FC2E3D3C2CA

The ID of the request.

MediaCensorJobDetail Object

The results of the content moderation job.

CreationTime String 2018-09-13T16:32:24Z

The time when the job was created.

FinishTime String 2018-09-21

The time when the job was completed.

Suggestion String Block

The overall result of the job. Valid values:

  • Pass: The content passes the moderation.
  • Review: The content needs to be manually reviewed again.
  • Block: The content needs to be blocked.

If the moderation result of any type of moderated content is Review, the overall result is Review. If the moderation result of any type of moderated content is Block, the overall result is Block.
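
The aggregation rule above can be expressed as a small helper, assuming Block takes precedence over Review when both appear, as the sample response suggests. The function name is illustrative.

```python
# Aggregate per-item suggestions into an overall result:
# Block wins over Review, which wins over Pass.
def overall_suggestion(suggestions):
    if "Block" in suggestions:
        return "Block"
    if "Review" in suggestions:
        return "Review"
    return "Pass"

print(overall_suggestion(["Pass", "Review", "Pass"]))   # Review
print(overall_suggestion(["Pass", "Review", "Block"]))  # Block
```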

CoverImageCensorResults Array of CoverImageCensorResult

The moderation results of thumbnails.

CoverImageCensorResult
Object String test/ai/censor/v2/vme-****.jpg

The Object Storage Service (OSS) object that is used as the video thumbnail.

Location String oss-cn-shanghai

The OSS region in which the video thumbnail resides.

Bucket String bucket-out-test-****

The OSS bucket that stores the video thumbnail.

Results Array of Result

The moderation results.

Result
Suggestion String Pass

The recommended subsequent operation. Valid values:

  • Pass: The content passes the moderation.
  • Review: The content needs to be manually reviewed again.
  • Block: The content needs to be blocked.
Label String Normal

The label of the moderation result. Valid values:

  • Normal: normal content
  • Spam: spam
  • Ad: ad
  • Politics: political content
  • Terrorism: terrorist content
  • Abuse: abuse
  • Flood: excessive junk content
  • Contraband: prohibited content
  • Meaningless: meaningless content
  • Porn: pornographic content
  • Sexy: sexy content
  • Outfit: special costume
  • Logo: special logo
  • Weapon: weapon
  • Politic: political content
  • Others: other content
Scene String Antispam

The moderation scenario. Valid values:

  • Antispam: text anti-spam
  • Porn: pornographic content detection
  • Terrorism: terrorist content detection
Rate String 100

The score. Valid values: 0 to 100.

State String Success

The state of the job.

TitleCensorResult Object

The moderation result of the title.

Suggestion String Block

The recommended subsequent operation. Valid values:

  • Pass: The content passes the moderation.
  • Review: The content needs to be manually reviewed again.
  • Block: The content needs to be blocked.
Label String Meaningless

The label of the moderation result. Valid values:

  • Normal: normal content
  • Spam: spam
  • Ad: ad
  • Politics: political content
  • Terrorism: terrorist content
  • Abuse: abuse
  • Flood: excessive junk content
  • Contraband: prohibited content
  • Meaningless: meaningless content
  • Porn: pornographic content
  • Sexy: sexy content
  • Outfit: special costume
  • Logo: special logo
  • Weapon: weapon
  • Politic: political content
  • Others: other content
Scene String Antispam

The moderation scenario. Valid values:

  • Antispam: text anti-spam
  • Porn: pornographic content detection
  • Terrorism: terrorist content detection
Rate String 99.91

The score.

Message String The resource operated cannot be found

The error message returned if the job fails. This parameter is not returned if the job is successful.

Input Object

The information about the input file.

Object String test/ai/censor/test-****.mp4

The OSS object that is used as the input file.

Location String oss-cn-shanghai

The OSS region in which the input file resides.

Bucket String bucket-test-in-****

The OSS bucket that stores the input file.

BarrageCensorResult Object

The moderation result of live comments.

Suggestion String Pass

The recommended subsequent operation. Valid values:

  • Pass: The content passes the moderation.
  • Review: The content needs to be manually reviewed again.
  • Block: The content needs to be blocked.
Label String Normal

The label of the moderation result. Valid values:

  • Valid values in the porn moderation scenario:
    • normal: normal content
    • sexy: sexy content
    • porn: pornographic content
  • Valid values in the terrorism moderation scenario:
    • normal: normal content
    • bloody: bloody content
    • explosion: explosion and smoke
    • outfit: special costume
    • logo: special logo
    • weapon: weapon
    • politics: political content
    • violence: violence
    • crowd: crowd
    • parade: parade
    • carcrash: car accident
    • flag: flag
    • location: landmark
    • others: other content
  • Valid values in the ad moderation scenario:
    • normal: normal content
    • ad: other ads
    • politics: political content in text
    • porn: pornographic content in text
    • abuse: abuse in text
    • terrorism: terrorist content in text
    • contraband: prohibited content in text
    • spam: spam in text
    • npx: illegal ad
    • qrcode: QR code
    • programCode: mini program code
  • Valid values in the live moderation scenario:
    • normal: normal content
    • meaningless: meaningless content, such as a black or white screen
    • PIP: picture-in-picture
    • smoking: smoking
    • drivelive: live broadcasting in a running vehicle
  • Valid values in the logo moderation scenario:
    • normal: normal content
    • TV: controlled TV station logo
    • trademark: trademark
Scene String Antispam

The moderation scenario. Valid values:

  • Antispam: text anti-spam
  • Porn: pornographic content detection
  • Terrorism: terrorist content detection
Rate String 99.91

The score.

DescCensorResult Object

The moderation result of the description.

Suggestion String Review

The recommended subsequent operation. Valid values:

  • Pass: The content passes the moderation.
  • Review: The content needs to be manually reviewed again.
  • Block: The content needs to be blocked.
Label String Terrorism

The label of the moderation result. Valid values:

  • Normal: normal content
  • Spam: spam
  • Ad: ad
  • Politics: political content
  • Terrorism: terrorist content
  • Abuse: abuse
  • Flood: excessive junk content
  • Contraband: prohibited content
  • Meaningless: meaningless content
  • Porn: pornographic content
  • Sexy: sexy content
  • Outfit: special costume
  • Logo: special logo
  • Weapon: weapon
  • Politic: political content
  • Others: other content
Scene String Terrorism

The moderation scenario. Valid values:

  • Antispam: text anti-spam
  • Porn: pornographic content detection
  • Terrorism: terrorist content detection
Rate String 100

The score.

VideoCensorConfig Object

The video moderation configurations.

OutputFile Object

The information about output snapshots.

Object String output{Count}.jpg

The OSS object that is generated as the output snapshot.

Note In the example, {Count} is a placeholder. The OSS objects that are generated as output snapshots are named output00001-****.jpg, output00002-****.jpg, and so on.
Location String oss-cn-shanghai

The OSS region in which the OSS bucket for storing the output snapshot resides.

Bucket String test-bucket-****

The OSS bucket that stores the output snapshot.

VideoCensor String true

Indicates whether the video content needs to be moderated. Default value: true. Valid values:

  • true: The video content needs to be moderated.
  • false: The video content does not need to be moderated.
BizType String common

The custom business type. Default value: common.

JobId String f8f166eea7a44e9bb0a4aecf9543****

The ID of the content moderation job.

UserData String example userdata ****

The custom data.

Code String InvalidParameter.ResourceNotFound

The error code returned if the job fails. This parameter is not returned if the job is successful.

VensorCensorResult Object

The moderation results of the video.

VideoTimelines Array of VideoTimeline

The moderation results that are sorted in ascending order by time.

VideoTimeline
Timestamp String 00:02:59.999

The position in the video.

Format: hh:mm:ss[.SSS].
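
Timestamps in this format can be converted to milliseconds, for example to sort or window the timeline entries. The helper name below is illustrative.

```python
# Convert an hh:mm:ss[.SSS] timestamp to milliseconds.
def timestamp_to_ms(ts):
    hms, _, millis = ts.partition(".")
    h, m, s = (int(part) for part in hms.split(":"))
    return ((h * 60 + m) * 60 + s) * 1000 + int(millis or 0)

print(timestamp_to_ms("00:02:59.999"))  # 179999
```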

Object String output{Count}.jpg

The OSS object that is generated as the output snapshot.

Note In the example, {Count} is a placeholder. The OSS objects that are generated as output snapshots are named output00001-****.jpg, output00002-****.jpg, and so on.
CensorResults Array of CensorResult

The moderation results that include information such as labels and scores.

CensorResult
Suggestion String Block

The recommended subsequent operation. Valid values:

  • Pass: The content passes the moderation.
  • Review: The content needs to be manually reviewed again.
  • Block: The content needs to be blocked.
Label String Flood

The label of the moderation result. Valid values:

  • Normal: normal content
  • Spam: spam
  • Ad: ad
  • Politics: political content
  • Terrorism: terrorist content
  • Abuse: abuse
  • Flood: excessive junk content
  • Contraband: prohibited content
  • Meaningless: meaningless content
  • Porn: pornographic content
  • Sexy: sexy content
  • Outfit: special costume
  • Logo: special logo
  • Weapon: weapon
  • Politic: political content
  • Others: other content
Scene String Porn

The moderation scenario. Valid values:

  • Antispam: text anti-spam
  • Porn: pornographic content detection
  • Terrorism: terrorist content detection
Rate String 99.99

The score.

NextPageToken String ea04afcca7cd4e80b9ece8fbb251****

The token that is used to retrieve the next page of the query results.

CensorResults Array of CensorResult

A collection of moderation results, including a summary for each moderation scenario, such as Porn and Terrorism.

CensorResult
Suggestion String Review

The recommended subsequent operation. Valid values:

  • Pass: The content passes the moderation.
  • Review: The content needs to be manually reviewed again.
  • Block: The content needs to be blocked.
Label String Meaningless

The label of the moderation result. Valid values:

  • Normal: normal content
  • Spam: spam
  • Ad: ad
  • Politics: political content
  • Terrorism: terrorist content
  • Abuse: abuse
  • Flood: excessive junk content
  • Contraband: prohibited content
  • Meaningless: meaningless content
  • Porn: pornographic content
  • Sexy: sexy content
  • Outfit: special costume
  • Logo: special logo
  • Weapon: weapon
  • Politic: political content
  • Others: other content
Scene String Terrorism

The moderation scenario. Valid values:

  • Antispam: text anti-spam
  • Porn: pornographic content detection
  • Terrorism: terrorist content detection
Rate String 100

The score.

PipelineId String c5b30b7c0d0e4a0abde1d5f9e751****

The ID of the MPS queue that is used to run the job.

Examples

Sample requests

http(s)://mts.cn-shanghai.aliyuncs.com/?Action=QueryMediaCensorJobDetail
&ResourceOwnerId=1
&JobId=2288c6ca184c0e47098a5b665e2a12****
&NextPageToken=ae0fd49c0840e14daf0d66a75b83****
&MaximumPageSize=30
&<Common request parameters>

Sample success responses

XML format

HTTP/1.1 200 OK
Content-Type:application/xml

<QueryMediaCensorJobDetailResponse>
    <RequestId>B42299E6-F71F-465F-8FE9-4FC2E3D3C2CA</RequestId>
    <MediaCensorJobDetail>
        <CreationTime>2018-09-13T16:32:24Z</CreationTime>
        <FinishTime>2018-09-21</FinishTime>
        <Suggestion>Block</Suggestion>
        <CoverImageCensorResults>
            <Object>test/ai/censor/v2/vme-****.jpg</Object>
            <Location>oss-cn-shanghai</Location>
            <Bucket>bucket-out-test-****</Bucket>
            <Results>
                <Suggestion>Pass</Suggestion>
                <Label>Normal</Label>
                <Scene>Antispam</Scene>
                <Rate>100</Rate>
            </Results>
        </CoverImageCensorResults>
        <State>Success</State>
        <TitleCensorResult>
            <Suggestion>Block</Suggestion>
            <Label>Meaningless</Label>
            <Scene>Antispam</Scene>
            <Rate>99.91</Rate>
        </TitleCensorResult>
        <Message>The resource operated cannot be found</Message>
        <Input>
            <Object>test/ai/censor/test-****.mp4</Object>
            <Location>oss-cn-shanghai</Location>
            <Bucket>bucket-test-in-****</Bucket>
        </Input>
        <BarrageCensorResult>
            <Suggestion>Pass</Suggestion>
            <Label>Normal</Label>
            <Scene>Antispam</Scene>
            <Rate>99.91</Rate>
        </BarrageCensorResult>
        <DescCensorResult>
            <Suggestion>Review</Suggestion>
            <Label>Terrorism</Label>
            <Scene>Terrorism</Scene>
            <Rate>100</Rate>
        </DescCensorResult>
        <VideoCensorConfig>
            <OutputFile>
                <Object>output{Count}.jpg</Object>
                <Location>oss-cn-shanghai</Location>
                <Bucket>test-bucket-****</Bucket>
            </OutputFile>
            <VideoCensor>true</VideoCensor>
            <BizType>common</BizType>
        </VideoCensorConfig>
        <JobId>f8f166eea7a44e9bb0a4aecf9543****</JobId>
        <UserData>example userdata ****</UserData>
        <Code>InvalidParameter.ResourceNotFound</Code>
        <VensorCensorResult>
            <VideoTimelines>
                <Timestamp>00:02:59.999</Timestamp>
                <Object>output{Count}.jpg</Object>
                <CensorResults>
                    <Suggestion>Block</Suggestion>
                    <Label>Flood</Label>
                    <Scene>Porn</Scene>
                    <Rate>99.99</Rate>
                </CensorResults>
            </VideoTimelines>
            <NextPageToken>ea04afcca7cd4e80b9ece8fbb251****</NextPageToken>
            <CensorResults>
                <Suggestion>Review</Suggestion>
                <Label>Meaningless</Label>
                <Scene>Terrorism</Scene>
                <Rate>100</Rate>
            </CensorResults>
        </VensorCensorResult>
        <PipelineId>c5b30b7c0d0e4a0abde1d5f9e751****</PipelineId>
    </MediaCensorJobDetail>
</QueryMediaCensorJobDetailResponse>

JSON format

HTTP/1.1 200 OK
Content-Type:application/json

{
  "RequestId" : "B42299E6-F71F-465F-8FE9-4FC2E3D3C2CA",
  "MediaCensorJobDetail" : {
    "CreationTime" : "2018-09-13T16:32:24Z",
    "FinishTime" : "2018-09-21",
    "Suggestion" : "Block",
    "CoverImageCensorResults" : [ {
      "Object" : "test/ai/censor/v2/vme-****.jpg",
      "Location" : "oss-cn-shanghai",
      "Bucket" : "bucket-out-test-****",
      "Results" : [ {
        "Suggestion" : "Pass",
        "Label" : "Normal",
        "Scene" : "Antispam",
        "Rate" : "100"
      } ]
    } ],
    "State" : "Success",
    "TitleCensorResult" : {
      "Suggestion" : "Block",
      "Label" : "Meaningless",
      "Scene" : "Antispam",
      "Rate" : "99.91"
    },
    "Message" : "The resource operated cannot be found",
    "Input" : {
      "Object" : "test/ai/censor/test-****.mp4",
      "Location" : "oss-cn-shanghai",
      "Bucket" : "bucket-test-in-****"
    },
    "BarrageCensorResult" : {
      "Suggestion" : "Pass",
      "Label" : "Normal",
      "Scene" : "Antispam",
      "Rate" : "99.91"
    },
    "DescCensorResult" : {
      "Suggestion" : "Review",
      "Label" : "Terrorism",
      "Scene" : "Terrorism",
      "Rate" : "100"
    },
    "VideoCensorConfig" : {
      "OutputFile" : {
        "Object" : "output{Count}.jpg",
        "Location" : "oss-cn-shanghai",
        "Bucket" : "test-bucket-****"
      },
      "VideoCensor" : "true",
      "BizType" : "common"
    },
    "JobId" : "f8f166eea7a44e9bb0a4aecf9543****",
    "UserData" : "example userdata ****",
    "Code" : "InvalidParameter.ResourceNotFound",
    "VensorCensorResult" : {
      "VideoTimelines" : [ {
        "Timestamp" : "00:02:59.999",
        "Object" : "output{Count}.jpg",
        "CensorResults" : [ {
          "Suggestion" : "Block",
          "Label" : "Flood",
          "Scene" : "Porn",
          "Rate" : "99.99"
        } ]
      } ],
      "NextPageToken" : "ea04afcca7cd4e80b9ece8fbb251****",
      "CensorResults" : [ {
        "Suggestion" : "Review",
        "Label" : "Meaningless",
        "Scene" : "Terrorism",
        "Rate" : "100"
      } ]
    },
    "PipelineId" : "c5b30b7c0d0e4a0abde1d5f9e751****"
  }
}
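
A caller typically walks the deserialized response to find the frames that were flagged. The sketch below pulls every timeline frame whose Suggestion is Block from a response trimmed to the relevant fields of the sample above.

```python
# `response` mirrors the relevant subset of the sample JSON response.
response = {
    "MediaCensorJobDetail": {
        "Suggestion": "Block",
        "VensorCensorResult": {
            "VideoTimelines": [{
                "Timestamp": "00:02:59.999",
                "CensorResults": [{
                    "Suggestion": "Block", "Label": "Flood",
                    "Scene": "Porn", "Rate": "99.99"
                }]
            }]
        }
    }
}

# Collect (timestamp, scene, score) for every Block result.
blocked = [
    (tl["Timestamp"], cr["Scene"], float(cr["Rate"]))
    for tl in response["MediaCensorJobDetail"]["VensorCensorResult"]["VideoTimelines"]
    for cr in tl["CensorResults"]
    if cr["Suggestion"] == "Block"
]
print(blocked)  # [('00:02:59.999', 'Porn', 99.99)]
```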

Additional description of sample responses

// Sample {jobId}.video_timeline file
// After the moderation is complete, the system generates a JSON file named {jobId}.video_timeline and stores the file in the OSS bucket specified by the CensorConfig.OutputFile parameter of the SubmitMediaCensorJob operation. Sample content of the file:
[
    {
        "CensorResults":[
            {
                "Label":"ad",
                "Rate":"99.9100000000",
                "Scene":"ad"
            }
        ],
        "Object":"gaoshen/test8/example-00001.jpg",
        "Timestamp":"00:00:00.005"
    },
    {
        "CensorResults":[
            {
                "Label":"ad",
                "Rate":"99.9100000000",
                "Scene":"ad"
            }
        ],
        "Object":"gaoshen/test8/example-00002.jpg",
        "Timestamp":"00:00:05.005"
    },
    {
        "CensorResults":[
            {
                "Label":"ad",
                "Rate":"99.9100000000",
                "Scene":"ad"
            }
        ],
        "Object":"gaoshen/test8/example-00003.jpg",
        "Timestamp":"00:00:10.005"
    },
    {
        "CensorResults":[
            {
                "Label":"ad",
                "Rate":"99.9100000000",
                "Scene":"ad"
            }
        ],
        "Object":"gaoshen/test8/example-00004.jpg",
        "Timestamp":"00:00:15.005"
    },
    {
        "CensorResults":[
            {
                "Label":"politics",
                "Rate":"82.6500000000",
                "Scene":"terrorism",
                "SfaceData":[
                    {
                        "faces":[
                            {
                                "id":"AliFace_0006942",
                                "name":"Name",
                                "rate":82.65
                            }
                        ],
                        "h":51,
                        "w":40,
                        "x":744,
                        "y":121
                    }
                ]
            }
        ],
        "Object":"gaoshen/test8/example-00008.jpg",
        "Timestamp":"00:00:35.005"
    },
    {
        "CensorResults":[
            {
                "Label":"politics",
                "Rate":"93.6800000000",
                "Scene":"terrorism",
                "SfaceData":[
                    {
                        "faces":[
                            {
                                "id":"AliFace_0006942",
                                "name":"Name",
                                "rate":93.68
                            }
                        ],
                        "h":119,
                        "w":94,
                        "x":662,
                        "y":94
                    }
                ]
            }
        ],
        "Object":"gaoshen/test8/example-00011.jpg",
        "Timestamp":"00:00:50.005"
    },
    {
        "CensorResults":[
            {
                "Label":"ad",
                "Rate":"99.9100000000",
                "Scene":"ad"
            }
        ],
        "Object":"gaoshen/test8/example-00011.jpg",
        "Timestamp":"00:00:50.005"
    },
    {
        "CensorResults":[
            {
                "Label":"politics",
                "Rate":"89.2900000000",
                "Scene":"terrorism",
                "SfaceData":[
                    {
                        "faces":[
                            {
                                "id":"AliFace_0006942",
                                "name":"Name",
                                "rate":89.29
                            }
                        ],
                        "h":112,
                        "w":98,
                        "x":665,
                        "y":138
                    }
                ]
            }
        ],
        "Object":"gaoshen/test8/example-00012.jpg",
        "Timestamp":"00:00:55.005"
    },
    {
        "CensorResults":[
            {
                "Label":"ad",
                "Rate":"99.9100000000",
                "Scene":"ad"
            }
        ],
        "Object":"gaoshen/test8/example-00012.jpg",
        "Timestamp":"00:00:55.005"
    },
    {
        "CensorResults":[
            {
                "Label":"politics",
                "Rate":"92.1400000000",
                "Scene":"terrorism",
                "SfaceData":[
                    {
                        "faces":[
                            {
                                "id":"AliFace_0007536",
                                "name":"Name",
                                "rate":92.14
                            }
                        ],
                        "h":117,
                        "w":91,
                        "x":665,
                        "y":79
                    }
                ]
            }
        ],
        "Object":"gaoshen/test8/example-00013.jpg",
        "Timestamp":"00:01:00.005"
    },
    {
        "CensorResults":[
            {
                "Label":"ad",
                "Rate":"99.9100000000",
                "Scene":"ad"
            }
        ],
        "Object":"gaoshen/test8/example-00013.jpg",
        "Timestamp":"00:01:00.005"
    },
    {
        "CensorResults":[
            {
                "Label":"politics",
                "Rate":"93.0400000000",
                "Scene":"terrorism",
                "SfaceData":[
                    {
                        "faces":[
                            {
                                "id":"AliFace_0002750",
                                "name":"Name",
                                "rate":93.04
                            }
                        ],
                        "h":120,
                        "w":93,
                        "x":673,
                        "y":79
                    }
                ]
            }
        ],
        "Object":"gaoshen/test8/example-00014.jpg",
        "Timestamp":"00:01:05.005"
    }
]
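
The {jobId}.video_timeline file can be processed like any JSON document, for example to keep only the frames with a result at or above a confidence threshold. The content below is an abbreviated version of the sample above; the function name is illustrative.

```python
import json

# Abbreviated {jobId}.video_timeline content from the sample above.
timeline_json = '''
[
  {"CensorResults": [{"Label": "ad", "Rate": "99.9100000000", "Scene": "ad"}],
   "Object": "gaoshen/test8/example-00001.jpg", "Timestamp": "00:00:00.005"},
  {"CensorResults": [{"Label": "politics", "Rate": "82.6500000000", "Scene": "terrorism"}],
   "Object": "gaoshen/test8/example-00008.jpg", "Timestamp": "00:00:35.005"}
]
'''

# Return the snapshot objects that have at least one result whose
# score meets the threshold.
def frames_above(entries, threshold):
    return [
        entry["Object"]
        for entry in entries
        if any(float(r["Rate"]) >= threshold for r in entry["CensorResults"])
    ]

entries = json.loads(timeline_json)
print(frames_above(entries, 90.0))  # ['gaoshen/test8/example-00001.jpg']
```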

Error codes

For a list of error codes, visit the API Error Center.