Use the SubmitIProductionJob API to submit an intelligent production job.
RAM authorization

| Action | Access level | Resource type | Condition key | Dependent action |
| --- | --- | --- | --- | --- |
| ice:SubmitIProductionJob | create | *All Resources | None | None |
Request parameters

| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| Name | string | No | The job name, up to 100 characters in length. | Test job |
| FunctionName | string | Yes | The algorithm to use. Valid values are the algorithm names listed in the "Input and output fields" section of this topic, such as Cover and VideoDelogo. | Cover |
| Input | object | Yes | The input media asset. Input file requirements vary by algorithm. For details, see the subsequent sections of this topic. | |
| Input.Type | string | Yes | The input type. Valid values: OSS (an OSS object) and Media (a media asset). | OSS |
| Input.Media | string | Yes | The input media asset. If Type is OSS, specify an OSS URL; if Type is Media, specify a media asset ID. | oss://bucket/object |
| Output | object | Yes | The output media asset. The output files vary by algorithm. For details, see the subsequent sections of this topic. | |
| Output.Type | string | Yes | The output type. Valid values: OSS (an OSS object) and Media (a media asset). | OSS |
| Output.Biz | string | No | The service that owns the media asset. | IMS |
| Output.Media | string | Yes | The output media asset. Specify either an OSS URL or a media asset ID, matching the output type. | oss://bucket/object |
| OutputUrl | string | No | The output file URL. | http(s)://bucket.oss-[RegionId].aliyuncs.com/object |
| TemplateId | string | No | The template ID. | ****20b48fb04483915d4f2cd8ac**** |
| JobParams | string | No | The algorithm job parameters, provided as a JSON string. Required parameters vary by algorithm. For details, see the subsequent sections of this topic. | {"Model":"gif"} |
| ScheduleConfig | object | No | The job scheduling configuration. | |
| ScheduleConfig.PipelineId | string | No | The pipeline ID. | 5246b8d12a62433ab77845074039c3dc |
| ScheduleConfig.Priority | integer | No | The job priority. Valid values: 1 to 10. A smaller value indicates a higher priority. | 6 |
| UserData | string | No | Custom user data. This data is returned in the response without modification. The value can be up to 256 characters in length. | {"test":1} |
| ModelId | string | No | The algorithm model ID. If this parameter is empty, the system uses the default model for the algorithm; you can usually leave it empty. A non-default model is documented below for VideoDetext (algo-video-detext-new). | |
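To illustrate how these parameters fit together, the following sketch assembles the parameter map for a Cover job. The helper function, bucket, and object names are illustrative, not part of the API or any SDK; the point is that Input and Output are objects while JobParams is a JSON string.

```python
import json

def build_submit_iproduction_params(function_name, input_media, output_media,
                                    job_params=None, name=None):
    """Assemble the request parameters for SubmitIProductionJob.

    Input and Output are nested objects; JobParams must be serialized
    into a JSON *string*, not passed as a nested object.
    """
    params = {
        "FunctionName": function_name,
        "Input": {"Type": "OSS", "Media": input_media},
        "Output": {"Type": "OSS", "Media": output_media},
    }
    if name:
        params["Name"] = name
    if job_params is not None:
        params["JobParams"] = json.dumps(job_params)
    return params

# Hypothetical bucket and object names, for illustration only.
params = build_submit_iproduction_params(
    "Cover",
    "oss://example-bucket/input.mp4",
    "oss://example-bucket/cover.gif",
    job_params={"Model": "gif"},
    name="Test job",
)
```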
Input and output fields
Cover
Input: a video file. Output: multiple images (three by default), which must be distinguished by using placeholders in the output path. The output format is PNG for static images or GIF for animated images, depending on the settings in JobParams.
VideoDelogo
Input: a video file. Output: a video in MP4 format with the logo removed.
VideoDetext
Input: a video file. Output: a video in MP4 format with captions removed.
CaptionExtraction
Input: a video file. Output: a caption file in SRT format.
HybridCaptionExtraction
Input: a video file. Output: a caption file in SRT format.
VideoGreenScreenMatting
Input: a video file. Output: a video with the green screen background removed. The format is MP4 or WebM, depending on the settings in JobParams.
FaceBeauty
Input: a video file. Output: a beautified video in MP4 format.
VideoH2V
Input: a video file. Output: a video in MP4 format converted from a horizontal to a vertical aspect ratio.
MusicSegmentDetect
Input: an audio file. Output: a JSON file containing the chorus detection results.
AudioBeatDetection
Input: an audio file. Output: a JSON file containing the beat detection results.
AudioQualityAssessment
Input: an audio file. No output file is generated. The audio quality assessment results are returned directly in the response of the QueryIProductionJob operation.
SpeechDenoise
Input: an audio file. Output: a noise-reduced audio file in WAV format.
AudioMixing
Input: an audio file. Output: a mixed audio file in WAV format. For details on how to specify additional audio files for mixing, see the JobParams parameters below.
MusicDemix
Input: an audio file (typically a song). Output: two audio files resulting from source separation. You must include the {resultType} placeholder in the output path to distinguish between the vocals and the accompaniment.
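Because the MusicDemix output path must contain the {resultType} placeholder, a client that wants to predict where the two files will land can expand it itself. The following is an illustrative helper; the labels "vocals" and "accompaniment" are assumed names, not documented substitution values.

```python
def expand_result_paths(output_template,
                        result_types=("vocals", "accompaniment")):
    """Expand the {resultType} placeholder in a MusicDemix output path.

    result_types are assumed labels for the two separated tracks; the
    service substitutes its own values at job time.
    """
    if "{resultType}" not in output_template:
        raise ValueError("MusicDemix output paths must contain {resultType}")
    return [output_template.replace("{resultType}", t) for t in result_types]

paths = expand_result_paths("oss://example-bucket/demix-{resultType}.wav")
```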
JobParams JSON fields
Cover
- Model: string. The model for the smart cover. If this parameter is left empty, a static image is generated. If it is set to gif, an animated image is generated.
VideoDelogo
- LogoModel: string. The type of station logo to remove. Valid values are tv (for television station logos) and internet (for online media logos). You can specify multiple values, separated by commas.
- Boxes: string. The bounding boxes for the target logos. The coordinates are normalized values relative to the top-left corner of the video, in the format [xmin, ymin, width, height]. Supports up to two bounding boxes. Example: "[[0, 0, 0.3, 0.3], [0.7, 0, 0.3, 0.3]]".
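Because Boxes is itself a JSON string of normalized coordinates, malformed values are easy to submit by accident. A small pre-flight check such as the following (an illustrative client-side helper, not part of the API) can catch mistakes before the job is created:

```python
import json

def validate_delogo_boxes(boxes_str, max_boxes=2):
    """Check a VideoDelogo Boxes string: at most max_boxes entries, each
    [xmin, ymin, width, height] with normalized values in [0, 1]."""
    boxes = json.loads(boxes_str)
    if len(boxes) > max_boxes:
        raise ValueError(f"at most {max_boxes} boxes are supported")
    for box in boxes:
        if len(box) != 4 or any(not 0 <= v <= 1 for v in box):
            raise ValueError(f"invalid box: {box}")
        xmin, ymin, w, h = box
        # Small epsilon tolerates floating-point sums like 0.7 + 0.3.
        if xmin + w > 1 + 1e-9 or ymin + h > 1 + 1e-9:
            raise ValueError(f"box extends past the frame: {box}")
    return boxes

boxes = validate_delogo_boxes("[[0, 0, 0.3, 0.3], [0.7, 0, 0.3, 0.3]]")
```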
VideoDetext
- LimitRegion: list. Specifies the region(s) for caption detection. The coordinates are normalized values relative to the top-left corner, specified as [xmin, ymin, width, height]. You can specify multiple detection regions. Example: [[0, 0, 0.3, 0.3], [0.7, 0, 0.3, 0.3]]. Note: If this parameter is not set, the default detection region is the bottom 30% of the video.
- Time: list. The time range for caption removal, specified in seconds as [start_time, end_time]. For example, [5, 20] removes captions between the 5-second and 20-second marks of the video. The Time parameter can be a one-dimensional array, such as [5, 20], to specify a single time range, or a two-dimensional array, such as [[5, 20], [25, 43], [51, 80]], to specify multiple time ranges. Multiple time ranges are supported only when ModelId is set to algo-video-detext-new.
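Since Time accepts either a single [start, end] pair or a list of such pairs, a caller can normalize both shapes to the two-dimensional form before building JobParams. This is an illustrative helper, not part of the API:

```python
def normalize_detext_time(time_param):
    """Normalize VideoDetext's Time parameter to a list of [start, end]
    pairs, accepting either [5, 20] or [[5, 20], [25, 43]]."""
    if not time_param:
        return []
    # A one-dimensional [start, end] pair contains numbers, not lists.
    if not isinstance(time_param[0], (list, tuple)):
        time_param = [time_param]
    for start, end in time_param:
        if start < 0 or end <= start:
            raise ValueError(f"invalid time range: [{start}, {end}]")
    return [list(r) for r in time_param]
```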
CaptionExtraction
- fps: integer (optional). The sampling frame rate. Valid values: 2 to 10. Default: 5.
- roi: list. The region of interest (ROI) for caption extraction. Only captions within this region are extracted. The format is [[top, bottom], [left, right]], using normalized values. For example, [[0.5, 1], [0, 1]] specifies the bottom half of the video. If this parameter is not provided, the default region is the bottom 1/4 of the video.
- lang: string. The recognition language. Valid values: ch (Chinese), en (English), and ch_ml (Chinese-English mixed). Default: ch.
- track: string. If set to main, only the main caption track is extracted. If this parameter is not set, the system extracts all captions that appear in the specified region.
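Note that this roi format is row-range-then-column-range, not a bounding box. Converting a pixel rectangle into it can be sketched as follows (an illustrative helper; the frame dimensions are assumptions):

```python
def pixel_rect_to_roi(x, y, w, h, frame_w, frame_h):
    """Convert a pixel rectangle (x, y, w, h) into CaptionExtraction's
    normalized [[top, bottom], [left, right]] region of interest."""
    top, bottom = y / frame_h, (y + h) / frame_h
    left, right = x / frame_w, (x + w) / frame_w
    return [[round(top, 4), round(bottom, 4)],
            [round(left, 4), round(right, 4)]]

# Bottom half of a hypothetical 1920x1080 video.
roi = pixel_rect_to_roi(0, 540, 1920, 540, frame_w=1920, frame_h=1080)
```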
HybridCaptionExtraction
- fps: integer (optional). The sampling frame rate. Valid values: 2 to 10. Default: 5.
- roi: list. The bounding box of the target caption, in the format [bx, by, bw, bh]. If this parameter is not provided, the default region is the bottom 1/4 of the video.
  - bx: The normalized x-coordinate of the top-left corner of the bounding box, relative to the video width. Example: 0.1.
  - by: The normalized y-coordinate of the top-left corner of the bounding box, relative to the video height. Example: 0.0.
  - bw: The normalized width of the bounding box, relative to the video width. Example: 0.3.
  - bh: The normalized height of the bounding box, relative to the video height. Example: 0.2.
- lang: string. The recognition language. Valid values: zh (Chinese) and en (English). Default: zh.
- track: string. If set to main, only the main caption track is extracted. If this parameter is not set, the system extracts all captions that appear in the specified region.
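Unlike CaptionExtraction's [[top, bottom], [left, right]] form, this roi is a single [bx, by, bw, bh] bounding box. A pixel rectangle can be converted like this (illustrative helper; frame dimensions are assumptions):

```python
def pixel_rect_to_bbox(x, y, w, h, frame_w, frame_h):
    """Convert a pixel rectangle into HybridCaptionExtraction's
    normalized [bx, by, bw, bh] bounding box."""
    return [round(x / frame_w, 4), round(y / frame_h, 4),
            round(w / frame_w, 4), round(h / frame_h, 4)]

# A caption strip near the top of a hypothetical 1920x1080 video.
bbox = pixel_rect_to_bbox(192, 0, 576, 216, frame_w=1920, frame_h=1080)
```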
VideoGreenScreenMatting
- bgimage: string. The background image to replace the green screen. Example: http://example-image-****.example-location.aliyuncs.com/example/example.jpg. If you do not set this parameter, the service outputs a WebM video with an alpha channel.
FaceBeauty
- beauty_params: string. The beautification parameters. Example: "whiten=20,smooth=50,face_thin=50". For more information, see Parameter field descriptions.
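The key=value,key=value string can be assembled from keyword arguments. Only whiten, smooth, and face_thin appear in the example above; any other key, and the assumed 0 to 100 value range, are untested assumptions here.

```python
def build_beauty_params(**params):
    """Serialize FaceBeauty parameters into the 'key=value,key=value'
    string form, e.g. whiten=20,smooth=50,face_thin=50."""
    for name, value in params.items():
        # Assumed 0-100 scale, inferred from the documented example values.
        if not 0 <= value <= 100:
            raise ValueError(f"{name}={value} out of assumed range 0-100")
    return ",".join(f"{k}={v}" for k, v in params.items())

s = build_beauty_params(whiten=20, smooth=50, face_thin=50)
```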
VideoH2V
None
MusicSegmentDetect
None
AudioBeatDetection
None
AudioQualityAssessment
None
SpeechDenoise
Input audio requirements: The audio file must be in WAV format with a sample rate of 16 kHz or 48 kHz.
AudioMixing
- inputs: list. A list of URLs for the audio tracks to mix. Currently, only one audio track is supported. Example: {"file":"http://example-bucket-****.oss-cn-shanghai.aliyuncs.com/2.mp4"}
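Since JobParams is submitted as a JSON string, the inputs list must be serialized rather than passed as a nested object. A minimal sketch, with a hypothetical bucket URL:

```python
import json

def build_audio_mixing_job_params(mix_file_url):
    """Build the JobParams JSON string for an AudioMixing job. Only one
    additional track is currently supported, so inputs has one entry."""
    return json.dumps({"inputs": [{"file": mix_file_url}]})

jp = build_audio_mixing_job_params(
    "http://example-bucket.oss-cn-shanghai.aliyuncs.com/2.mp4")
```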
MusicDemix
None
Response elements

| Element | Type | Description | Example |
| --- | --- | --- | --- |
| (root) | object | The response object. | |
| RequestId | string | The ID of the request. | C1849434-FC47-5DC1-92B6-F7EAAFE3851E |
| JobId | string | The job ID. | ****20b48fb04483915d4f2cd8ac**** |
Examples

Success response

JSON format:

```json
{
  "RequestId": "C1849434-FC47-5DC1-92B6-F7EAAFE3851E",
  "JobId": "****20b48fb04483915d4f2cd8ac****"
}
```
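A caller typically keeps the JobId from the response to poll the job status later with QueryIProductionJob (the operation mentioned above for AudioQualityAssessment results), and keeps the RequestId for troubleshooting. A minimal parse of the body:

```python
import json

# Raw response body, copied from the success example in this topic.
raw = ('{"RequestId": "C1849434-FC47-5DC1-92B6-F7EAAFE3851E", '
       '"JobId": "****20b48fb04483915d4f2cd8ac****"}')

resp = json.loads(raw)
job_id = resp["JobId"]          # keep this to poll QueryIProductionJob
request_id = resp["RequestId"]  # keep this for support troubleshooting
```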
Error codes
See Error Codes for a complete list.
Release notes
See Release Notes for a complete list.