
Platform for AI: LVM-Caption-Video Mapper (DLC)

Last Updated: Nov 08, 2024

The LVM-Caption-Video Mapper (DLC) component of Platform for AI (PAI) generates text descriptions (captions) for videos. Only MP4 videos can be processed.

Supported computing resources

Deep Learning Containers (DLC)

Algorithm

The LVM-Caption-Video Mapper (DLC) component uses the VideoBLIP model to generate a text description for a video based on frames that are sampled evenly from the video.
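
The even-sampling step can be sketched as follows. This is an illustrative implementation, not the component's actual code; the function name and the choice of taking the midpoint of each segment are assumptions:

```python
def sample_frame_indices(total_frames: int, num_samples: int) -> list[int]:
    """Pick num_samples frame indices spread evenly across a video.

    The video is split into num_samples equal segments and the middle
    frame of each segment is taken, so the whole duration is covered.
    """
    if num_samples <= 0 or total_frames <= 0:
        return []
    num_samples = min(num_samples, total_frames)
    segment = total_frames / num_samples
    return [int(segment * i + segment / 2) for i in range(num_samples)]

# For a 300-frame video with the default of 3 sampled frames:
print(sample_frame_indices(300, 3))  # -> [50, 150, 250]
```

The sampled frames are then passed to the captioning model together.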

Inputs and outputs

Input ports

  • The Read File Data component is used to read the Object Storage Service (OSS) path in which the training data is stored.

  • You can configure the OSS Data Path parameter to select the OSS directory in which the video data is stored or select the video metadata file. For more information, see the parameter description in the following section.

  • You can use any component of LVM Data Processing (DLC) as the input.

Output port

The OSS path in which the generated results are stored. For more information, see the parameter description in the following section.
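
The parameter descriptions below mention a video metadata file named video_meta.jsonl that the component generates for the input videos. The component creates this file itself, but a minimal sketch of building a comparable file locally can help when inspecting one. The single "videos" field used here is an assumption about the schema, not the documented format:

```python
import json
from pathlib import Path

def build_video_meta(video_dir: str, meta_path: str) -> int:
    """Write one JSON line per MP4 file found under video_dir.

    Schema assumption: each line is {"videos": ["<path>"]}.
    Returns the number of videos recorded.
    """
    videos = sorted(Path(video_dir).rglob("*.mp4"))
    with open(meta_path, "w", encoding="utf-8") as f:
        for v in videos:
            f.write(json.dumps({"videos": [str(v)]}) + "\n")
    return len(videos)
```

In later runs, the component can be pointed at the metadata file instead of rescanning the video directory.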

Configure the component

You can configure the parameters of the LVM-Caption-Video Mapper (DLC) component in Machine Learning Designer. The following sections describe the parameters on each tab.

Field Settings

  • Video Data OSS Path (Required: No. Default value: none.)

    If no upstream component exists the first time you run this component, you must manually select the OSS directory in which the video data is stored. When the component runs, the video metadata file video_meta.jsonl is generated in the parent directory of the directory specified by this parameter. When you use the component to process the video data later, you can directly select the video_meta.jsonl file.

  • Output File OSS Path (Required: Yes. Default value: none.)

    The OSS directory in which the results are stored. The results include the following files:

      • {name}.jsonl: the output file, where {name} is the value of the Output Filename parameter.

      • dj_run_yaml.yaml: the parameter configuration file that is used when the algorithm runs.

  • Output Filename (Required: Yes. Default value: result.jsonl.)

    The file name of the results.

Parameter Settings

  • Number of Candidate Captions (Required: Yes. Default value: 1.)

    The number of candidate text descriptions that are generated for each video.

  • Number of Sampled Frames (Required: Yes. Default value: 3.)

    The number of frames that are sampled from each video. The system collects frames evenly across the video based on the video duration.

Execution Tuning

  • Select Resource Group (Required: No. Default value: none.)

      • Public Resource Group: the instance type (CPU or GPU) and the virtual private cloud (VPC) that you want to use. You must select a GPU instance type for this algorithm.

      • Dedicated resource group: the number of vCPUs, the memory size, the shared memory size, and the number of GPUs that you want to use.

  • Maximum Running Duration (seconds) (Required: No. Default value: none.)

    The maximum period of time for which the component is allowed to run. If this duration is exceeded, the job is terminated.
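
The generated {name}.jsonl output file can be inspected line by line with standard tooling. A minimal reader sketch, assuming only that each line of the file is one JSON object (the exact field names inside each record are not documented here):

```python
import json

def read_jsonl(path: str) -> list[dict]:
    """Parse a JSON Lines file into a list of records, skipping blank lines."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records
```

For example, `read_jsonl("result.jsonl")` returns one dictionary per processed video, which you can then filter or feed into downstream components.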