Intelligent Media Services: Produce a video of face highlights

Last Updated: Jun 13, 2025

This tutorial describes how to efficiently create and edit face highlight videos by using the face search and video editing features of Intelligent Media Services (IMS). To help you get started and create high-quality face highlights, this tutorial covers the basic operations of intelligent media asset retrieval, the API calling procedure of face search, timeline configurations, and the use of advanced templates.

Background information

With the rapid growth of video content, quickly identifying and producing video clips of key people has become an important task. As an effective form of video presentation, face highlights display a specific person's activities across different scenes and are widely used in fields such as film and television production, news reporting, and social entertainment. This tutorial shows you how to efficiently produce a face highlight video by using IMS.

Sample scenarios

Scenario 1: Athlete highlights in a sports event, such as a marathon

In sports events such as marathons, organizers or news agencies need to produce highlight videos of athletes, such as sprinting moments and overtaking maneuvers. These highlights are valuable for event promotion and review and for the athletes' personal publicity.

Scenario 2: Visitor highlights in amusement parks

To improve the visitor experience, amusement parks often create personalized video clips that record visitors' highlights on various attractions, such as screaming on roller coasters and laughing on merry-go-rounds. These highlights can serve as souvenirs for visitors or as promotional material for the parks on social media.

Scenario 3: Behind-the-scenes footage of ceremonies, such as weddings

In the wedding service industry, wedding videos preserve important memories for newlyweds. Both behind-the-scenes footage from the wedding day and videos of the couple's trips need to be carefully edited and remixed to tell the couple's love story and capture the romantic moments of the wedding.

Scenario 4: Collections of fan faces at concerts

At large-scale concerts, organizers often produce a collection of fan faces to show the enthusiasm and engagement of the audience. These face highlights can be displayed on big screens to energize the crowd and can also be used as promotional material on social media.

Face highlights are also widely used in other fields, such as personal memoirs, family memorial videos, tourism promotional videos, corporate annual meetings, and event reviews and interactions.

Procedure

Prerequisites

The IMS server-side SDK is installed. For more information, see Preparations. If the SDK is already installed, skip this step.

Step 1: Query information about video clips by using face search

You can use the face search feature of IMS to retrieve video materials and obtain information about the media segments that contain the specified face. For more information about this step, see Search for faces in many media assets.
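
The following is a minimal sketch of what this call can look like in the server-side Java SDK, assuming a Client initialized as shown in the Method 1 sample in Step 2. The commented-out request field is a hypothetical placeholder; the actual request parameters, such as the registered face entity, are described in Search for faces in many media assets.

    /*** A minimal sketch: retrieve the clips in which the specified face appears ***/
    public static SearchMediaClipByFaceResponse searchFaceClips(Client iceClient) throws Exception {
        SearchMediaClipByFaceRequest request = new SearchMediaClipByFaceRequest();
        // Hypothetical placeholder: set the request parameters required by your face
        // search setup (for example, the registered face entity) as described in the linked topic.
        // request.setEntityId("<your-face-entity-id>");
        SearchMediaClipByFaceResponse response = iceClient.searchMediaClipByFace(request);

        // Each occurrence carries the start and end time of one face appearance in the media asset.
        response.getBody().getMediaClipList().get(0).getOccurrencesInfos().forEach(occurrence ->
                System.out.println(occurrence.getStartTime() + " - " + occurrence.getEndTime()));
        return response;
    }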

Step 2: Edit the video clips that contain the specified face

This step describes three methods for producing face highlight videos, covering their features, procedures, SDK sample code, and sample videos.

Method 1: Use script-based automatic video production

Features
  • Convenient and efficient: Script-based automatic generation streamlines the production of face highlight videos. With simple configurations, users can efficiently generate multiple similar videos.

  • Quick to learn and low barrier to entry: Even without video editing experience, users can easily produce impressive videos with the powerful one-click production feature.

Implementation through SDK


package com.example;

import java.util.*;

import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;

import com.aliyun.ice20201109.Client;
import com.aliyun.ice20201109.models.*;
import com.aliyun.teaopenapi.models.Config;


/**
 *  Add the following Maven dependencies:
 *   <dependency>
 *      <groupId>com.aliyun</groupId>
 *      <artifactId>ice20201109</artifactId>
 *      <version>2.3.0</version>
 *  </dependency>
 *  <dependency>
 *      <groupId>com.alibaba</groupId>
 *      <artifactId>fastjson</artifactId>
 *      <version>1.2.9</version>
 *  </dependency>
 */
public class SubmitFaceEditingJobService {

    static final String regionId = "cn-shanghai";
    static final String bucket = "<your-bucket>";

    /*** Submit a one-click face highlight production task based on face search results ****/
    public static void submitBatchEditingJob(String mediaId, SearchMediaClipByFaceResponse response) throws Exception {

        Client iceClient = initClient();

        JSONArray intervals = buildIntervals(response);
        JSONObject editingTimeline = buildEditingTimeline(mediaId, intervals);

        String openingMediaId = "icepublic-9a2df29956582a68a59e244a5915228c";
        String endingMediaId = "icepublic-1790626066bee650ac93bd12622a921c";
        String mainGroupName = "main";
        String openingGroupName = "opening";
        String endingGroupName = "ending";

        JSONObject inputConfig = buildInputConfig(mediaId, mainGroupName, openingMediaId, openingGroupName, endingMediaId, endingGroupName);
        JSONObject editingConfig = buildEditingConfig(mediaId, mainGroupName, intervals, openingMediaId, openingGroupName, endingMediaId, endingGroupName);
        JSONObject outputConfig = new JSONObject();
        // Write the output videos to your own OSS bucket. The {index} placeholder distinguishes the generated videos.
        outputConfig.put("MediaURL", "https://" + bucket + ".oss-" + regionId + ".aliyuncs.com/testBatch/" + System.currentTimeMillis() + "{index}.mp4");
        outputConfig.put("Width", 1280);
        outputConfig.put("Height", 720);
        // Target duration of each output video, in seconds
        outputConfig.put("FixedDuration", 18);
        // Number of videos to generate
        outputConfig.put("Count", 2);

        SubmitBatchMediaProducingJobRequest request = new SubmitBatchMediaProducingJobRequest();
        request.setInputConfig(inputConfig.toJSONString());
        request.setEditingConfig(editingConfig.toJSONString());
        request.setOutputConfig(outputConfig.toJSONString());

        SubmitBatchMediaProducingJobResponse jobResponse = iceClient.submitBatchMediaProducingJob(request);
        System.out.println("JobId: " + jobResponse.getBody().getJobId());
    }

    public static Client initClient() throws Exception {
        // The AccessKey pair of an Alibaba Cloud account has access permissions on all API operations. We recommend that you use the AccessKey pair of a Resource Access Management (RAM) user to call API operations or perform routine O&M.
        // In this example, the AccessKey ID and AccessKey secret are obtained from the environment variables. For information about how to configure environment variables to store the AccessKey ID and AccessKey secret, visit https://www.alibabacloud.com/help/zh/sdk/developer-reference/v2-manage-access-credentials
        com.aliyun.credentials.Client credentialClient = new com.aliyun.credentials.Client();

        Config config = new Config();
        config.setCredential(credentialClient);

        // To hard-code your AccessKey ID and AccessKey secret, use the following lines. However, we recommend that you do not hard-code your AccessKey ID and AccessKey secret for security concerns.
        // config.accessKeyId = <AccessKey ID created in step 2>;
        // config.accessKeySecret = <AccessKey Secret created in step 2>;
        config.endpoint = "ice." + regionId + ".aliyuncs.com";
        config.regionId = regionId;
        return new Client(config);
    }

    public static JSONArray buildIntervals(SearchMediaClipByFaceResponse response) {
        JSONArray intervals = new JSONArray();
        List<SearchMediaClipByFaceResponseBodyMediaClipListOccurrencesInfos> occurrencesInfos =
                response.getBody().getMediaClipList().get(0).getOccurrencesInfos();
        for (SearchMediaClipByFaceResponseBodyMediaClipListOccurrencesInfos occurrence: occurrencesInfos) {
            Float startTime = occurrence.getStartTime();
            Float endTime = occurrence.getEndTime();

            // You can adjust the filtering logic
            // Filter out clips shorter than 2s
            if (endTime - startTime < 2) {
                continue;
            }
            // Truncate clips longer than 6s
            if (endTime - startTime > 6) {
                endTime = startTime + 6;
            }

            JSONObject interval = new JSONObject();
            interval.put("In", startTime);
            interval.put("Out", endTime);
            intervals.add(interval);
        }

        return intervals;
    }
    
    public static JSONObject buildSingleInterval(Float in, Float out) {
        JSONObject interval = new JSONObject();
        interval.put("In", in);
        interval.put("Out", out);
        return interval;
    }

    public static JSONObject buildMediaMetaData(String mediaId, String groupName, JSONArray intervals) {
        JSONObject mediaMetaData = new JSONObject();
        mediaMetaData.put("Media", mediaId);
        mediaMetaData.put("GroupName", groupName);
        mediaMetaData.put("TimeRangeList", intervals);
        return mediaMetaData;
    }

    public static JSONObject buildInputConfig(String mediaId, String mainGroupName, String openingMediaId, String openingGroupName, String endingMediaId, String endingGroupName) {
        JSONObject inputConfig = new JSONObject();
        JSONArray mediaGroupArray = new JSONArray();
        if (openingMediaId != null) {
            // Configure opening credits as needed
            JSONObject openingGroup = new JSONObject();
            openingGroup.put("GroupName", openingGroupName);
            openingGroup.put("MediaArray", Arrays.asList(openingMediaId));
            mediaGroupArray.add(openingGroup);
        }

        JSONObject mediaGroupMain = new JSONObject();
        mediaGroupMain.put("GroupName", mainGroupName);
        mediaGroupMain.put("MediaArray", Arrays.asList(mediaId));
        mediaGroupArray.add(mediaGroupMain);

        if (endingMediaId != null) {
            // Configure ending credits as needed
            JSONObject endingGroup = new JSONObject();
            endingGroup.put("GroupName", endingGroupName);
            endingGroup.put("MediaArray", Arrays.asList(endingMediaId));
            mediaGroupArray.add(endingGroup);
        }

        inputConfig.put("MediaGroupArray", mediaGroupArray);

        // Custom background music
        inputConfig.put("BackgroundMusicArray", Arrays.asList("icepublic-0c4475c3936f3a8743850f2da942ceee"));

        return inputConfig;
    }

    public static JSONObject buildEditingConfig(String mediaId, String mainGroupName, JSONArray intervals, String openingMediaId, String openingGroupName, String endingMediaId, String endingGroupName) {
        JSONObject editingConfig = new JSONObject();
        JSONObject mediaConfig = new JSONObject();
        JSONArray mediaMetaDataArray = new JSONArray();
        if (openingMediaId != null) {
            // Configure in and out points for opening material as needed
            JSONObject openingInterval = buildSingleInterval(1.5f, 5.5f);
            JSONArray openingIntervals = new JSONArray();
            openingIntervals.add(openingInterval);
            JSONObject metaData = buildMediaMetaData(openingMediaId, openingGroupName, openingIntervals);
            mediaMetaDataArray.add(metaData);
        }

        // Configure in and out points for main material (face appearance segments)
        JSONObject mainMediaMetaData = buildMediaMetaData(mediaId, mainGroupName, intervals);
        mediaMetaDataArray.add(mainMediaMetaData);

        if (endingMediaId != null) {
            // Configure in and out points for ending material as needed
            JSONObject endingInterval = buildSingleInterval(1.5f, 5.5f);
            JSONArray endingIntervals = new JSONArray();
            endingIntervals.add(endingInterval);
            JSONObject metaData = buildMediaMetaData(endingMediaId, endingGroupName, endingIntervals);
            mediaMetaDataArray.add(metaData);
        }
        mediaConfig.put("MediaMetaDataArray", mediaMetaDataArray);
        editingConfig.put("MediaConfig", mediaConfig);
        return editingConfig;

    }

}

Implementation through console

If you choose to use the console for video production, you can skip "Step 1: Query information about video clips by using face search" and follow these steps instead.

Step 1: Click "Intelligent Batch One-Click Production" in the left sidebar of the console. In the "Script-Based Automatic Production" section, click "Create Script-Based Automatic Production" and select a generation mode (both voice-over mode and group mode are supported; this example uses group mode) to enter the creation page.


Step 2: Click "Add Material" in the "Script Node Configuration" section. In the "Add Material" dialog box on the right, select "Face Search" and upload a face image to search for materials. The matched materials then appear in the search results list.

Step 3: In the search results list, click "View" for a material to view its matched segments. Then, click "Import Matched Segments" and click "OK" at the bottom of the list to add the matched materials to the script node.

Step 4: (Optional) To view or adjust the segments matched by face search, hover over the material and click the settings button to open the settings dialog box. You can manually adjust the matched segments under "Video Content Trimming". If you do not need to adjust the in and out points, skip this step.

After completing these steps, follow the normal script-based production workflow (including title, background music, sticker logo, and video settings) to configure and submit the task.

Method 2: Use a timeline

Features
  • Highly customizable video content: Timeline editing gives you full control over every trim, special effect, transition, and audio effect. You can combine and adjust materials based on your own inspiration and requirements to produce truly unique videos.

  • High flexibility: Timeline editing is not limited by templates. You can adjust the editing scheme at any time based on project requirements. Timeline editing helps you meet diversified video production requirements, such as adjusting the clip sequence, adding special effects, and managing audio effects.

Procedure

Based on the face search results from Step 1, you can use the SubmitMediaProducingJob operation and Timeline configurations to edit face highlight videos. You can refer to the SDK sample code in the following section. For advanced editing requirements, such as special effects, transitions, and audio processing, we recommend that you read and refer to Video editing basic parameters and Timeline configurations to gain a more comprehensive understanding of how to use the video editing feature to produce face highlight videos.

SDK sample code

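The following is a minimal sketch of submitting a timeline-based editing job, not a definitive implementation. It assumes the SubmitFaceEditingJobService class from Method 1 (same imports, constants, and the initClient and buildIntervals helpers) and builds a timeline equivalent to the sample parameters shown after the code.

    /*** Submit a face highlight editing job based on a timeline built from the face search results ***/
    public static void submitTimelineEditingJob(String mediaId, SearchMediaClipByFaceResponse searchResponse) throws Exception {

        Client iceClient = initClient();
        JSONArray intervals = buildIntervals(searchResponse);

        // Create one video track clip per face occurrence and mute the source audio,
        // so that the background music track carries the sound.
        JSONArray videoTrackClips = new JSONArray();
        for (int i = 0; i < intervals.size(); i++) {
            JSONObject interval = intervals.getJSONObject(i);
            JSONObject clip = new JSONObject();
            clip.put("MediaId", mediaId);
            clip.put("In", interval.getFloat("In"));
            clip.put("Out", interval.getFloat("Out"));
            JSONObject mute = new JSONObject();
            mute.put("Type", "Volume");
            mute.put("Gain", 0);
            clip.put("Effects", Arrays.asList(mute));
            videoTrackClips.add(clip);
        }
        JSONObject videoTrack = new JSONObject();
        videoTrack.put("VideoTrackClips", videoTrackClips);

        // Looping background music on a separate audio track
        JSONObject audioClip = new JSONObject();
        audioClip.put("MediaId", "icepublic-0c4475c3936f3a8743850f2da942ceee");
        audioClip.put("LoopMode", true);
        JSONObject audioTrack = new JSONObject();
        audioTrack.put("AudioTrackClips", Arrays.asList(audioClip));

        JSONObject timeline = new JSONObject();
        timeline.put("VideoTracks", Arrays.asList(videoTrack));
        timeline.put("AudioTracks", Arrays.asList(audioTrack));

        SubmitMediaProducingJobRequest request = new SubmitMediaProducingJobRequest();
        request.setTimeline(timeline.toJSONString());
        request.setOutputMediaTarget("oss-object");
        JSONObject outputConfig = new JSONObject();
        outputConfig.put("MediaURL", "https://" + bucket + ".oss-" + regionId + ".aliyuncs.com/testTimeline/" + System.currentTimeMillis() + ".mp4");
        request.setOutputMediaConfig(outputConfig.toJSONString());

        SubmitMediaProducingJobResponse jobResponse = iceClient.submitMediaProducingJob(request);
        System.out.println("JobId: " + jobResponse.getBody().getJobId());
    }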

Timeline parameters

{
  "VideoTracks": [
    {
      "VideoTrackClips": [
        {
          "MediaId": "b5a003f0cd3f71ed919fe7e7c45b****",
          "In": 54.106018,
          "Effects": [
            {
              "Type": "Volume",
              "Gain": 0
            }
          ],
          "Out": 56.126015
        },
        {
          "MediaId": "b5a003f0cd3f71ed919fe7e7c45b****",
          "In": 271.47302,
          "Effects": [
            {
              "Type": "Volume",
              "Gain": 0
            }
          ],
          "Out": 277.393
        },
        {
          "MediaId": "b5a003f0cd3f71ed919fe7e7c45b****",
          "In": 326.03903,
          "Effects": [
            {
              "Type": "Volume",
              "Gain": 0
            }
          ],
          "Out": 331.959
        },
        {
          "MediaId": "b5a003f0cd3f71ed919fe7e7c45b****",
          "In": 372.20602,
          "Effects": [
            {
              "Type": "Volume",
              "Gain": 0
            }
          ],
          "Out": 375.126
        },
        {
          "MediaId": "b5a003f0cd3f71ed919fe7e7c45b****",
          "In": 383.03903,
          "Effects": [
            {
              "Type": "Volume",
              "Gain": 0
            }
          ],
          "Out": 388.959
        },
        {
          "MediaId": "b5a003f0cd3f71ed919fe7e7c45b****",
          "In": 581.339,
          "Effects": [
            {
              "Type": "Volume",
              "Gain": 0
            }
          ],
          "Out": 587.25903
        },
        {
          "MediaId": "b5a003f0cd3f71ed919fe7e7c45b****",
          "In": 602.339,
          "Effects": [
            {
              "Type": "Volume",
              "Gain": 0
            }
          ],
          "Out": 607.293
        }
      ]
    }
  ],
  "AudioTracks": [
    {
      "AudioTrackClips": [
        {
          "LoopMode": true,
          "MediaId": "icepublic-0c4475c3936f3a8743850f2da942ceee"
        }
      ]
    }
  ]
}
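
In this sample timeline, each VideoTrackClip maps to one face occurrence interval returned by face search, and the Volume effect with a Gain of 0 mutes the source audio so that the looping background music on the audio track provides the soundtrack.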

Method 3: Use an advanced template

Features
  • Efficient and high-quality: Advanced templates are ready-made. You only need to add your own materials to complete video production, which greatly saves production time. You can also design your own templates to improve video quality.

  • Consistent video style: Templates ensure the consistency and professionalism of the video style. You can produce professional videos with high click-through rates even if you do not have professional skills in video production.

Procedure

Based on the face search results from Step 1, you can use an advanced template together with the ClipsParam parameter of the SubmitMediaProducingJob - Submit a media editing and production job operation to edit face highlight videos. The following SDK sample code uses the public template IceSys_VETemplate_s100241 as an example to configure video segments based on the face search results. If you want to use your own custom advanced templates for face highlight videos, we recommend that you read User guide - How to use advanced templates.

SDK sample code


package com.example;

import java.util.*;

import com.alibaba.fastjson.JSONArray;
import com.alibaba.fastjson.JSONObject;

import com.aliyun.ice20201109.Client;
import com.aliyun.ice20201109.models.*;
import com.aliyun.teaopenapi.models.Config;


/**
 *  Add the following Maven dependencies:
 *   <dependency>
 *      <groupId>com.aliyun</groupId>
 *      <artifactId>ice20201109</artifactId>
 *      <version>2.3.0</version>
 *  </dependency>
 *  <dependency>
 *      <groupId>com.alibaba</groupId>
 *      <artifactId>fastjson</artifactId>
 *      <version>1.2.9</version>
 *  </dependency>
 */
public class SubmitFaceEditingJobService {

    static final String regionId = "cn-shanghai";
    static final String bucket = "<your-bucket>";

    /*** Submit the face highlight editing task based on the face search result ****/
    public static void submitEditingJob(String mediaId, SearchMediaClipByFaceResponse response) throws Exception {

        Client iceClient = initClient();
        JSONArray intervals = buildIntervals(response);

        // Submit an advanced template task. In this example, the IceSys_VETemplate_s100241 template is used to edit the video clips obtained by using face search.
        JSONObject clipParams = buildClipParams(mediaId, intervals);

        SubmitMediaProducingJobRequest request2 = new SubmitMediaProducingJobRequest();
        request2.setTemplateId("IceSys_VETemplate_s100241");
        request2.setClipsParam(clipParams.toJSONString());
        request2.setOutputMediaTarget("oss-object");
        JSONObject outputConfig = new JSONObject();
        outputConfig.put("MediaURL",
                         "https://" + bucket + ".oss-" + regionId + ".aliyuncs.com/testTemplate/" + System.currentTimeMillis() + ".mp4");
        request2.setOutputMediaConfig(outputConfig.toJSONString());

        SubmitMediaProducingJobResponse response2 = iceClient.submitMediaProducingJob(request2);
        System.out.println("JobId: " + response2.getBody().getJobId());
    }

    public static Client initClient() throws Exception {
        // The AccessKey pair of an Alibaba Cloud account has access permissions on all API operations. We recommend that you use the AccessKey pair of a RAM user to call API operations or perform routine O&M.
        // In this example, the AccessKey ID and AccessKey secret are obtained from the environment variables. For information about how to configure environment variables, see: https://www.alibabacloud.com/help/zh/sdk/developer-reference/v2-manage-access-credentials
        com.aliyun.credentials.Client credentialClient = new com.aliyun.credentials.Client();

        Config config = new Config();
        config.setCredential(credentialClient);

        // To hard-code your AccessKey ID and AccessKey secret, use the following code. However, we recommend that you do not save the AccessKey ID and the AccessKey secret in your project code. Otherwise, the AccessKey pair may be leaked and the security of resources within your account may be compromised.
        // config.accessKeyId = <AccessKey ID created in step 2>;
        // config.accessKeySecret = <AccessKey Secret created in step 2>;
        config.endpoint = "ice." + regionId + ".aliyuncs.com";
        config.regionId = regionId;
        return new Client(config);
    }

    public static JSONArray buildIntervals(SearchMediaClipByFaceResponse response) {
        JSONArray intervals = new JSONArray();
        List<SearchMediaClipByFaceResponseBodyMediaClipListOccurrencesInfos> occurrencesInfos =
                response.getBody().getMediaClipList().get(0).getOccurrencesInfos();
        for (SearchMediaClipByFaceResponseBodyMediaClipListOccurrencesInfos occurrence: occurrencesInfos) {
            Float startTime = occurrence.getStartTime();
            Float endTime = occurrence.getEndTime();

            // You can adjust the filtering logic
            // Filter out clips shorter than 2 seconds
            if (endTime - startTime < 2) {
                continue;
            }
            // Truncate clips longer than 6 seconds
            if (endTime - startTime > 6) {
                endTime = startTime + 6;
            }

            JSONObject interval = new JSONObject();
            interval.put("In", startTime);
            interval.put("Out", endTime);
            intervals.add(interval);
        }

        return intervals;
    }

    public static JSONObject buildClipParams(String mediaId, JSONArray intervals) {
        JSONObject clipParams = new JSONObject();
        for (int i = 0; i < intervals.size(); i++) {
            JSONObject interval = intervals.getJSONObject(i);
            Float in = interval.getFloat("In");
            // Only the in point (clip_start) is set for each slot; in this example, the
            // template's slot settings determine the duration of each clip.
            clipParams.put("Media" + i, mediaId);
            clipParams.put("Media" + i + ".clip_start", in);
        }
        return clipParams;
    }


}

ClipsParam parameters

{
	"Media0": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media0.clip_start": 54.066017,
	"Media1": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media1.clip_start": 67.33301,
	"Media2": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media2.clip_start": 271.47302,
	"Media3": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media3.clip_start": 326.03903,
	"Media4": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media4.clip_start": 372.20602,
	"Media5": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media5.clip_start": 383.03903,
	"Media6": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media6.clip_start": 581.339,
	"Media7": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media7.clip_start": 587.25903,
	"Media8": "b5a003f0cd3f71ed919fe7e7c45b****",
	"Media8.clip_start": 602.339
}
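
In this sample, each template slot is filled by a pair of keys: Media0 specifies the media asset, and Media0.clip_start specifies the in point within that asset. Only in points are passed; the duration of each clip is presumably determined by the template's slot settings.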

Sample videos

Video created using Timeline editing

This is a face highlight video of a player in a white jersey with number 5.

Video created using advanced templates

The transitions and speed effects in the final video are all template effects.

Video created using script-based automatic production - SDK method

This is a face highlight video of a player in a white jersey with number 5.

Video created using script-based automatic production - console method

This is a face highlight video of a player in a white jersey with number 17.
