
Intelligent Media Management: Face detection

Last Updated: Nov 14, 2024

Enabled by computer vision, face detection is used to identify and locate human faces in images or videos. Face detection can be used in a variety of scenarios, such as authentication, system monitoring, intelligent album management, and customer behavior analysis. This topic describes how to use the face detection feature to identify faces and their associated attributes in an image.

Overview

The face detection feature is based on image AI technology. It detects one or more faces in an image and returns information about each detected face, including the face ID, age, gender, mood, attractiveness, face quality, and face attributes. The face attributes include the face position, head orientation, glasses, beard, and mask.

Scenarios

  • User authentication: Face detection can work with face similarity comparison to implement user authentication. A common example is face unlocking on mobile phones. For more information about face similarity comparison, see Face similarity comparison.

  • Facial expression analysis: Facial expressions are analyzed by using face detection and facial expression recognition technologies. Facial expression analysis applies to scenarios such as sentiment analysis, augmented reality (AR), and virtual characters.

Note
  • Background clutter: Complex backgrounds may be confused with facial characteristics, affecting detection results.

  • Face occlusion: If an image contains multiple faces, some faces may occlude others, which may degrade the performance of face detection algorithms.

Prerequisites

  • An AccessKey pair is created and obtained. For more information, see Create an AccessKey pair.

  • OSS is activated, a bucket is created, and objects are uploaded to the bucket. For more information, see Upload objects.

  • IMM is activated. For more information, see Activate IMM.

  • A project is created in the IMM console. For more information, see Create a project.

    Note
    • You can call the CreateProject operation to create a project, as shown in the sketch after this list. For more information, see CreateProject.

    • You can call the ListProjects operation to query the existing projects in a specific region. For more information, see ListProjects.
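
The following snippet is a minimal sketch of how you might create a project by calling the CreateProject operation through IMM SDK for Python. It assumes that the SDK exposes CreateProjectRequest and create_project_with_options in the same pattern as the DetectImageFaces sample at the end of this topic; the endpoint and project name are example values that you replace with your own.

# -*- coding: utf-8 -*-
# Minimal sketch: create an IMM project with the SDK for Python.
# The endpoint region and the project name are example values.
import os

from alibabacloud_imm20200930.client import Client as imm20200930Client
from alibabacloud_imm20200930 import models as imm_20200930_models
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_tea_util import models as util_models

# Read the AccessKey pair from environment variables, as in the sample code below.
config = open_api_models.Config(
    access_key_id=os.getenv("AccessKeyId"),
    access_key_secret=os.getenv("AccessKeySecret")
)
config.endpoint = 'imm.cn-beijing.aliyuncs.com'
client = imm20200930Client(config)

# Create the project that the DetectImageFaces requests in this topic reference by name.
create_project_request = imm_20200930_models.CreateProjectRequest(
    project_name='test-project'
)
client.create_project_with_options(create_project_request, util_models.RuntimeOptions())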

Usage

Call the DetectImageFaces operation to detect faces in an image and obtain face information, such as the age and gender.

Example

  • IMM project: test-project

  • Image location: oss://test-bucket/test-object.jpg

  • Image:

    test-object

Sample request

{
    "ProjectName": "test-project",
    "SourceURI": "oss://test-bucket/test-object.jpg",
}

Sample response

{
  "RequestId": "47449201-245D-58A7-B56B-BDA483874B20",
  "Faces": [
    {
      "Beard": "none",
      "MaskConfidence": 0.724,
      "Gender": "male",
      "Boundary": {
        "Left": 138,
        "Top": 102,
        "Height": 19,
        "Width": 17
      },
      "BeardConfidence": 0.801,
      "FigureId": "b6525b63-cb12-4fab-a9f4-9c7de08b80c3",
      "Mouth": "close",
      "Emotion": "none",
      "Age": 36,
      "MouthConfidence": 0.984,
      "FigureType": "face",
      "GenderConfidence": 0.999,
      "HeadPose": {
        "Pitch": -9.386,
        "Roll": -3.478,
        "Yaw": 14.624
      },
      "Mask": "none",
      "EmotionConfidence": 0.998,
      "HatConfidence": 0.794,
      "GlassesConfidence": 0.999,
      "Sharpness": 0.025,
      "FigureClusterId": "figure-cluster-id-unavailable",
      "FaceQuality": 0.3,
      "Attractive": 0.002,
      "AgeSD": 8,
      "Glasses": "none",
      "FigureConfidence": 0.998,
      "Hat": "none"
    },
    {
      "Beard": "none",
      "MaskConfidence": 0.649,
      "Gender": "male",
      "Boundary": {
        "Left": 85,
        "Top": 108,
        "Height": 18,
        "Width": 14
      },
      "BeardConfidence": 0.975,
      "FigureId": "798ab164-ae05-4a9f-b8c9-4b69ca183c3f",
      "Mouth": "close",
      "Emotion": "none",
      "Age": 34,
      "MouthConfidence": 0.97,
      "FigureType": "face",
      "GenderConfidence": 0.917,
      "HeadPose": {
        "Pitch": -0.946,
        "Roll": -1.785,
        "Yaw": -39.264
      },
      "Mask": "mask",
      "EmotionConfidence": 0.966,
      "HatConfidence": 0.983,
      "GlassesConfidence": 1,
      "Sharpness": 0.095,
      "FigureClusterId": "figure-cluster-id-unavailable",
      "FaceQuality": 0.3,
      "Attractive": 0.022,
      "AgeSD": 9,
      "Glasses": "none",
      "FigureConfidence": 0.998,
      "Hat": "none"
    },
    {
      "Beard": "none",
      "MaskConfidence": 0.534,
      "Gender": "female",
      "Boundary": {
        "Left": 245,
        "Top": 128,
        "Height": 16,
        "Width": 13
      },
      "BeardConfidence": 0.998,
      "FigureId": "b9fb1552-cc98-454a-ac7c-18e5c55cc5bf",
      "Mouth": "close",
      "Emotion": "none",
      "Age": 6,
      "MouthConfidence": 0.999,
      "FigureType": "face",
      "GenderConfidence": 0.972,
      "HeadPose": {
        "Pitch": 21.686,
        "Roll": 16.806,
        "Yaw": 50.348
      },
      "Mask": "mask",
      "EmotionConfidence": 0.991,
      "HatConfidence": 0.999,
      "GlassesConfidence": 1,
      "Sharpness": 0.389,
      "FigureClusterId": "figure-cluster-id-unavailable",
      "FaceQuality": 0.3,
      "Attractive": 0.046,
      "AgeSD": 6,
      "Glasses": "none",
      "FigureConfidence": 0.991,
      "Hat": "none"
    },
    {
      "Beard": "none",
      "MaskConfidence": 0.654,
      "Gender": "male",
      "Boundary": {
        "Left": 210,
        "Top": 130,
        "Height": 18,
        "Width": 15
      },
      "BeardConfidence": 0.738,
      "FigureId": "a00154ad-6e5a-48a8-b79e-4cd3699e3281",
      "Mouth": "close",
      "Emotion": "none",
      "Age": 24,
      "MouthConfidence": 0.999,
      "FigureType": "face",
      "GenderConfidence": 0.999,
      "HeadPose": {
        "Pitch": -3.356,
        "Roll": 1.734,
        "Yaw": 12.431
      },
      "Mask": "none",
      "EmotionConfidence": 0.993,
      "HatConfidence": 1,
      "GlassesConfidence": 0.984,
      "Sharpness": 0.449,
      "FigureClusterId": "figure-cluster-id-unavailable",
      "FaceQuality": 0.3,
      "Attractive": 0.005,
      "AgeSD": 15,
      "Glasses": "none",
      "FigureConfidence": 0.985,
      "Hat": "none"
    }
  ]
}
Note

The sample response shows that the image contains four faces and provides information about each face, including the gender, age, and mood.
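
The following lines are a minimal sketch of how you might read these fields after parsing the response body into a Python dictionary. The variable sample_response_json is a hypothetical string that holds the JSON text shown above.

# Minimal sketch: summarize each detected face from the parsed response.
# sample_response_json is a hypothetical variable that holds the JSON text of
# the sample response above.
import json

response = json.loads(sample_response_json)
for face in response["Faces"]:
    boundary = face["Boundary"]
    print(
        f'{face["Gender"]}, age {face["Age"]}, emotion {face["Emotion"]}, '
        f'at ({boundary["Left"]}, {boundary["Top"]}), '
        f'{boundary["Width"]}x{boundary["Height"]}'
    )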

Sample code

The following sample code provides an example on how to use IMM SDK for Python to detect faces.

# -*- coding: utf-8 -*-
# This file is auto-generated, don't edit it. Thanks.
import sys
import os
from typing import List

from alibabacloud_imm20200930.client import Client as imm20200930Client
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_imm20200930 import models as imm_20200930_models
from alibabacloud_tea_util import models as util_models
from alibabacloud_tea_util.client import Client as UtilClient


class Sample:
    def __init__(self):
        pass

    @staticmethod
    def create_client(
        access_key_id: str,
        access_key_secret: str,
    ) -> imm20200930Client:
        """
        Use your AccessKey ID and AccessKey secret to initialize the client. 
        @param access_key_id:
        @param access_key_secret:
        @return: Client
        @throws Exception
        """
        config = open_api_models.Config(
            access_key_id=access_key_id,
            access_key_secret=access_key_secret
        )
        # Specify the IMM endpoint. 
        config.endpoint = f'imm.cn-beijing.aliyuncs.com'
        return imm20200930Client(config)

    @staticmethod
    def main(
        args: List[str],
    ) -> None:
        # The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. To prevent security risks, we recommend that you call API operations or perform routine O&M as a Resource Access Management (RAM) user. 
        # We recommend that you do not include your AccessKey pair (AccessKey ID and AccessKey secret) in your project code. Otherwise, the AccessKey pair may be leaked and the security of all resources within your account may be compromised. 
        # In this example, the AccessKey pair is read from the environment variables to implement identity verification for API access. For more information about how to configure environment variables, see https://help.aliyun.com/document_detail/2361894.html. 
        imm_access_key_id = os.getenv("AccessKeyId")
        imm_access_key_secret = os.getenv("AccessKeySecret")
        client = Sample.create_client(imm_access_key_id, imm_access_key_secret)
        detect_image_faces_request = imm_20200930_models.DetectImageFacesRequest(
            project_name='test-project',
            source_uri='oss://test-bucket/test-object.jpg'
        )
        runtime = util_models.RuntimeOptions()
        try:
            # You can choose to print the response of the API operation. 
            client.detect_image_faces_with_options(detect_image_faces_request, runtime)
        except Exception as error:
            # Print the error message if necessary. 
            UtilClient.assert_as_string(error.message)

    @staticmethod
    async def main_async(
        args: List[str],
    ) -> None:
        # The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. To prevent security risks, we recommend that you call API operations or perform routine O&M as a RAM user. 
        # We recommend that you do not include your AccessKey pair (AccessKey ID and AccessKey secret) in your project code. Otherwise, the AccessKey pair may be leaked and the security of all resources within your account may be compromised. 
        # In this example, the AccessKey pair is read from the environment variables to implement identity verification for API access. For more information about how to configure environment variables, see https://help.aliyun.com/document_detail/2361894.html. 
        imm_access_key_id = os.getenv("AccessKeyId")
        imm_access_key_secret = os.getenv("AccessKeySecret")
        client = Sample.create_client(imm_access_key_id, imm_access_key_secret)
        detect_image_faces_request = imm_20200930_models.DetectImageFacesRequest(
            project_name='test-project',
            source_uri='oss://test-bucket/test-object.jpg'
        )
        runtime = util_models.RuntimeOptions()
        try:
            # You can choose to print the response of the API operation. 
            await client.detect_image_faces_with_options_async(detect_image_faces_request, runtime)
        except Exception as error:
            # Print the error message if necessary. 
            UtilClient.assert_as_string(error.message)


if __name__ == '__main__':
    Sample.main(sys.argv[1:])
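
To inspect the detection result in code instead of only sending the request, you can capture the return value of the call in main(). The following lines are a minimal sketch under the assumption that the generated response model, like other Tea-based models in Alibaba Cloud SDKs for Python, exposes the parsed body through to_map(); they reuse the client, detect_image_faces_request, and runtime variables defined in main().

# Minimal sketch: capture and print the parsed response body.
# Assumes the Tea-generated response model exposes to_map(), as other
# Alibaba Cloud SDK for Python models do.
import json

response = client.detect_image_faces_with_options(detect_image_faces_request, runtime)
print(json.dumps(response.body.to_map(), indent=2, ensure_ascii=False))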