Powered by computer vision, face detection identifies and locates human faces in images or videos. Face detection can be used in various scenarios, such as authentication, public surveillance, intelligent album management, and customer behavior analysis. This topic describes how to use the face detection feature to identify faces and their associated characteristics in an image.
Overview
The face detection feature uses image AI technology to detect one or more faces in an image and return information about each face. If an image contains multiple faces, all faces and their information are detected. Face information includes the face ID, age, gender, emotion, attractiveness score, face quality, and face attributes. Face attributes include the face position, head orientation, glasses, beard, and mask.
Scenarios
User authentication: Face detection can work with face similarity comparison to implement user authentication, for example, face-based unlocking on mobile phones.
Facial expression analysis: Facial expressions are analyzed using face detection and facial expression recognition technologies. Facial expression analysis applies to scenarios such as sentiment analysis, augmented reality (AR), and virtual characters.
Limitations
Background clutter: Complex backgrounds may be confused with facial characteristics, which affects detection results.
Face occlusion: If an image contains multiple faces, the faces may occlude one another, which may degrade the performance of face detection algorithms.
Prerequisites
An AccessKey pair is created and obtained. For more information, see Create an AccessKey pair.
OSS is activated, a bucket is created, and objects are uploaded to the bucket. For more information, see Upload objects.
IMM is activated. For more information, see Activate IMM.
A project is created in the IMM console. For more information about how to create a project by using the IMM console, see Create a project.
Note: You can also call the CreateProject operation to create a project. For more information, see CreateProject.
You can call the ListProjects operation to query existing projects in a specific region. For more information, see ListProjects. For a minimal example of calling CreateProject with the IMM SDK for Python, see the sketch after these prerequisites.
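If you prefer to create the project in code rather than in the console, the following minimal sketch calls CreateProject with the IMM SDK for Python. It reuses the endpoint, environment variable names, and example project name from the sample code later in this topic; the method and model names follow the naming conventions of the generated SDK and may need to be adjusted for your SDK version.
# -*- coding: utf-8 -*-
import os
from alibabacloud_imm20200930.client import Client as imm20200930Client
from alibabacloud_imm20200930 import models as imm_20200930_models
from alibabacloud_tea_openapi import models as open_api_models

# Initialize the client with the AccessKey pair read from environment variables.
config = open_api_models.Config(
    access_key_id=os.getenv('AccessKeyId'),
    access_key_secret=os.getenv('AccessKeySecret')
)
# Specify the IMM endpoint of your region. cn-beijing is used as an example.
config.endpoint = 'imm.cn-beijing.aliyuncs.com'
client = imm20200930Client(config)

# Create a project named test-project.
create_project_request = imm_20200930_models.CreateProjectRequest(
    project_name='test-project'
)
response = client.create_project(create_project_request)
# Print the project information returned by the operation.
print(response.body)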
Usage
Call the DetectImageFaces operation to detect faces in an image and obtain face information, such as the age and gender of each face.
Example information
IMM project: test-project
Image URI: oss://test-bucket/test-object.jpg
Sample request
{
"ProjectName": "test-project",
"SourceURI": "oss://test-bucket/test-object.jpg",
}
Sample response
{
"RequestId": "47449201-245D-58A7-B56B-BDA483874B20",
"Faces": [
{
"Beard": "none",
"MaskConfidence": 0.724,
"Gender": "male",
"Boundary": {
"Left": 138,
"Top": 102,
"Height": 19,
"Width": 17
},
"BeardConfidence": 0.801,
"FigureId": "b6525b63-cb12-4fab-a9f4-9c7de08b80c3",
"Mouth": "close",
"Emotion": "none",
"Age": 36,
"MouthConfidence": 0.984,
"FigureType": "face",
"GenderConfidence": 0.999,
"HeadPose": {
"Pitch": -9.386,
"Roll": -3.478,
"Yaw": 14.624
},
"Mask": "none",
"EmotionConfidence": 0.998,
"HatConfidence": 0.794,
"GlassesConfidence": 0.999,
"Sharpness": 0.025,
"FigureClusterId": "figure-cluster-id-unavailable",
"FaceQuality": 0.3,
"Attractive": 0.002,
"AgeSD": 8,
"Glasses": "none",
"FigureConfidence": 0.998,
"Hat": "none"
},
{
"Beard": "none",
"MaskConfidence": 0.649,
"Gender": "male",
"Boundary": {
"Left": 85,
"Top": 108,
"Height": 18,
"Width": 14
},
"BeardConfidence": 0.975,
"FigureId": "798ab164-ae05-4a9f-b8c9-4b69ca183c3f",
"Mouth": "close",
"Emotion": "none",
"Age": 34,
"MouthConfidence": 0.97,
"FigureType": "face",
"GenderConfidence": 0.917,
"HeadPose": {
"Pitch": -0.946,
"Roll": -1.785,
"Yaw": -39.264
},
"Mask": "mask",
"EmotionConfidence": 0.966,
"HatConfidence": 0.983,
"GlassesConfidence": 1,
"Sharpness": 0.095,
"FigureClusterId": "figure-cluster-id-unavailable",
"FaceQuality": 0.3,
"Attractive": 0.022,
"AgeSD": 9,
"Glasses": "none",
"FigureConfidence": 0.998,
"Hat": "none"
},
{
"Beard": "none",
"MaskConfidence": 0.534,
"Gender": "female",
"Boundary": {
"Left": 245,
"Top": 128,
"Height": 16,
"Width": 13
},
"BeardConfidence": 0.998,
"FigureId": "b9fb1552-cc98-454a-ac7c-18e5c55cc5bf",
"Mouth": "close",
"Emotion": "none",
"Age": 6,
"MouthConfidence": 0.999,
"FigureType": "face",
"GenderConfidence": 0.972,
"HeadPose": {
"Pitch": 21.686,
"Roll": 16.806,
"Yaw": 50.348
},
"Mask": "mask",
"EmotionConfidence": 0.991,
"HatConfidence": 0.999,
"GlassesConfidence": 1,
"Sharpness": 0.389,
"FigureClusterId": "figure-cluster-id-unavailable",
"FaceQuality": 0.3,
"Attractive": 0.046,
"AgeSD": 6,
"Glasses": "none",
"FigureConfidence": 0.991,
"Hat": "none"
},
{
"Beard": "none",
"MaskConfidence": 0.654,
"Gender": "male",
"Boundary": {
"Left": 210,
"Top": 130,
"Height": 18,
"Width": 15
},
"BeardConfidence": 0.738,
"FigureId": "a00154ad-6e5a-48a8-b79e-4cd3699e3281",
"Mouth": "close",
"Emotion": "none",
"Age": 24,
"MouthConfidence": 0.999,
"FigureType": "face",
"GenderConfidence": 0.999,
"HeadPose": {
"Pitch": -3.356,
"Roll": 1.734,
"Yaw": 12.431
},
"Mask": "none",
"EmotionConfidence": 0.993,
"HatConfidence": 1,
"GlassesConfidence": 0.984,
"Sharpness": 0.449,
"FigureClusterId": "figure-cluster-id-unavailable",
"FaceQuality": 0.3,
"Attractive": 0.005,
"AgeSD": 15,
"Glasses": "none",
"FigureConfidence": 0.985,
"Hat": "none"
}
]
}
The sample response shows that the image contains four faces and provides information about each face, such as the gender, age, and emotion.
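The field names in the response map directly to the face information described in the Overview section. For reference, the following minimal sketch parses a response like the one above, for example one that you saved to a string, and prints the gender, age, emotion, and bounding box of each face. The variable response_json is a placeholder for the JSON document shown above.
import json

# response_json is assumed to hold the JSON response shown above as a string.
result = json.loads(response_json)
for face in result['Faces']:
    box = face['Boundary']
    # Print the demographic estimates and emotion label of the face.
    print(f"{face['Gender']}, age {face['Age']}, emotion: {face['Emotion']}")
    # Print the position and size of the face in the image, in pixels.
    print(f"  bounding box: left={box['Left']}, top={box['Top']}, "
          f"width={box['Width']}, height={box['Height']}")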
Sample code
The following sample code demonstrates how to use the IMM SDK for Python to detect faces in an image. Notes on running the sample and printing the results follow the code.
# -*- coding: utf-8 -*-
# This file is auto-generated, don't edit it. Thanks.
import sys
import os

from typing import List

from alibabacloud_imm20200930.client import Client as imm20200930Client
from alibabacloud_tea_openapi import models as open_api_models
from alibabacloud_imm20200930 import models as imm_20200930_models
from alibabacloud_tea_util import models as util_models
from alibabacloud_tea_util.client import Client as UtilClient


class Sample:
    def __init__(self):
        pass

    @staticmethod
    def create_client(
        access_key_id: str,
        access_key_secret: str,
    ) -> imm20200930Client:
        """
        Use your AccessKey ID and AccessKey secret to initialize the client.
        @param access_key_id:
        @param access_key_secret:
        @return: Client
        @throws Exception
        """
        config = open_api_models.Config(
            access_key_id=access_key_id,
            access_key_secret=access_key_secret
        )
        # Specify the IMM endpoint.
        config.endpoint = f'imm.cn-beijing.aliyuncs.com'
        return imm20200930Client(config)

    @staticmethod
    def main(
        args: List[str],
    ) -> None:
        # The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M.
        # We recommend that you do not include your AccessKey pair (AccessKey ID and AccessKey secret) in your project code. Otherwise, the AccessKey pair may be leaked and the security of all resources within your account may be compromised.
        # In this example, the AccessKey pair is read from the environment variables to implement identity verification for API access. For information about how to configure environment variables, visit https://help.aliyun.com/document_detail/2361894.html.
        imm_access_key_id = os.getenv("AccessKeyId")
        imm_access_key_secret = os.getenv("AccessKeySecret")
        client = Sample.create_client(imm_access_key_id, imm_access_key_secret)
        detect_image_faces_request = imm_20200930_models.DetectImageFacesRequest(
            project_name='test-project',
            source_uri='oss://test-bucket/test-object.jpg'
        )
        runtime = util_models.RuntimeOptions()
        try:
            # You can choose to print the response of the API operation.
            client.detect_image_faces_with_options(detect_image_faces_request, runtime)
        except Exception as error:
            # Print the error message if necessary.
            UtilClient.assert_as_string(error.message)

    @staticmethod
    async def main_async(
        args: List[str],
    ) -> None:
        # The AccessKey pair of an Alibaba Cloud account has permissions on all API operations. Using these credentials to perform operations is a high-risk operation. We recommend that you use a RAM user to call API operations or perform routine O&M.
        # We recommend that you do not include your AccessKey pair (AccessKey ID and AccessKey secret) in your project code. Otherwise, the AccessKey pair may be leaked and the security of all resources within your account may be compromised.
        # In this example, the AccessKey pair is read from the environment variables to implement identity verification for API access. For information about how to configure environment variables, visit https://help.aliyun.com/document_detail/2361894.html.
        imm_access_key_id = os.getenv("AccessKeyId")
        imm_access_key_secret = os.getenv("AccessKeySecret")
        client = Sample.create_client(imm_access_key_id, imm_access_key_secret)
        detect_image_faces_request = imm_20200930_models.DetectImageFacesRequest(
            project_name='test-project',
            source_uri='oss://test-bucket/test-object.jpg'
        )
        runtime = util_models.RuntimeOptions()
        try:
            # You can choose to print the response of the API operation.
            await client.detect_image_faces_with_options_async(detect_image_faces_request, runtime)
        except Exception as error:
            # Print the error message if necessary.
            UtilClient.assert_as_string(error.message)


if __name__ == '__main__':
    Sample.main(sys.argv[1:])
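To run the sample, install the SDK dependencies (typically alibabacloud_imm20200930, alibabacloud_tea_openapi, and alibabacloud_tea_util from PyPI) and set the AccessKeyId and AccessKeySecret environment variables that the sample reads. The generated sample does not print the detection result. The following lines are an optional sketch, not part of the generated code, that you can use in place of the call inside the try block to print a summary of the detected faces; the snake_case attribute names mirror the fields in the sample response and may differ slightly across SDK versions.
# Optional: capture the response and print a summary of each detected face.
response = client.detect_image_faces_with_options(detect_image_faces_request, runtime)
for face in response.body.faces:
    print(f'{face.gender}, age {face.age}, emotion: {face.emotion}')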
Billing
When you use face detection, fees are incurred for the following billable items on both the OSS side and the IMM side:
OSS side: For detailed pricing, see OSS Pricing.
API: GetObject
GET requests: You are charged request fees based on the number of successful requests.
Data retrieval of IA objects: If Infrequent Access (IA) objects are retrieved, you are charged IA data retrieval fees based on the size of the retrieved IA objects.
Data retrieval of Archive objects (direct read): If Archive objects are retrieved from a bucket for which real-time access of Archive objects is enabled, you are charged Archive data retrieval fees based on the size of the retrieved Archive objects.
Transfer acceleration: If transfer acceleration is enabled and an acceleration endpoint is used to access the bucket, you are charged transfer acceleration fees based on the size of the transferred data.
API: HeadObject
GET requests: You are charged request fees based on the number of successful requests.
IMM side: For detailed pricing, see IMM billable items.
Important: Starting from 11:00 (UTC+8) on July 28, 2025, the IMM face detection service will be integrated into the image detection service, and the billable item will be changed from basic face image to image detection. For more information, see the IMM billing adjustment announcement.
API: DetectImageFaces
ImageDetect: You are charged face detection fees based on the number of successful requests.