The facial recognition feature uses AI to identify the bounding boxes and facial attributes of faces in images. If an image contains multiple faces, the bounding box and facial attributes of each face are recognized. This metadata can be used for purposes such as age and gender statistics.

Note: To use the facial recognition feature, you must activate Intelligent Media Management (IMM) and bind it to OSS. For more information, see Quick start.

The bounding box of a face contains four values: the y coordinate of the upper-left corner (Top), the x coordinate of the upper-left corner (Left), the width (Width), and the height (Height).

Facial attributes are made up of six values: gender, age, head posture, eye status, facial blurring, and facial quality.
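To illustrate how the four bounding-box values map to the FaceBoundary fields in the response, the following sketch parses a hypothetical response fragment (shaped like the sample response below) and computes the corners of each box:

```python
import json

# Hypothetical response fragment; the field names follow the sample response.
raw = """
{
    "Faces": [
        {
            "Age": 29,
            "Gender": "MALE",
            "FaceAttributes": {
                "FaceBoundary": {"Top": 628, "Left": 607, "Width": 894, "Height": 928}
            }
        }
    ]
}
"""
response = json.loads(raw)

for face in response["Faces"]:
    box = face["FaceAttributes"]["FaceBoundary"]
    # The box spans from (Left, Top) to (Left + Width, Top + Height).
    right = box["Left"] + box["Width"]
    bottom = box["Top"] + box["Height"]
    print(f"Face from ({box['Left']}, {box['Top']}) to ({right}, {bottom})")
```

The origin is the upper-left corner of the image, so the lower-right corner of the box is obtained by adding the width and height to the Left and Top coordinates.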

Parameters

Operation: imm/detectface

Sample response:
{
    "Faces":[
        {
            "Age":29,
            "Attractive":0.95,
            "Emotion":"HAPPY",
            "EmotionConfidence":0.9875330924987793,
            "EmotionDetails":{
                "ANGRY":0.000016857109585544094,
                "CALM":0.012278525158762932,
                "DISGUSTED":0.000012325451280048583,
                "HAPPY":0.9875330924987793,
                "SAD":0.0000388074986403808,
                "SCARED":0.000006888585176056949,
                "SURPRISED":0.000054363932576961815
            },
            "FaceAttributes":{
                "Beard":"NONE",
                "BeardConfidence":1,
                "FaceBoundary":{
                    "Height":928,
                    "Left":607,
                    "Top":628,
                    "Width":894
                },
                "Glasses":"NONE",
                "GlassesConfidence":1,
                "Mask":"NONE",
                "MaskConfidence":0.9999999403953552,
                "Race":"YELLOW",
                "RaceConfidence":0.598323404788971
            },
            "FaceConfidence":0.9704222083091736,
            "FaceId":"4199e1985b6d3bb075f0994c82e6d2fd82a274c11ce183e1fdb222dd3aa8c7ce",
            "Gender":"MALE",
            "GenderConfidence":1,
        }
    ],
    "ImageUri":"oss://image-demo/person.jpg",
    "RequestId":"5C3D854A3243A93A275E9C99",
    "httpStatusCode":200,
    "success":true
}
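As noted above, the returned metadata can be used for age and gender statistics. The following sketch aggregates hypothetical face records shaped like the entries in the Faces array of the sample response:

```python
from collections import Counter
from statistics import mean

# Hypothetical face records collected from detectface responses for several images.
faces = [
    {"Age": 29, "Gender": "MALE"},
    {"Age": 34, "Gender": "FEMALE"},
    {"Age": 27, "Gender": "MALE"},
]

# Count faces per gender and compute the average age.
gender_counts = Counter(face["Gender"] for face in faces)
average_age = mean(face["Age"] for face in faces)

print(dict(gender_counts), average_age)
```

In practice you would also want to filter on GenderConfidence and FaceConfidence before aggregating, so that low-confidence detections do not skew the statistics.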

Examples

Assume that the requested bucket is named imm-demo and is located in the China (Hangzhou) region, the endpoint used to access the bucket is oss-cn-hangzhou.aliyuncs.com, and the requested image is named person.jpg. The unsigned URL of the requested image is as follows:
http://image-demo.oss-cn-hangzhou.aliyuncs.com/person.jpg?x-oss-process=imm/detectface
The following code provides an example of how to perform the face recognition operation by using the OSS SDK for Python:
import json
import oss2

# Specify the bucket. All object-related methods are called on the bucket object.
bucket = oss2.Bucket(oss2.Auth(access_key_id, access_key_secret), endpoint, bucket_name)

# Detect faces by passing the IMM style as the process parameter.
style = 'imm/detectface'
result = bucket.get_object(object_key, process=style)

# Read and pretty-print the JSON result of the operation.
buf = result.read()
print(json.dumps(json.loads(buf), indent=4, sort_keys=True))