Alibaba Cloud Model Studio: Image to dancing video - AnimateAnyone

Last Updated: Oct 21, 2025

AnimateAnyone generates character action videos from character images and action templates. It consists of three independent models: "AnimateAnyone-detect", which checks whether a character image meets the input specifications; "AnimateAnyone-template", which generates action templates from motion videos; and "AnimateAnyone", which generates the character videos.

Important

This document applies only to the China (Beijing) region. To use these models, you must use an API key from the China (Beijing) region.

Model overview

Model introduction

  • AnimateAnyone-detect is an image detection model that checks whether an input image meets the character image specifications for the AnimateAnyone model.

  • AnimateAnyone-template is an action template generation model that extracts character actions from a motion video to generate an action template compatible with the AnimateAnyone model.

  • AnimateAnyone is a character video generation model that generates character action videos from character images and action templates.

Performance showcase

[Showcase: an input character image and an input action video, with generated outputs shown against an image background and against a video background.]

Note
  • The preceding examples were generated by the Tongyi App, which integrates AnimateAnyone.

  • The videos generated by the AnimateAnyone model do not include audio.

Billing and rate limiting

Mode: Model call (pay-as-you-go)

  • animate-anyone-detect-gen2
    Unit price: $0.000574 per image
    QPS limit for task submission API: 5
    Number of concurrent tasks: no limit for sync APIs

  • animate-anyone-template-gen2 and animate-anyone-gen2
    Unit price: $0.011469 per second
    QPS limit for task submission API: 1
    Number of concurrent tasks: at any given time, only one job is running; other jobs in the queue are waiting
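For a rough sense of cost, the sketch below multiplies these unit prices by the lengths of the videos involved. It is only an illustration and assumes that the per-second price is billed against the duration of the motion video for template generation and against the duration of the generated video for video generation; confirm the exact billing rules in the pricing documentation.

```python
# Rough cost estimate for one detect -> template -> generate run.
# Assumption (not from this document): the per-second price is billed against
# the duration of the processed motion video and of the generated video.
DETECT_PRICE_PER_IMAGE = 0.000574    # USD, animate-anyone-detect-gen2
PRICE_PER_SECOND = 0.011469          # USD, animate-anyone-template-gen2 / animate-anyone-gen2

def estimate_cost(template_seconds: float, output_seconds: float, images: int = 1) -> float:
    """Return the estimated USD cost of one end-to-end run."""
    return (images * DETECT_PRICE_PER_IMAGE
            + template_seconds * PRICE_PER_SECOND
            + output_seconds * PRICE_PER_SECOND)

print(f"${estimate_cost(template_seconds=15, output_seconds=15):.4f}")  # about $0.3446
```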

Prerequisites

You must activate the service and obtain an API key. For more information, see Get an API key.
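If you call the models programmatically, a common setup is to keep the key in an environment variable rather than in code. The snippet below is a minimal sketch; it assumes the China (Beijing) region key is stored in DASHSCOPE_API_KEY, the variable name used by the DashScope SDK.

```python
import os

# A minimal sketch: assumes the China (Beijing) region API key is stored in the
# DASHSCOPE_API_KEY environment variable (the name used by the DashScope SDK).
api_key = os.environ["DASHSCOPE_API_KEY"]
```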

Model calling

  • The AnimateAnyone series of models supports pay-as-you-go calls.

  • When you call a model, you must specify its name. Call the models in the following order (a minimal end-to-end sketch follows this list):

    a. Call the "AnimateAnyone-detect" model to confirm that the input character image meets the required specifications. For more information, see AnimateAnyone image detection.

    b. Call the "AnimateAnyone-template" model and provide a motion video to generate an action template. For more information, see AnimateAnyone action template generation.

    c. Call the "AnimateAnyone" model. Provide the character image that passed detection and the action template ID to generate a video. For more information, see AnimateAnyone video generation.
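
The following Python sketch strings the three calls together. It is illustrative only: the endpoint paths, request fields, and response fields (for example template_id) are placeholder assumptions, not taken from this document; use the exact values from the linked API references (AnimateAnyone image detection, action template generation, and video generation). The sketch also assumes that detection is a synchronous call and that template and video generation are asynchronous tasks polled through the standard DashScope task-query endpoint.

```python
"""Minimal sketch of the detect -> template -> generate workflow.

Endpoint paths and request/response field names below are illustrative
placeholders; take the authoritative values from the linked API references.
"""
import os
import time

import requests

API_KEY = os.environ["DASHSCOPE_API_KEY"]            # key from the China (Beijing) region
BASE_URL = "https://dashscope.aliyuncs.com/api/v1"   # Beijing-region endpoint (assumption)
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}


def call(path: str, payload: dict, async_task: bool = False) -> dict:
    """POST one model call; asynchronous tasks need the X-DashScope-Async header."""
    headers = dict(HEADERS)
    if async_task:
        headers["X-DashScope-Async"] = "enable"
    resp = requests.post(f"{BASE_URL}{path}", json=payload, headers=headers, timeout=60)
    resp.raise_for_status()
    return resp.json()


def wait_for_task(task_id: str, interval: float = 10.0) -> dict:
    """Poll an asynchronous task until it succeeds or fails."""
    while True:
        resp = requests.get(f"{BASE_URL}/tasks/{task_id}", headers=HEADERS, timeout=60)
        resp.raise_for_status()
        result = resp.json()
        status = result["output"]["task_status"]
        if status == "SUCCEEDED":
            return result
        if status in ("FAILED", "CANCELED"):
            raise RuntimeError(f"Task {task_id} ended with status {status}: {result}")
        time.sleep(interval)


# Step a: check that the character image meets the specifications (sync call).
detect = call("/services/aigc/image2video/image-detect",               # hypothetical path
              {"model": "animate-anyone-detect-gen2",
               "input": {"image_url": "https://example.com/character.png"}})
print("detect result:", detect)

# Step b: generate an action template from a motion video (async task).
template_task = call("/services/aigc/image2video/video-synthesis",     # hypothetical path
                     {"model": "animate-anyone-template-gen2",
                      "input": {"video_url": "https://example.com/dance.mp4"}},
                     async_task=True)
template = wait_for_task(template_task["output"]["task_id"])
template_id = template["output"]["template_id"]                        # hypothetical field

# Step c: generate the character video from the checked image and the template ID.
video_task = call("/services/aigc/image2video/video-synthesis",        # hypothetical path
                  {"model": "animate-anyone-gen2",
                   "input": {"image_url": "https://example.com/character.png",
                             "template_id": template_id}},
                  async_task=True)
video = wait_for_task(video_task["output"]["task_id"])
print("video result:", video["output"])
```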