AnimateAnyone generates character videos from a still image and a motion video. Given a character photo and a video of the desired movements, it produces a video of that character performing those movements.
AnimateAnyone is available only in the China (Beijing) region. You must use an API key created in the China (Beijing) region.
## How AnimateAnyone works
AnimateAnyone uses a three-model pipeline. Call each model in sequence:
| Step | Model | What it does |
|---|---|---|
| 1. Detect | AnimateAnyone-detect | Validates that your character image meets the input specifications. |
| 2. Template | AnimateAnyone-template | Extracts movements from a motion video and generates a reusable action template. |
| 3. Generate | AnimateAnyone | Combines the validated character image with the action template to produce the output video. |
The output video can use either the character image's background or the motion video's background; no audio is included.
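The handoff between the three calls can be sketched as follows. The three callables are hypothetical stand-ins for the per-model API calls; only the orchestration order reflects the documented pipeline.

```python
# Sketch of the three-step AnimateAnyone pipeline. The callables passed in
# are hypothetical stand-ins for the detect, template, and generate calls.
def run_pipeline(image_url, video_url, detect, make_template, generate):
    # Step 1: validate the character image with AnimateAnyone-detect.
    if not detect(image_url):
        raise ValueError("character image failed detection")
    # Step 2: extract an action template from the motion video.
    template_id = make_template(video_url)
    # Step 3: combine the validated image with the action template.
    return generate(image_url, template_id)
```

Note that the template ID produced in Step 2 is the only artifact carried from the motion video into Step 3, which is why templates are reusable across characters.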
## Performance showcase
Example media (omitted here): the character image and motion video inputs, plus the outputs using the image background and the video background.
These examples were generated by the Tongyi App, which integrates AnimateAnyone.
## Prerequisites
Activate Model Studio and obtain an API key before you begin. See Get an API key.
Use an API key from the China (Beijing) region.
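Model Studio tooling conventionally reads the key from the `DASHSCOPE_API_KEY` environment variable, so exporting it once (rather than hard-coding it in scripts) is a reasonable setup, assuming your tooling follows that convention:

```shell
# Export the China (Beijing) region API key for later calls.
# "sk-your-api-key" is a placeholder; substitute your own key.
export DASHSCOPE_API_KEY="sk-your-api-key"
```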
## Calling workflow
Call the three models in the following order, specifying the model name shown for each:
### Step 1: Validate the character image
Call AnimateAnyone-detect (animate-anyone-detect-gen2) to verify that your character image meets the required specifications.
For the request and response parameters, see AnimateAnyone image detection.
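A minimal sketch of the detect call, assuming the standard Model Studio HTTP request envelope (`{"model": …, "input": …}`). The service path and the `image_url` input field name are assumptions; verify both against the AnimateAnyone image detection reference.

```python
import json
import os
import urllib.request

# China (Beijing) API host. The service path passed to detect_image is an
# assumption -- confirm it in the AnimateAnyone image detection reference.
BASE_URL = "https://dashscope.aliyuncs.com/api/v1"

def build_detect_request(image_url: str) -> dict:
    """Build the request body for AnimateAnyone-detect (gen2)."""
    return {
        "model": "animate-anyone-detect-gen2",
        "input": {"image_url": image_url},  # field name is an assumption
    }

def detect_image(image_url: str, service_path: str) -> dict:
    """POST the detect request synchronously and return the parsed response."""
    body = json.dumps(build_detect_request(image_url)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/{service_path}",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Because detection is synchronous, the response arrives in the same HTTP round trip; no task polling is needed for this step.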
### Step 2: Generate an action template
Call AnimateAnyone-template (animate-anyone-template-gen2) with a motion video. The model extracts the movements and produces an action template.
For the request and response parameters, see AnimateAnyone action template generation.
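Template extraction is a longer-running job, so Model Studio's asynchronous-task pattern (the `X-DashScope-Async: enable` header plus polling `/api/v1/tasks/{task_id}`) is a reasonable sketch here. The service path and the `video_url` field name are assumptions to check against the action template generation reference.

```python
import json
import os
import time
import urllib.request

BASE_URL = "https://dashscope.aliyuncs.com/api/v1"  # China (Beijing) host

def build_template_request(video_url: str) -> dict:
    """Request body for AnimateAnyone-template (gen2); field name assumed."""
    return {
        "model": "animate-anyone-template-gen2",
        "input": {"video_url": video_url},
    }

def submit_template_job(video_url: str, service_path: str) -> str:
    """Submit the motion video as an asynchronous job; return the task ID."""
    body = json.dumps(build_template_request(video_url)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/{service_path}",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}",
            "Content-Type": "application/json",
            "X-DashScope-Async": "enable",  # run as an asynchronous task
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["output"]["task_id"]

def wait_for_task(task_id: str, interval: float = 5.0) -> dict:
    """Poll the generic task endpoint until the job finishes or fails."""
    req = urllib.request.Request(
        f"{BASE_URL}/tasks/{task_id}",
        headers={"Authorization": f"Bearer {os.environ['DASHSCOPE_API_KEY']}"},
    )
    while True:
        with urllib.request.urlopen(req) as resp:
            result = json.load(resp)
        if result["output"]["task_status"] in ("SUCCEEDED", "FAILED"):
            return result
        time.sleep(interval)
```

Because only one template job runs at a time, submitted jobs may sit in the queue; the polling loop above simply waits until the job leaves it.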
### Step 3: Generate the video
Call AnimateAnyone (animate-anyone-gen2) with the character image that passed detection and the action template ID from Step 2. The model generates the character video.
For the request and response parameters, see AnimateAnyone video generation.
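A sketch of the final request body, assuming the call follows the same envelope as the earlier steps; the `image_url` and `template_id` field names are assumptions to confirm against the video generation reference.

```python
def build_generate_request(image_url: str, template_id: str) -> dict:
    """Request body for AnimateAnyone (gen2): validated image + template ID.
    The input field names are assumptions; check the API reference."""
    return {
        "model": "animate-anyone-gen2",
        "input": {
            "image_url": image_url,      # image that passed Step 1 detection
            "template_id": template_id,  # action template ID from Step 2
        },
    }
```

Submitting this body with the same asynchronous header and task-polling loop as in Step 2 would yield the finished character video.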
## Billing and rate limits
AnimateAnyone models use pay-as-you-go billing. The table below lists the pricing and QPS (queries per second) limits for each model.
| Pattern | Model | Unit price | QPS limit | Concurrency |
|---|---|---|---|---|
| Model invocation | animate-anyone-detect-gen2 | Pay-as-you-go: $0.000574/image | 5 | No limit (synchronous API) |
| Model invocation | animate-anyone-template-gen2 | Pay-as-you-go: $0.011469/second | | 1 (at any given time, only one job runs; other queued jobs wait) |
| Model invocation | animate-anyone-gen2 |