Generate high-definition long videos from text or images using EasyAnimate, a DiT-based framework with model fine-tuning support for custom styles.
Solution overview
| Solution | Use cases | Billing |
| --- | --- | --- |
| Solution 1: Generate videos using DSW | Cloud-based IDE with built-in tutorials and code. Best for deep model understanding or custom development. | Creates a DSW instance on public resources, billed in pay-as-you-go mode. For details, see DSW billing. |
| Solution 2: Generate videos using Model Gallery | No environment setup required. Deploy or fine-tune models with one click and invoke through WebUI or API. Best for rapid validation or application integration. | Creates an EAS service (for deployment) and a DLC job (for fine-tuning) on public resources, both billed in pay-as-you-go mode. For details, see DLC billing and EAS billing. |
Solution 1: Generate videos using DSW
Step 1: Create a DSW instance
- Log on to the PAI console and select a region. In the left-side navigation pane, click Workspaces, then select and enter a workspace.
- In the left-side navigation pane, click Model Training > Interactive Modeling (DSW).
- Click Create instance and configure the following parameters. Keep default values for other parameters.
| Parameter | Description |
| --- | --- |
| Instance Name | Example: AIGC_test_01. |
| Resource Type | Select Public Resources. |
| Instance Type | Select a GPU specification such as ecs.gn7i-c8g1.2xlarge (A10 or GU100 GPUs recommended). |
| Image | Select Alibaba Cloud Image and search for easyanimate:1.1.5-pytorch2.2.0-gpu-py310-cu118-ubuntu22.04. |

- Click OK. Wait until the instance status changes to Running.
Step 2: Download the EasyAnimate tutorial and model
- In the Actions column of the DSW instance, click Open.
- On the Notebook tab, go to Launcher and open DSW Gallery.
- Search for AI video generation example based on EasyAnimate (V5) and click Open in DSW to download resources to the DSW instance. Multiple versions are available; this topic uses V5.
- Download and install EasyAnimate: in the tutorial file, click the run button to run Function Definitions, Download Code, and Download Model in sequence.
Step 3: Launch WebUI and generate a video
- Click the run button to run Launch UI.
- Click the generated link to open the WebUI.
- In the WebUI, select a pre-trained model path from the drop-down list and configure parameters as needed.
- Click Generate. After about 5 minutes, view or download the video on the right.
Solution 2: Generate videos using Model Gallery
Step 1: Deploy the pre-trained model
- Log on to the PAI console and select a region. In the left-side navigation pane, click Workspaces, then select and enter a workspace.
- In the left-side navigation pane, click Quick Start > Model Gallery. Search for EasyAnimate high-definition long video generation model, click Deploy, keep default configurations, and confirm. When the service status changes to Running, deployment is complete.
Step 2: Generate videos using WebUI or API
After deployment, generate videos using WebUI or API.
To view deployment details later, click Model Gallery > Job Management > Deployment Jobs, then click the Service name.
WebUI
- On the Service details page, click View Web App.
- In the WebUI, select a pre-trained model path and configure parameters.
- Click Generate. After about 5 minutes, view or download the video on the right.
API
- On the Service details page, in the Resource Details section, click View Call Information to obtain the endpoint and token.
- Call the service to generate a video. The following Python example demonstrates the request format.
```python
import base64
from typing import Any, Dict

import requests


class EasyAnimateClient:
    """EasyAnimate EAS service API client."""

    def __init__(self, service_url: str, token: str):
        if not service_url or not token:
            raise ValueError("Service URL and token cannot be empty")
        self.base_url = service_url.rstrip('/')
        self.headers = {
            'Content-Type': 'application/json',
            'Authorization': token
        }

    def update_model(self, model_path: str, edition: str = "v3",
                     timeout: int = 300) -> Dict[str, Any]:
        """Load a model by specifying its version and path.

        Args:
            model_path: Model path in the service, e.g.,
                "/mnt/models/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-512x512".
            edition: Model version. Default: "v3".
            timeout: Request timeout in seconds. Use a longer timeout because
                model loading is slow.
        """
        # 1. Set the model edition.
        requests.post(
            f"{self.base_url}/easyanimate/update_edition",
            headers=self.headers,
            json={"edition": edition},
            timeout=timeout
        ).raise_for_status()

        # 2. Load the model (may take several minutes).
        print(f"Loading model: {model_path}")
        response = requests.post(
            f"{self.base_url}/easyanimate/update_diffusion_transformer",
            headers=self.headers,
            json={"diffusion_transformer_path": model_path},
            timeout=15000
        )
        response.raise_for_status()
        return response.json()

    def generate_video(self, prompt_textbox: str, **kwargs) -> bytes:
        """Generate a video from a text prompt.

        Args:
            prompt_textbox: Positive prompt in English.
            **kwargs: Optional parameters. See the API parameters table below.

        Returns:
            Video binary data in MP4 format.
        """
        payload = {
            "prompt_textbox": prompt_textbox,
            "negative_prompt_textbox": kwargs.get(
                "negative_prompt",
                "The video is not of a high quality, it has a low resolution..."),
            "width_slider": kwargs.get("width_slider", 672),
            "height_slider": kwargs.get("height_slider", 384),
            "length_slider": kwargs.get("length_slider", 144),
            "sample_step_slider": kwargs.get("sample_step_slider", 30),
            "cfg_scale_slider": kwargs.get("cfg_scale_slider", 6.0),
            "seed_textbox": kwargs.get("seed_textbox", 43),
            "sampler_dropdown": kwargs.get("sampler_dropdown", "Euler"),
            "generation_method": "Video Generation",
            "is_image": False,
            "lora_alpha_slider": 0.55,
            "lora_model_path": "none",
            "base_model_path": "none",
            "motion_module_path": "none"
        }
        response = requests.post(
            f"{self.base_url}/easyanimate/infer_forward",
            headers=self.headers,
            json=payload,
            timeout=1500
        )
        response.raise_for_status()
        result = response.json()
        if "base64_encoding" not in result:
            raise ValueError(f"Unexpected response format: {result}")
        return base64.b64decode(result["base64_encoding"])


# --- Example usage ---
if __name__ == "__main__":
    try:
        # 1. Set service credentials. Replace with your actual service URL and token.
        EAS_URL = "<eas-service-url>"
        EAS_TOKEN = "<eas-service-token>"

        # 2. Create the client.
        client = EasyAnimateClient(service_url=EAS_URL, token=EAS_TOKEN)

        # 3. Load the model (required before the first video generation;
        # call again to switch models).
        client.update_model(
            model_path="/mnt/models/Diffusion_Transformer/EasyAnimateV3-XL-2-InP-512x512")

        # 4. Generate the video.
        video_bytes = client.generate_video(
            prompt_textbox="A beautiful cat playing in a sunny garden, high quality, detailed",
            width_slider=672,
            height_slider=384,
            length_slider=72,
            sample_step_slider=20
        )

        # 5. Save the video file.
        with open("api_generated_video.mp4", "wb") as f:
            f.write(video_bytes)
        print("Video saved: api_generated_video.mp4")

    except requests.RequestException as e:
        print(f"Request error: {e}")
    except (ValueError, KeyError) as e:
        print(f"Parameter error: {e}")
    except Exception as e:
        print(f"Unexpected error: {e}")
```

API parameters are described below.
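For scripted or batch use, the request body for `/easyanimate/infer_forward` can be assembled separately from the HTTP call. The following sketch reuses the field names and defaults from the Python client example; the helper name `build_infer_payload` is our own illustration, not part of the EasyAnimate service API.

```python
# Hedged sketch: assemble the /easyanimate/infer_forward request body on its own.
# Field names and defaults mirror the client example above; build_infer_payload
# is a hypothetical helper of ours, not part of the service API.
def build_infer_payload(prompt: str, **overrides) -> dict:
    payload = {
        "prompt_textbox": prompt,
        "negative_prompt_textbox": "The video is not of a high quality, it has a low resolution...",
        "width_slider": 672,
        "height_slider": 384,
        "length_slider": 144,
        "sample_step_slider": 30,
        "cfg_scale_slider": 6.0,
        "seed_textbox": 43,
        "sampler_dropdown": "Euler",
        "generation_method": "Video Generation",
        "is_image": False,
        "lora_alpha_slider": 0.55,
        "lora_model_path": "none",
        "base_model_path": "none",
        "motion_module_path": "none",
    }
    payload.update(overrides)  # caller-supplied values win over the defaults
    return payload


# A shorter, reproducible request: fewer frames and a fixed seed.
payload = build_infer_payload("A cat playing in a sunny garden",
                              length_slider=72, seed_textbox=7)
```

Passing the result as `json=payload` to `requests.post` reproduces the call that `generate_video` makes in the client example.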
Step 3: (Optional) Fine-tune the pre-trained model
Fine-tune the model on custom data to generate videos with specific styles or content.
- Log on to the PAI console. In the left-side navigation pane, click Workspaces, then select and enter a workspace.
- In the left-side navigation pane, click Quick Start > Model Gallery.
- Search for EasyAnimate high-definition long video generation model and click Fine-tune.
- Set Source to Public Resources. For Instance type, select an instance with A10 or higher GPUs. Configure hyperparameters and keep default values for other parameters.
To use a custom dataset, follow these steps:
- Click Train > Confirm. With default settings, training takes about 40 minutes. When the job status changes to Successful, the model is trained. To view training job details later, click Model Gallery > Job Management > Training Jobs, then click the job name.
- Click Deploy in the upper-right corner to deploy the fine-tuned model. When the status changes to Running, deployment is complete.
- On the Service details page, click View Web Application to open the WebUI. To view service details later, click Model Gallery > Job Management > Deployment Jobs, then click the Service name.
- In the WebUI, select the trained LoRA model to generate videos. For API usage, see Step 2.

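To invoke the fine-tuned model through the API rather than the WebUI, the `infer_forward` request body can point at the LoRA weights through the `lora_model_path` and `lora_alpha_slider` fields shown in Step 2. A minimal sketch, assuming a hypothetical LoRA path inside the service container (the actual path depends on where your training job saved the weights); merge these fields into the full request body from Step 2:

```python
# Hedged sketch: select a fine-tuned LoRA in the infer_forward request body.
# The path below is hypothetical; use the location where your training job
# stored the LoRA weights inside the EAS service container.
lora_overrides = {
    "lora_model_path": "/mnt/models/lora/my_finetuned_style.safetensors",  # hypothetical
    "lora_alpha_slider": 0.55,  # LoRA strength; lower values weaken the custom style
}

payload = {
    "prompt_textbox": "A video in the style of the fine-tuning dataset",
    "generation_method": "Video Generation",
    "is_image": False,
}
payload.update(lora_overrides)
```

Setting `lora_model_path` back to `"none"` restores the base model's behavior.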
Production environment recommendations
- Stop or delete unused resources: This tutorial creates DSW instances and EAS services on public resources. Stop or delete them when no longer needed to avoid continued charges.
- Use EAS for production deployment: Deploy models to EAS using Solution 2 (one-click) or Solution 1 (custom image). For details, see Deploy a model as an online service. EAS provides production features:
  - Stress testing: Test the concurrency level supported by the service endpoint.
  - Auto scaling: Scale instances based on traffic.
  - Log monitoring and alerting: Monitor service status in real time.
References
EAS also supports one-click deployment of AI video generation services based on ComfyUI and Stable Video Diffusion. For details, see AI video generation - ComfyUI deployment.
