Wan is an open-source video generation model that supports T2V (text-to-video) and I2V (image-to-video) generations. PAI provides customized JSON workflows and API calling methods to help you use the Wan model in ComfyUI to generate high-quality videos. This topic uses I2V as an example to show how to deploy the ComfyUI service and use Wan to generate videos.
Deploy ComfyUI standard service (for single user)
Deploy the service
Use the custom deployment method to deploy the ComfyUI standard service. Perform the following steps:
Log on to the PAI console. Select a region on the top of the page. Then, select the desired workspace and click Elastic Algorithm Service (EAS).
Click Deploy Service. In the Custom Model Deployment section, click Custom Deployment.
On the Custom Deployment page, configure the following parameters.
In the Environment Information section:
Parameter
Description
Image Configuration
Select from the Alibaba Cloud Images list.
Note: 1.9 is the image version. Because versions iterate rapidly, you can simply select the latest version when you deploy.
Model Settings
Mount external storage (such as OSS or NAS) for the service. Generated videos are automatically saved to the corresponding data source. Using OSS as an example, set the following parameters:
Uri: Select an OSS bucket directory. For more information about how to create a bucket and directory, see Get started with the OSS console. Make sure that your bucket is in the same region as the EAS service.
Mount Path: The destination path inside the service instance, for example /code/data-oss.
Command
After you select an image, the system automatically sets this parameter.
After you complete the model settings, set the --data-dir mount directory in Command and make sure that it matches the mount path in the model settings. For image version 1.9, --data-dir is preconfigured; you only need to update it to the mount path from the model settings. For example: python main.py --listen --port 8000 --data-dir /code/data-oss.
In the Resource Information section, set the resource specifications.
Parameter
Description
Resource Type
Select Public Resources.
Deployment Resources
Select a resource type. Because video generation requires more GPU memory than image generation, we recommend a type with no less than 48 GB of GPU memory per card, such as the GU60 types (for example, ml.gu8is.c16m128.1-gu60).
In the Network Information section, set a virtual private cloud (VPC) with Internet access. For more information, see Configure Internet access for VPC.
Note: The EAS service does not have Internet access by default. However, because the I2V feature needs to download images from the Internet, a VPC with Internet access is required.
After you configure the parameters, click Deploy.
Use WebUI
After the service is deployed, you can build a workflow on the WebUI page. Perform the following steps:
Click View Web App in the Service Type column.
In the upper-left corner, choose the option to open a workflow, select a JSON workflow file, and open it.
PAI has integrated various acceleration algorithms in ComfyUI. Here are some workflows with good speed and performance:
I2V (upload image directly): wanvideo_720P_I2V.json
After the workflow is loaded, you can click upload in the Load Image section to upload or update image files.

I2V (load image URL): wanvideo_720P_I2V_URL.json
After the workflow is loaded, set the image URL in the Load Image By URL section to update images.

Click the Run button at the bottom of the page to generate a video.
After about 20 minutes of execution, the result will be displayed in the Video Combine section on the right.

Synchronous API call
The standard service only supports synchronous calling, which means directly requesting the inference instance without using the EAS queue service. Perform the following steps:
Export the workflow JSON file.
The API request body depends on the workflow configuration. You need to first set up the workflow on the WebUI page of the service. Then, use the export option in the upper-left corner to get the JSON file corresponding to the workflow.

View the endpoint information.
In the service list, click the service name, and then click View Endpoint Information in the Basic Information section.
In the Invocation Method panel, obtain the endpoint and token.
Note: To use the Internet endpoint, the client must have Internet access.
To use the VPC endpoint, the client must be in the same VPC as the service.

Call the service.
The following is a complete code sample for calling the service and obtaining results. You can obtain the full path of the output video from data[prompt_id]["outputs"]["fullpath"] in the final result.
The sample code gets the endpoint and token from environment variables. Run the following commands in the terminal to add temporary environment variables (effective only in the current session):
# Set your endpoint and token.
export SERVICE_URL="http://test****.115770327099****.cn-beijing.pai-eas.aliyuncs.com/"
export TOKEN="MzJlMDNjMmU3YzQ0ZDJ*****************TMxZA=="
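Since the full sample is not reproduced here, the following is a minimal sketch of the synchronous call in Python. It reads SERVICE_URL and TOKEN from the environment variables set above. The /prompt path and the "prompt" wrapper key mirror ComfyUI's native API and are assumptions; confirm them against your exported workflow and the service documentation.

```python
import json
import os
import urllib.request


def build_request_body(workflow: dict) -> bytes:
    # Wrap the exported workflow JSON; the "prompt" wrapper key mirrors
    # ComfyUI's native API and is an assumption here.
    return json.dumps({"prompt": workflow}).encode("utf-8")


def call_service(workflow: dict) -> dict:
    # SERVICE_URL and TOKEN come from the environment variables set above.
    # The /prompt path is an assumption; verify it for your service.
    url = os.environ["SERVICE_URL"].rstrip("/") + "/prompt"
    req = urllib.request.Request(
        url,
        data=build_request_body(workflow),
        headers={
            "Authorization": os.environ["TOKEN"],  # EAS service token
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__" and "SERVICE_URL" in os.environ:
    # Load the workflow JSON exported from the WebUI and submit it.
    with open("wanvideo_720P_I2V.json") as f:
        result = call_service(json.load(f))
    print(result)
```

Because the standard service is synchronous, the request blocks until generation finishes, so set a generous client-side timeout for long video runs.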
Deploy ComfyUI API service (high concurrency scenarios)
Deploy the service
If you have already created a standard service and want to change it to the API version, we recommend that you delete the original service and create a new API version instead.
Use the custom deployment method to deploy the ComfyUI API service. Perform the following steps:
Log on to the PAI console. Select a region on the top of the page. Then, select the desired workspace and click Elastic Algorithm Service (EAS).
Click Deploy Service. In the Custom Model Deployment section, click Custom Deployment.
On the Custom Deployment page, configure the following parameters.
In the Environment Information section:
Parameter
Description
Image Configuration
Select from the Alibaba Cloud Images list.
Note: 1.9 is the image version. Because versions iterate rapidly, you can simply select the latest version when you deploy.
Model Settings
Mount external storage (such as OSS or NAS) for the service. Generated videos are automatically saved to the corresponding data source. Using OSS as an example, set the following parameters:
Uri: Select an OSS bucket directory. For more information about how to create a bucket and directory, see Get started with the OSS console. Make sure that your bucket is in the same region as the EAS service.
Mount Path: The destination path inside the service instance, for example /code/data-oss.
Command
After you select an image, the system automatically sets this parameter.
After you complete the model settings, set the --data-dir mount directory in Command and make sure that it matches the mount path in the model settings. For image version 1.9, --data-dir is preconfigured; you only need to update it to the mount path from the model settings. For example: python main.py --listen --port 8000 --api --data-dir /code/data-oss.
In the Resource Information section, select the resource specifications.
Parameter
Description
Resource Type
Select Public Resources.
Deployment Resources
Select a resource type. Because video generation requires more GPU memory than image generation, we recommend a type with no less than 48 GB of GPU memory per card, such as the GU60 types (for example, ml.gu8is.c16m128.1-gu60).
In the Asynchronous Queue section, set Maximum Data for A Single Input Request and Maximum Data for A Single Output. The standard value is 1024 KB.
Note: Set the data size appropriately to avoid request rejection, sample loss, response failure, or queue blocking caused by exceeding the limit.
In the Network information section, set a VPC with Internet access, including the VPC, vSwitch, and Security Group parameters. For more information, see Configure Internet access for VPC.
Note: The EAS service does not have Internet access by default. However, because the I2V feature needs to download images from the Internet, a VPC with Internet access is required.
After you configure the parameters, click Deploy.
Asynchronous API call
The API service supports only asynchronous calling over the api_prompt path. Asynchronous calling means using the EAS queue service: requests are sent to the input queue, and results are obtained through subscription. Perform the following steps:
View the endpoint information.
Click Invocation Information in the Service Type column of the service. In the Invocation Method panel, view the endpoint and token on the Asynchronous Invocation tab.
Note: To use the Internet endpoint, the client must have Internet access.
To use the VPC endpoint, the client must be in the same VPC as the service.

Run the following command in the terminal to install the eas_prediction SDK.
pip install eas_prediction --user
Call the service.
The following is a complete code sample. You can obtain the full path of the output video from json.loads(x.data.decode('utf-8'))[1]["data"]["output"]["gifs"][0]["fullpath"] in the final result.
The sample code gets the endpoint and token from environment variables. Run the following commands in the terminal to add temporary environment variables (effective only in the current session):
# Set your endpoint and token.
export SERVICE_URL="http://test****.115770327099****.cn-beijing.pai-eas.aliyuncs.com/"
export TOKEN="MzJlMDNjMmU3YzQ0ZDJ*****************TMxZA=="
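As a hedged sketch of the asynchronous flow with the eas_prediction SDK: the QueueClient method names and queue names below follow common SDK usage and are assumptions to verify against the eas_prediction documentation; only the result-parsing helper follows the exact path given above.

```python
import json
import os


def extract_fullpath(raw: bytes) -> str:
    # Pull the video path out of a sink-queue result payload, following the
    # structure noted above: [1]["data"]["output"]["gifs"][0]["fullpath"]
    return json.loads(raw.decode("utf-8"))[1]["data"]["output"]["gifs"][0]["fullpath"]


def run(workflow: dict) -> str:
    # Sketch of the asynchronous flow; method and queue names are assumptions.
    from eas_prediction import QueueClient

    endpoint = os.environ["SERVICE_URL"]
    token = os.environ["TOKEN"]

    input_queue = QueueClient(endpoint, "")     # input queue of the service (name assumed)
    input_queue.set_token(token)
    input_queue.init()

    sink_queue = QueueClient(endpoint, "sink")  # output (sink) queue (name assumed)
    sink_queue.set_token(token)
    sink_queue.init()

    # Send the workflow JSON to the input queue.
    index, request_id = input_queue.put(json.dumps({"prompt": workflow}).encode("utf-8"))

    # Subscribe to the sink queue and wait for the result, then commit it.
    watcher = sink_queue.watch(0, 5, auto_commit=False)
    x = watcher.get()
    sink_queue.commit(x.index)
    return extract_fullpath(x.data)
```

With auto_commit disabled, commit each result after processing it so the queue does not redeliver the same frame.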
Appendix: More examples
The usage process for T2V is the same as that of I2V. You can refer to the steps above to deploy and call the service. However, T2V does not require Internet access, so you do not need to configure a VPC when deploying the EAS service.
You can experience the WebUI calling process through the sample workflow file (wanvideo_720P_T2V.json). Load the workflow on the WebUI page, then enter the prompt in the WanVideo TextEncode input box. Click Run to start.
To call the service through the API, refer to the following code samples:
The sample code gets the endpoint and token from environment variables. Run the following commands in the terminal to add temporary environment variables (only effective in the current session):
# Set your endpoint and token.
export SERVICE_URL="http://test****.115770327099****.cn-beijing.pai-eas.aliyuncs.com/"
export TOKEN="MzJlMDNjMmU3YzQ0ZDJ*****************TMxZA=="
Synchronous API call
Asynchronous API call
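For T2V, the request-side difference is the prompt text inside the workflow JSON. As a small sketch, you can set the prompt programmatically before sending the workflow; the "WanVideoTextEncode" class_type and "positive_prompt" input key are assumed names based on the WanVideo TextEncode node mentioned above, so verify them against your exported wanvideo_720P_T2V.json.

```python
def set_prompt(workflow: dict, text: str) -> dict:
    # Overwrite the positive prompt of each WanVideo TextEncode node.
    # "WanVideoTextEncode" and "positive_prompt" are assumed key names;
    # check your exported wanvideo_720P_T2V.json for the exact ones.
    for node in workflow.values():
        if isinstance(node, dict) and node.get("class_type") == "WanVideoTextEncode":
            node["inputs"]["positive_prompt"] = text
    return workflow
```

The modified dictionary can then be submitted with either the synchronous or the asynchronous calling pattern described above.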
References
To learn more about the deployment and functionality of ComfyUI, such as loading custom models, integrating plugins, and FAQs, see AI video generation - ComfyUI deployment.