Platform for AI: Deploy an AI video generation application in EAS

Last Updated: May 09, 2024

You can use Elastic Algorithm Service (EAS) of Platform for AI (PAI) to deploy a web application for AI video generation based on ComfyUI and Stable Video Diffusion models. This allows you to quickly implement an AI-powered text-to-video solution for tasks such as generating short videos or animations for social platforms. This topic describes how to deploy an AI video generation application and the related inference service, and provides answers to frequently asked questions about the deployment.

Background information

As generative AI gains more attention, AI-powered video generation is becoming popular across industries. Many open source foundation models for video generation are available, and you can select one based on its performance and your specific scenario. ComfyUI, a node-based web UI for generative AI, splits content generation into individual nodes, which enables precise workflow customization and reproducibility. This topic describes how to deploy an AI video generation application and the related inference service in the following steps:

  1. Deploy a model service in EAS

    Deploy an AI video generation application in EAS.

  2. Use ComfyUI to perform model inference

    After you start ComfyUI, you can use the web application interface to generate images and videos from text prompts.

  3. FAQ

    This section describes how to load other open source models or custom models that you trained, and perform model inference.

Prerequisites

Deploy a model service in EAS

  1. Go to the EAS-Online Model Services page.

    1. Log on to the PAI console.

    2. In the left-side navigation pane, click Workspaces. On the Workspaces page, click the name of the workspace to which the model service that you want to manage belongs.

    3. In the left-side navigation pane, choose Model Deployment > Elastic Algorithm Service (EAS) to go to the EAS-Online Model Services page.

  2. On the EAS-Online Model Services page, click Deploy Service. In the dialog box that appears, select Custom Deployment and click OK.

  3. On the Create Service page, configure the parameters. The following table describes key parameters.

    Service Name: The name of the service. In this example, comfyui_svd_demo is used.

    Deployment Method: Select Deploy Web App by Using Image.

    Select Image: Click PAI Image, select comfyui from the image drop-down list, and select 0.1 from the image version drop-down list.

    Note: Select the latest image version that is available when you deploy the service. If versions later than 2.0 are available, select the latest one.

    Command to Run: After you select the image version, the system automatically sets this parameter to python main.py --listen --port 8000. The port number is 8000.

    Resource Group Type: Select Public Resource Group.

    Resource Configuration Mode: Select General.

    Resource Configuration: Select GPU and select an instance type from the list. We recommend the ml.gu7i.c16m60.1-gu30 instance type for cost efficiency. If that instance type is out of stock, you can select the ecs.gn6i-c16g1.4xlarge instance type instead.

  4. Click Deploy. The deployment requires approximately 5 minutes to complete.

    When the Model Status changes to Running, the service is deployed.
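The console form above can also be expressed as a JSON service configuration, which is how EAS services are described when deployed from the command line or API. The following sketch builds such a configuration in Python. The field names follow the general shape of EAS JSON configs but are illustrative assumptions; consult the EAS service configuration reference for the exact schema before using it.

```python
import json

# Illustrative sketch of the deployment settings chosen in the console,
# expressed as an EAS-style JSON service config. Field names are assumptions.
service_config = {
    "name": "comfyui_svd_demo",
    "containers": [
        {
            # PAI image "comfyui", version 0.1 (pick the latest available).
            "image": "comfyui:0.1",
            # Command that the container runs; must listen on "port" below.
            "script": "python main.py --listen --port 8000",
            "port": 8000,
        }
    ],
    "cloud": {
        "computing": {
            # Recommended GPU instance type; fall back to
            # ecs.gn6i-c16g1.4xlarge if this type is out of stock.
            "instance_type": "ml.gu7i.c16m60.1-gu30",
        }
    },
    "metadata": {"instance": 1},
}

print(json.dumps(service_config, indent=2))
```

Note that the port in the run command and the container port must match, because EAS routes web-application traffic to the port declared in the configuration.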

Use ComfyUI to perform model inference

  1. Find the service that you want to manage and click View Web App in the Service Type column.

  2. Perform model inference on the web application interface.

    Select a model for text-to-image generation and a model for image-to-video generation based on your business requirements. In this example, the default settings are used. Enter a text prompt in the CLIP Text Encode (Prompt) section, such as "Rocket takes off from the ground, fire, sky, airplane", and then click Queue Prompt. The system runs the workflow and generates the video.

  3. Right-click the generated video and select Save Image to save it to your on-premises device.

    The following section provides an example of the generated video:
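Besides the web interface, ComfyUI exposes an HTTP API: the same node graph that runs when you click Queue Prompt can be sent as JSON to the /prompt endpoint of the service. The sketch below shows the general shape of such a request using only the Python standard library. The endpoint placeholder and the workflow fragment are assumptions; export a complete graph from ComfyUI (Save (API Format) in the menu) rather than writing it by hand.

```python
import json
from urllib import request

# Service URL from the EAS console; placeholder, replace before use.
ENDPOINT = "http://<your-eas-service-endpoint>"

# Hypothetical fragment of an API-format workflow graph. A real graph
# exported from ComfyUI contains every node of the pipeline.
workflow = {
    "3": {
        "class_type": "CLIPTextEncode",
        "inputs": {
            "text": "Rocket takes off from the ground, fire, sky, airplane",
            "clip": ["4", 1],
        },
    },
    # ... remaining nodes of the exported workflow ...
}

def queue_prompt(graph: dict) -> bytes:
    """POST a node graph to ComfyUI's /prompt endpoint and return the reply."""
    body = json.dumps({"prompt": graph}).encode("utf-8")
    req = request.Request(
        f"{ENDPOINT}/prompt",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()

# queue_prompt(workflow)  # uncomment once ENDPOINT points at your service
```

This is useful if you want to queue generations from scripts or batch jobs instead of clicking through the web UI.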

FAQ

How do I mount a custom model and ComfyUI plug-in?

If you obtain a model such as SDXL, LoRA, or SVD or a third-party ComfyUI plug-in from the open source community, or you have a custom model that you trained, you can save the model or plug-in file to an Object Storage Service (OSS) bucket directory and load it by mounting that directory. To do so, perform the following steps:

  1. Log on to the OSS console. Create a bucket and an empty directory.

    Example: oss://bucket-test/data-oss/, where bucket-test is the name of the OSS bucket and data-oss is an empty directory in the bucket. For more information about how to create a bucket, see Create buckets. For more information about how to create an empty directory, see Manage directories.

  2. On the EAS-Online Model Services page, find the service that you want to update and click Update Service in the Actions column.

  3. In the Model Service Information section, configure the following parameters.

    Model Settings: Click Specify Model Settings and configure the following settings:

    • Select OSS Mount and set the OSS Path parameter to the path of the OSS directory that you created in Step 1. Example: oss://bucket-test/data-oss/.

    • Mount Path: The path in the image to which the OSS directory is mounted. Example: /code/data-oss.

    • Turn off Enable Read-only Mode so that the service can write to the mounted directory.

    Command to Run: Append --data-dir followed by the mount directory to the command. The mount directory must be the same as the Mount Path value in the Model Settings section. Example: python main.py --listen --port 8000 --data-dir /code/data-oss.

  4. Click Update to update the model service.

    PAI automatically creates a directory structure in the empty OSS directory that you specified and copies the required data to it. We recommend that you upload data to the specified directory only after the service has started.

  5. Upload your on-premises model file to the models/checkpoints/ subdirectory that is generated in Step 4. Example: oss://bucket-test/data-oss/models/checkpoints/. For more information, see the "Upload an object" section of the Get started by using the OSS console topic.

  6. Load the model and perform model inference.

    1. On the EAS-Online Model Services page, find the service that you want to manage and select Restart Service from the menu in the Actions column.

    2. After the service is started, click View Web App in the Service Type column.

    3. On the web application interface, select the model that you want to use from the Load Checkpoint drop-down list. Then, perform model inference by following the procedure described in the Use ComfyUI to perform model inference section of this topic.
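After the service restarts with --data-dir /code/data-oss, checkpoints uploaded to the models/checkpoints/ subdirectory of the mounted OSS path appear in ComfyUI's Load Checkpoint list. One way to upload from the command line is the ossutil CLI; the sketch below builds the copy command in Python. The bucket path and the local file name are placeholders, and ossutil must already be configured with your credentials.

```python
import shlex

# Placeholder OSS mount directory from the example above.
OSS_PREFIX = "oss://bucket-test/data-oss"
# Hypothetical local checkpoint file.
LOCAL_MODEL = "~/Downloads/my_finetuned_svd.safetensors"

def upload_command(local_path: str, oss_prefix: str) -> list:
    """Build an `ossutil cp` command that uploads a model checkpoint
    into the directory that ComfyUI scans for checkpoints."""
    return ["ossutil", "cp", local_path, f"{oss_prefix}/models/checkpoints/"]

cmd = upload_command(LOCAL_MODEL, OSS_PREFIX)
print(shlex.join(cmd))
```

Run the printed command in a shell where ossutil is configured, then restart the EAS service so that ComfyUI rescans the checkpoint directory.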

References

For more information about EAS, see Overview of Elastic Algorithm Service (EAS).