This tutorial walks you through deploying ComfyUI on an Alibaba Cloud Container Service for Kubernetes (ACK) managed Pro cluster with GPU-accelerated nodes. By the end, you will have a running ComfyUI instance that generates images from text prompts using the sdxl-turbo model.
What you will build:

- A persistent model storage layer backed by Object Storage Service (OSS)
- A ComfyUI deployment on ACK via Helm
- A working image generation pipeline accessible through your browser
Prerequisites
Before you begin, make sure you have:

- An ACK managed Pro cluster running Kubernetes 1.22 or later, with GPU-accelerated nodes that each have at least 16 GB of GPU memory
- NVIDIA driver version 535 or later installed on the GPU nodes (this tutorial uses 550.144.03 via the `ack.aliyun.com/nvidia-driver-version` node pool label)
- `kubectl` configured to connect to your cluster
- Git and Git Large File Storage (LFS) installed on your local machine
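The local-tool prerequisites can be sanity-checked with a short shell loop. This is an optional sketch: it only reports whether each command is on your PATH, not whether the versions meet the requirements above.

```shell
# Preflight check for the local tools listed above.
# Cluster-side requirements (Kubernetes version, GPU memory, driver label)
# still need kubectl access to verify.
for tool in git git-lfs kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING - install it before continuing"
  fi
done
```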
Step 1: Download the model
This tutorial uses sdxl-turbo, a model that generates images from text prompts in a single inference step.
1. Install Git and Git LFS.

   macOS

   Download Git from the official Git website, or run:

   ```
   brew install git
   brew install git-lfs
   ```

   Windows

   Download and install Git from the official Git website. Git LFS is included in Git for Windows -- download the latest version.

   Linux (Red Hat-based)

   ```
   yum install git
   yum install git-lfs
   ```

   For other distributions, see the official Git website.
2. Clone the model repository and download the model files.

   ```
   GIT_LFS_SKIP_SMUDGE=1 git clone https://www.modelscope.cn/AI-ModelScope/sdxl-turbo.git
   cd sdxl-turbo
   git lfs pull
   ```

   The first command clones the repository metadata without downloading large files. `git lfs pull` then fetches the actual model weights.
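If `git lfs pull` is interrupted, some files can be left as small LFS pointer stubs instead of real weights. A helper like the one below (an illustrative sketch, not a required tutorial step) detects leftover pointers, which always begin with the line `version https://git-lfs.github.com/spec/v1`:

```shell
# Returns success (0) if the file is still a Git LFS pointer stub,
# i.e. a tiny text file whose first line names the LFS spec.
is_lfs_pointer() {
  head -c 40 "$1" 2>/dev/null | grep -q '^version https://git-lfs'
}

# Example usage inside the cloned repo: list small files still left as pointers.
# find . -path ./.git -prune -o -type f -size -200c -print | while read -r f; do
#   is_lfs_pointer "$f" && echo "still a pointer: $f"
# done
```

A fully pulled checkpoint is gigabytes in size, so any file this flags almost certainly still needs `git lfs pull`.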
Step 2: Upload the model to OSS
Store the model files in OSS so your ACK cluster can mount them as persistent storage.
1. Install ossutil, a command-line tool for managing OSS resources.

2. Create an OSS bucket. Replace `<Your-Bucket-Name>` with a globally unique name.

   ```
   ossutil mb oss://<Your-Bucket-Name>
   ```

3. Create a directory and upload the model files.

   ```
   ossutil mkdir oss://<Your-Bucket-Name>/sdxl-turbo
   ossutil cp -r ./sdxl-turbo oss://<Your-Bucket-Name>/sdxl-turbo
   ```
Step 3: Create persistent storage in ACK
Create a Persistent Volume (PV) and Persistent Volume Claim (PVC) so the ComfyUI pod can access the model files from OSS.
Create a PV
1. Log on to the ACK console and go to the cluster details page.

2. In the left navigation pane, choose .

3. Click Create and configure the following parameters:

   | Parameter | Value |
   | --- | --- |
   | PV Type | OSS |
   | Volume Name | `sdxl-turbo-model` |
   | Capacity | 20Gi |
   | Access Mode | ReadWriteMany |
   | Access Certificate | Create Secret |
   | Optional Parameters | `-o umask=022 -o max_stat_cache_size=0 -o allow_other` |
   | Bucket ID | Select the bucket where you uploaded the model |
   | OSS Path | `/sdxl-turbo` |
   | Endpoint | Public Endpoint |

4. Click Create.
Create a PVC
1. In the left navigation pane, choose .

2. Click Create and configure the following parameters:

   | Parameter | Value |
   | --- | --- |
   | PVC Type | OSS |
   | Name | `sdxl-turbo-model` |
   | Allocation Mode | Existing Volumes |
   | Existing Volumes | Select the `sdxl-turbo-model` PV |
   | Capacity | 20Gi |

3. Click Create. The `sdxl-turbo-model` PVC appears in the list with a Bound status.
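If you prefer to manage storage declaratively, the two console procedures above can also be expressed as manifests. The following is a sketch only, not the console's exact output: it assumes the ACK OSS CSI driver (`ossplugin.csi.alibabacloud.com`) and a pre-created Secret, hypothetically named `oss-secret`, holding the AccessKey pair that the console's Create Secret step would produce. Verify the field names against your cluster's CSI plugin version before applying.

```shell
# Sketch: declarative equivalent of the console-created PV and PVC.
# Assumptions: the OSS CSI driver is installed, and `oss-secret`
# (hypothetical name) holds the AccessKey credentials.
# Replace the bucket and endpoint placeholders with your own values.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sdxl-turbo-model
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: ossplugin.csi.alibabacloud.com
    volumeHandle: sdxl-turbo-model
    nodePublishSecretRef:
      name: oss-secret
      namespace: default
    volumeAttributes:
      bucket: "<Your-Bucket-Name>"
      url: "<Your-Bucket-Endpoint>"   # the bucket's public endpoint
      path: "/sdxl-turbo"
      otherOpts: "-o umask=022 -o max_stat_cache_size=0 -o allow_other"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sdxl-turbo-model
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
  volumeName: sdxl-turbo-model   # bind directly to the PV above
EOF
```

After applying, `kubectl get pvc sdxl-turbo-model` should report a Bound status, matching the console flow.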
Step 4: Deploy ComfyUI with Helm
1. In the ACK console, click Clusters in the left navigation pane, then click the name of your cluster.

2. In the left navigation pane, choose .

3. Click Deploy. Find the comfyui component and click Next.

4. In the Parameters step, locate the `persistence` section and update it to mount your model PVC. Leave all other settings at their defaults.

   ```
   persistence:
     - mountPath: "/app/ComfyUI/models/checkpoints"  # Mount path for the model
       annotations:
         helm.sh/resource-policy: keep
       persistentVolumeClaim:
         existingClaim: "sdxl-turbo-model"           # Use the PVC created earlier
       accessModes: ReadWriteMany
       storageClass: oss                             # Specify the storage type as OSS
       size: 20Gi
       subPath: ""
   ```

   Three fields to update:

   | Field | Value | Purpose |
   | --- | --- | --- |
   | `mountPath` | `"/app/ComfyUI/models/checkpoints"` | Tells ComfyUI where to find the model |
   | `existingClaim` | `"sdxl-turbo-model"` | References the PVC you created |
   | `storageClass` | `oss` | Matches the OSS-backed PV |
Step 5: Access ComfyUI and generate images
1. Forward the ComfyUI service to a local port using `kubectl`.

   ```
   kubectl port-forward svc/ack-comfyui 8080:8188
   ```

   Expected output:

   ```
   Forwarding from 127.0.0.1:8080 -> 8188
   Forwarding from [::1]:8080 -> 8188
   ```

   Note: Keep this terminal window open to maintain the port forwarding connection. Make sure port 8080 is available on your local machine.

2. Open a browser and go to `http://localhost:8080`.

3. Download the sdxl-turbo example workflow. In the ComfyUI interface, click Load and select the downloaded workflow file.

4. In the CLIP Text Encode (Prompt) text box, enter a prompt. For example:

   ```
   beautiful landscape scenery glass bottle with a galaxy inside cute fennec fox snow HDR sunset
   ```

5. Enable Extra Options and Auto Queue in the settings panel, then click Queue Prompt to generate an image.
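The UI steps above can also be driven programmatically: ComfyUI exposes an HTTP API where POSTing a JSON body of the form `{"prompt": <node graph>}` to `/prompt` queues a workflow. The sketch below only assembles and locally validates a minimal payload; the node graph and the checkpoint filename are assumptions (export your loaded workflow with Save (API Format) for the exact JSON), and the final `curl` is left commented out because it needs the port-forward running.

```shell
# Sketch: a minimal text-to-image node graph for ComfyUI's /prompt API.
# Node inputs and the checkpoint filename are assumptions -- export your
# own workflow in API format for authoritative values.
PAYLOAD='{
  "prompt": {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd_xl_turbo_1.0_fp16.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "cute fennec fox, snow, HDR, sunset"}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": ""}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"seed": 0, "steps": 1, "cfg": 1.0,
                     "sampler_name": "euler_ancestral", "scheduler": "normal",
                     "denoise": 1.0, "model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0]}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "sdxl_turbo"}}
  }
}'

# Validate the JSON locally before sending it anywhere.
echo "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload OK"

# With the kubectl port-forward running, queue the workflow:
# curl -s -X POST http://localhost:8080/prompt \
#      -H 'Content-Type: application/json' -d "$PAYLOAD"
```

Note that sdxl-turbo runs with a single sampling step and `cfg` near 1.0, which is what makes it suitable for near-real-time generation.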

Troubleshooting
Pod stuck in Pending state
The GPU node may not have enough resources, or the NVIDIA driver version may not match the requirement.
Check the pod events:

```
kubectl describe pod -l app.kubernetes.io/name=comfyui
```

Look for scheduling errors related to GPU resources or node selectors. Make sure your GPU nodes have the `ack.aliyun.com/nvidia-driver-version` label set to a driver version of 535 or later.
Port forwarding fails
If `kubectl port-forward` fails with a connection error, verify that the ComfyUI pod is running:

```
kubectl get pods -l app.kubernetes.io/name=comfyui
```

The pod status should be Running. If the pod is still starting, wait for it to become ready before retrying.
Next steps
- Try different models: Download additional models to your OSS bucket and create new PV/PVC pairs to mount them.
- Scale with more GPU nodes: Add GPU node pools to your ACK cluster for parallel workloads.
- Expose ComfyUI externally: Configure an Ingress or Server Load Balancer (SLB) to make ComfyUI accessible outside your local machine. When exposing externally, configure authentication and network policies to secure access.