
Container Service for Kubernetes:Overview of Serverless Argo Workflows clusters

Last Updated: Mar 26, 2026

Serverless Argo Workflows (also called workflow clusters) runs Argo workflows on a serverless architecture powered by Alibaba Cloud Container Compute Service (ACS) or Elastic Container Instance (ECI). It handles infrastructure automatically so you can focus on building workflows—no cluster provisioning, no version management, and no capacity planning.

Console

ACK One workflow cluster console

Use cases

Argo Workflows is a Cloud Native Computing Foundation (CNCF) graduated project and the most widely used workflow engine for Kubernetes. Graduation is the CNCF's highest maturity level, reflecting broad adoption, proven security practices, and sustained production use. Argo Workflows is deployed across industries such as autonomous driving, scientific computing, quantitative finance, and digital media.


Workflow clusters are a strong fit for:

  • Batch data processing — launch thousands of parallel tasks without pre-allocating compute

  • Machine learning pipelines — run training, evaluation, and deployment steps as orchestrated pod workflows

  • Infrastructure automation — coordinate multi-step provisioning and teardown jobs reliably

  • CI/CD pipelines — build high-throughput pipelines that scale out on demand and release resources when done
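The batch-processing pattern above can be sketched with a standard open-source Argo Workflow manifest. This is a minimal illustration, not a production recipe; the image and item count are placeholders:

```yaml
# Fan out one pod per input item; the controller caps concurrency at `parallelism`.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: batch-process-
spec:
  entrypoint: fan-out
  parallelism: 50              # at most 50 pods run at once
  templates:
    - name: fan-out
      steps:
        - - name: process
            template: process-item
            arguments:
              parameters:
                - name: item
                  value: "{{item}}"
            withSequence:
              count: "1000"    # launch 1,000 tasks without pre-allocating compute
    - name: process-item
      inputs:
        parameters:
          - name: item
      container:
        image: alpine:3.19     # illustrative image
        command: [sh, -c]
        args: ["echo processing item {{inputs.parameters.item}}"]
```

Because resources are serverless, the 1,000 task pods consume compute only while they run and are released on completion.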

Argo Workflows stands out for batch task orchestration because of three core properties:

  • Cloud-native — designed for Kubernetes from the ground up; each task runs as its own pod

  • Lightweight and scalable — no VM overhead; launches thousands of tasks in parallel with elastic scaling

  • Powerful orchestration — supports regular jobs, Spark jobs, Ray jobs, and Tensor jobs in a single workflow
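The orchestration property above can be illustrated with a small DAG using the standard open-source Argo API; the diamond-shaped pipeline and step names here are hypothetical:

```yaml
# A diamond DAG: prepare fans out to train and evaluate, which join at report.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-diamond-
spec:
  entrypoint: pipeline
  templates:
    - name: pipeline
      dag:
        tasks:
          - name: prepare
            template: echo
            arguments: {parameters: [{name: msg, value: prepare}]}
          - name: train
            dependencies: [prepare]       # runs after prepare
            template: echo
            arguments: {parameters: [{name: msg, value: train}]}
          - name: evaluate
            dependencies: [prepare]       # runs in parallel with train
            template: echo
            arguments: {parameters: [{name: msg, value: evaluate}]}
          - name: report
            dependencies: [train, evaluate]  # waits for both branches
            template: echo
            arguments: {parameters: [{name: msg, value: report}]}
    - name: echo
      inputs:
        parameters:
          - name: msg
      container:
        image: alpine:3.19
        command: [echo, "{{inputs.parameters.msg}}"]
```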

Why workflow clusters

Workflow clusters are built on open-source Argo Workflows and are fully compatible with the open-source API. If you already run Argo workflows in Container Service for Kubernetes (ACK) clusters or any Kubernetes cluster, you can migrate to workflow clusters without modifying a single workflow definition.

Beyond compatibility, workflow clusters remove the operational work of running Argo at scale:

  • No infrastructure management — clusters are ready to use immediately. Version upgrades are handled automatically, so you can focus on workflow logic instead of cluster maintenance.

  • Elastic scaling with automatic resource release — resources scale out when workflows run and are released after completion. You pay only for compute time that workflows actually consume, without any manual intervention.

  • Multi-zone reliability — the engine automatically schedules pods across availability zones, keeping workflows running even when individual zones experience issues.

  • Optimized control plane — the Argo Workflows control plane is tuned for performance, efficiency, stability, and observability at scale, so you get consistent scheduling behavior under heavy load.

  • Enhanced OSS artifact management — upload large artifacts, stream data between workflow steps, and configure automatic artifact garbage collection (GC) without managing OSS bucket policies manually.

  • Cost optimization with spot capacity — run fault-tolerant workloads on BestEffort instances and preemptible elastic container instances to further reduce compute costs.

  • Community technical support — get expert guidance on workflow optimization to improve pipeline performance and reduce costs.
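The OSS artifact management described above builds on the open-source Argo artifact API. A minimal sketch follows, assuming a hypothetical bucket and a pre-created Kubernetes Secret holding OSS credentials:

```yaml
# Write a step's output to OSS and garbage-collect it when the workflow is deleted.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: oss-artifact-
spec:
  entrypoint: produce
  artifactGC:
    strategy: OnWorkflowDeletion       # automatic artifact garbage collection
  templates:
    - name: produce
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo hello > /tmp/result.txt"]
      outputs:
        artifacts:
          - name: result
            path: /tmp/result.txt
            oss:
              endpoint: oss-cn-hangzhou.aliyuncs.com
              bucket: my-artifact-bucket     # hypothetical bucket name
              key: workflows/result.txt
              accessKeySecret:
                name: my-oss-credentials     # hypothetical Secret name
                key: accessKey
              secretKeySecret:
                name: my-oss-credentials
                key: secretKey
```

Downstream steps can declare the same OSS object as an input artifact, which is how data streams between workflow steps.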

Architecture

Workflow clusters are serverless workflow engines built on Kubernetes and powered by open-source Argo Workflows.


Network design

Workflow clusters are available in the following regions: China (Beijing), China (Hangzhou), China (Shanghai), China (Shenzhen), China (Zhangjiakou), China (Heyuan), China (Guangzhou), China (Hong Kong), Singapore, Malaysia (Kuala Lumpur), Indonesia (Jakarta), Japan (Tokyo), Germany (Frankfurt), UK (London), and Thailand (Bangkok). To use workflow clusters in other regions, join DingTalk group 35688562 for support from product technical experts.

Before creating a workflow cluster, plan your VPC and vSwitch configuration:

  • Create a VPC or select an existing one.

  • Create vSwitches or select existing ones, following these guidelines:

    • Make sure the CIDR blocks of your vSwitches can supply enough IP addresses. Argo workflows may create many pods, and each pod consumes an IP address from its vSwitch. For example, a vSwitch with a /20 CIDR block provides roughly 4,000 usable IP addresses.

    • Create a vSwitch in each availability zone of your selected region, then specify all vSwitch IDs when creating the workflow cluster. The engine automatically schedules ACS pods or elastic container instances in zones with sufficient capacity. If every zone in the region is out of capacity, no elastic container instances can be created and workflows cannot run.