
Container Service for Kubernetes: Overview of Kubernetes clusters for distributed Argo workflows (workflow clusters)

Last Updated: Mar 07, 2024

Kubernetes clusters for distributed Argo workflows (workflow clusters) are deployed on top of a serverless architecture. This type of cluster runs Argo workflows on elastic container instances and optimizes cluster parameters to schedule large-scale workflows with efficiency and elasticity. This topic describes the console, benefits, architecture, and network design of workflow clusters.

Console

Distributed Cloud Container Platform for Kubernetes (ACK One) console

Benefits

Workflow clusters are built on open source Argo Workflows and remain fully compatible with the open source workflow specification. If you have Argo workflows running in existing Container Service for Kubernetes (ACK) clusters or other Kubernetes clusters, you can seamlessly upgrade those clusters to workflow clusters without modifying the workflows.

Workflow clusters make it easy to orchestrate workflows and to run each workflow step in its own container. This lets you build an efficient continuous integration/continuous deployment (CI/CD) pipeline and quickly launch large numbers of containers for compute-intensive jobs such as machine learning and data processing.
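The following is a minimal sketch of how such a workflow can be submitted to a workflow cluster, assuming you have downloaded the cluster's kubeconfig and installed the official Kubernetes Python client. The step names, container images, and namespace are illustrative assumptions only; you can equally submit the same manifest as YAML with the Argo CLI or kubectl.

```python
# Minimal sketch: submit a two-step Argo Workflow to a workflow cluster
# with the official Kubernetes Python client. Assumes the cluster's
# kubeconfig is available locally and the "default" namespace is used;
# image names and step names are illustrative only.
from kubernetes import client, config

workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "ci-example-"},
    "spec": {
        "entrypoint": "main",
        "templates": [
            {
                "name": "main",
                # Run the steps in sequence; each step runs in its own container (pod).
                "steps": [
                    [{"name": "build", "template": "build"}],
                    [{"name": "test", "template": "test"}],
                ],
            },
            {
                "name": "build",
                "container": {
                    "image": "alpine:3.19",
                    "command": ["sh", "-c"],
                    "args": ["echo building the artifact"],
                },
            },
            {
                "name": "test",
                "container": {
                    "image": "alpine:3.19",
                    "command": ["sh", "-c"],
                    "args": ["echo running the tests"],
                },
            },
        ],
    },
}

def main() -> None:
    # Load credentials from the kubeconfig of the workflow cluster.
    config.load_kube_config()
    api = client.CustomObjectsApi()
    # Workflows are custom resources in the argoproj.io/v1alpha1 API group.
    created = api.create_namespaced_custom_object(
        group="argoproj.io",
        version="v1alpha1",
        namespace="default",
        plural="workflows",
        body=workflow,
    )
    print("Submitted workflow:", created["metadata"]["name"])

if __name__ == "__main__":
    main()
```

Each template in the spec runs as its own pod, which the workflow cluster schedules onto elastic container instances.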

  • Workflow clusters are developed based on open source Argo Workflows. You can seamlessly upgrade Kubernetes clusters that run Argo workflows to workflow clusters without the need to modify the workflows.

  • Workflow clusters support fully automated O&M and allow you to focus on workflow development.

  • Workflow clusters provide high elasticity and auto scaling capabilities to reduce the costs of compute resources.

  • Workflow clusters provide highly reliable scheduling and multi-zone load balancing.

  • Workflow clusters use control planes whose performance, efficiency, stability, and observability are optimized.

Architecture

Workflow clusters use open source Argo Workflows as the workflow engine to run serverless workloads in Kubernetes clusters.


Network design

  • Workflow clusters are available in the following regions: China (Beijing), China (Hangzhou), China (Shanghai), China (Shenzhen), and China (Zhangjiakou). To use workflow clusters in other regions, join the DingTalk group 35688562 for technical support.

  • Create a virtual private cloud (VPC) or select an existing VPC.

  • Create vSwitches or select existing vSwitches.

    • Make sure that the CIDR blocks of the vSwitches that you use provide sufficient IP addresses for Argo workflows. A workflow may create a large number of pods, each of which requests an IP address from the vSwitches that you use. For a rough capacity estimate, see the sketch after this list.

    • Create a vSwitch in each zone of the region that you select, and specify multiple vSwitch IDs when you create the workflow cluster. The workflow cluster then automatically creates elastic container instances in the zones that have sufficient stock, which allows it to run a large number of workflows. If elastic container instances are out of stock in every zone of the region, workflows cannot run because no instances can be created.
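As a rough illustration of the IP address sizing described above, the following sketch sums the addresses provided by a set of vSwitch CIDR blocks (one per zone) and compares the total with an expected peak number of workflow pods. The CIDR blocks, the per-vSwitch reserved-address count, and the pod count are illustrative assumptions; check the actual number of reserved addresses for your VPC and vSwitches.

```python
# Rough capacity check: do the chosen vSwitch CIDR blocks provide enough
# pod IP addresses for the peak number of concurrently running workflow pods?
# The CIDR blocks, reserved-address count, and pod count below are
# illustrative assumptions, not values from the product documentation.
import ipaddress

# One vSwitch per zone of the selected region (example CIDR blocks).
vswitch_cidrs = {
    "zone-a": "192.168.0.0/20",
    "zone-b": "192.168.16.0/20",
    "zone-c": "192.168.32.0/20",
}

# Assumed number of addresses reserved by the platform in each vSwitch.
RESERVED_PER_VSWITCH = 4

# Expected peak number of pods created by concurrently running workflows.
peak_workflow_pods = 5000

total_usable = 0
for zone, cidr in vswitch_cidrs.items():
    usable = ipaddress.ip_network(cidr).num_addresses - RESERVED_PER_VSWITCH
    total_usable += usable
    print(f"{zone}: {cidr} -> about {usable} usable IP addresses")

print(f"Total usable IP addresses: {total_usable}")
if total_usable < peak_workflow_pods:
    print("Warning: the vSwitches may not provide enough IPs for peak pod load.")
else:
    print("The vSwitches should cover the expected peak pod count.")
```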