
Container Service for Kubernetes:Overview

Last Updated:Jun 02, 2023

Kubernetes clusters for distributed Argo workflows (workflow clusters) are deployed on top of a serverless architecture. This type of cluster runs Argo workflows on elastic container instances and optimizes cluster parameters to schedule large-scale workflows with efficiency and elasticity. This topic describes the architecture, advantages, and network design of workflow clusters.


Advantages

Workflow clusters are developed based on open source Argo Workflows and are fully compatible with the open source workflow specification. If you run Argo workflows in existing Container Service for Kubernetes (ACK) clusters or self-managed Kubernetes clusters, you can seamlessly migrate them to workflow clusters without modifying the workflows.

Workflow clusters simplify workflow orchestration and run each workflow step in its own container. This helps you build an efficient continuous integration/continuous deployment (CI/CD) pipeline and quickly launch large numbers of containers for compute-intensive jobs, such as machine learning and data processing jobs. Workflow clusters provide the following advantages:
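As context for how each workflow step maps to a container, the following is a minimal sketch of an Argo Workflow manifest that runs a single step. The names (`hello-`, `main`) and the image are illustrative, not specific to ACK.

```yaml
# Minimal Argo Workflow: one step that runs in its own container.
# Names and image are illustrative; submit with `argo submit` or kubectl.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-      # prefix for the generated workflow name
spec:
  entrypoint: main          # template that runs first
  templates:
  - name: main
    container:
      image: alpine:3.18
      command: [echo]
      args: ["hello from a workflow step"]
```

Because the manifest follows the open source `argoproj.io/v1alpha1` specification, the same definition runs unchanged in a workflow cluster.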

  • Workflow clusters are developed based on open source Argo Workflows. You can seamlessly migrate Kubernetes clusters that run Argo workflows to workflow clusters without modifying the workflows.

  • Workflow clusters support fully automated O&M, which allows you to focus on workflow development.

  • Workflow clusters provide high elasticity and auto scaling capabilities to reduce the costs of compute resources.

  • Workflow clusters provide reliable scheduling and multi-zone load balancing.

  • Workflow clusters use control planes that are optimized for performance, efficiency, stability, and observability.

Architecture

Workflow clusters use open source Argo Workflows as the workflow engine and run workloads on serverless infrastructure in Kubernetes clusters.

The following figure shows the architecture of workflow clusters.

(Figure: architecture of workflow clusters)

Network design

  • Workflow clusters are available in the following regions: China (Beijing), China (Hangzhou), China (Shanghai), China (Shenzhen), and China (Zhangjiakou). To use workflow clusters in other regions, join the DingTalk group 35688562 for technical support.

  • Create a virtual private cloud (VPC) or select an existing VPC.

  • Create vSwitches or select existing vSwitches.

    • Make sure that the CIDR blocks of the vSwitches that you use provide sufficient IP addresses for your Argo workflows. A workflow may create a large number of pods, and each pod requests an IP address from a vSwitch. For example, a vSwitch with a /24 CIDR block contains only 256 IP addresses in total.

    • Create a vSwitch in each zone of the region that you select, and specify all of the vSwitch IDs in the input parameters when you create the workflow engine. The workflow engine then creates elastic container instances in the zones that have sufficient inventory, which allows large numbers of workflows to run. If elastic container instances are out of stock in every zone of the region, workflows cannot run because no instances can be created.
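To illustrate why vSwitch sizing matters, the following sketch uses the open source `withItems` loop to fan one step out into several parallel pods, each of which requests its own IP address from a vSwitch. The manifest is illustrative; the names and image are not specific to ACK.

```yaml
# Illustrative fan-out workflow: `withItems` expands one step into
# multiple parallel pods, each consuming an IP address from a vSwitch.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: fanout-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: process
        template: work
        arguments:
          parameters:
          - name: part
            value: "{{item}}"            # current loop item
        withItems: ["part-1", "part-2", "part-3"]  # one pod per item
  - name: work
    inputs:
      parameters:
      - name: part
    container:
      image: alpine:3.18
      command: [echo]
      args: ["processing {{inputs.parameters.part}}"]
```

A list with three items launches three pods in parallel; a list with thousands of items consumes thousands of IP addresses at once, which is why multiple generously sized vSwitches across zones are recommended.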