Container Service for Kubernetes: Create a workflow

Last Updated: Jan 21, 2025

Workflow clusters are developed based on open source Argo Workflows and can be used for Continuous Integration and Continuous Delivery (CI/CD) pipelines, data processing, machine learning, and simulation computing. This topic provides an example on how to use the Alibaba Cloud Argo CLI to create a workflow and configure CPU and memory resources for it.

Prerequisites

A workflow cluster is created, and the kubeconfig file of the cluster is configured on your machine.

Usage notes

Workflow service accounts

Workflows allow you to specify service accounts to access other Kubernetes resources. You can create a service account, and the workflow cluster automatically grants permissions to this service account. If the service account has insufficient permissions, join DingTalk group 35688562 and request technical support.
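
For example, you can create a service account with kubectl and then submit a workflow that runs with it. The following is a minimal sketch; the service account name workflow-sa and the file name my-workflow.yaml are placeholders, and the --serviceaccount flag is the standard open source Argo CLI flag.

    kubectl create serviceaccount workflow-sa            # create a service account in the current namespace
    argo submit my-workflow.yaml --serviceaccount workflow-sa   # run all pods in the workflow with this service account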

Alibaba Cloud Argo CLI

The Alibaba Cloud Argo CLI is completely compatible with the open source Argo CLI, and has enhanced metrics and logging capabilities. You can use the Alibaba Cloud Argo CLI to query the CPU usage, memory usage, and operating cost of a workflow. Additionally, you can use it to obtain the logs of pods that have been deleted from a workflow.
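
Because the CLI is fully compatible with the open source Argo CLI, the standard log commands apply. The following is a minimal sketch; the workflow name is a placeholder.

    argo logs hello-world-xxxxx            # print the logs of all pods in the workflow
    argo logs hello-world-xxxxx --follow   # stream logs while the workflow is running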

Perform the following steps:

    Note

    In the following example, Linux is used. Download the Alibaba Cloud Argo CLI package that matches the operating system that is used:

  1. Run the following command to download the Alibaba Cloud Argo CLI:

    wget https://ack-one.oss-cn-hangzhou.aliyuncs.com/cli/v3.4.12/argo-cli-aliyun-linux
  2. Run the following command to make argo-cli-aliyun-linux executable:

    chmod +x argo-cli-aliyun-linux
  3. Move the executable file to a directory that is included in the PATH environment variable. Example: /usr/local/bin/.

    mv argo-cli-aliyun-linux /usr/local/bin/argo
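  4. Optionally, run the following command to verify that the CLI is installed. This is a minimal check; the exact output depends on the package version that you downloaded.

    argo version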

Create a workflow

You can use the Alibaba Cloud Argo CLI or kubectl to create a workflow.

Use the Alibaba Cloud Argo CLI to manage a workflow

  1. Create a file named helloworld-workflow.yaml and add the following content to the file:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow                  # new type of k8s spec.
    metadata:
      generateName: hello-world-    # name of the workflow spec.
    spec:
      entrypoint: whalesay          # invoke the whalesay template.
      templates:
        - name: whalesay              # name of the template.
          container:
            image: docker/whalesay
            command: [ cowsay ]
            args: [ "hello world" ]
  2. Run the following command to submit the workflow:

    argo submit helloworld-workflow.yaml
  3. Query the status of the workflow.

    1. Run the following command to query a list of workflows:

      argo list

      Expected output:

      NAME                STATUS      AGE   DURATION   PRIORITY
      hello-world-lgdpp   Succeeded   2m    37s        0
    2. Run the following command to query the status of the workflow:

      argo get hello-world-lgdpp

      Expected output:

      Name:                hello-world-lgdpp
      Namespace:           default
      ServiceAccount:      unset (will run with the default ServiceAccount)
      Status:              Succeeded
      Conditions:
       PodRunning          False
       Completed           True
      ....
      Duration:            37 seconds
      Progress:            1/1
      ResourcesDuration:   17s*(1 cpu),17s*(100Mi memory)
      
      STEP                  TEMPLATE  PODNAME            DURATION  MESSAGE
       ✔ hello-world-lgdpp  whalesay  hello-world-lgdpp  27s
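
Optionally, you can submit and monitor a workflow in a single command, or delete it after it completes. The following is a minimal sketch that reuses the file and workflow names from the example above; the flags are standard open source Argo CLI flags.

    argo submit helloworld-workflow.yaml --watch   # submit the workflow and watch it until it completes
    argo delete hello-world-lgdpp                  # delete the completed workflow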

Use kubectl to manage a workflow

After you configure the kubeconfig file, you can use kubectl to manage workflow clusters. However, unlike in regular Kubernetes clusters, some kubectl operations are limited. The following list describes the permissions that you have on different resources when you use kubectl.

  • priorityclasses: The permissions to manage PriorityClasses and to customize PriorityClasses in workflows to control pod scheduling based on pod priorities.

  • namespaces: The permissions to create namespaces and full permissions on all resources in self-managed namespaces. However, you cannot access resources in system namespaces, which are namespaces whose names start with kube-.

    Important: The namespace named after the cluster ID is the system namespace of Argo. You can manage this namespace. For example, you can modify Argo workflow settings in workflow-controller-configmap.

  • persistentvolumes: Full permissions.

  • persistentvolumeclaims: Full permissions on resources in self-managed namespaces.

  • secrets, configmaps, and serviceaccounts: Full permissions on resources in self-managed namespaces.

  • pods: Read permissions on resources in self-managed namespaces.

  • pods/log and events: Read permissions on resources in self-managed namespaces.

  • pods/exec: The permissions to create resources in self-managed namespaces.

  • Argo resources (workflows, workflowtasksets, workflowtemplates, and cronworkflows): Full permissions on resources in self-managed namespaces.
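
For example, according to the permissions above, you can create a self-managed namespace and manage Argo resources in it by using kubectl. The following is a minimal sketch; the namespace, file, and pod names are hypothetical.

    kubectl create namespace my-workflows                       # create a self-managed namespace
    kubectl apply -f my-workflowtemplate.yaml -n my-workflows   # manage Argo resources in the namespace
    kubectl get workflows -n my-workflows                       # list workflows in the namespace
    kubectl logs my-workflow-pod-xxxxx -n my-workflows          # read pod logs in the namespace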

Configure CPU and memory requests for the containers in the workflow

To optimize costs, workflow clusters preferentially use preemptible elastic container instances; pay-as-you-go elastic container instances are also used. The protection period of a preemptible elastic container instance is 1 hour. Make sure that each step in your workflow can be completed within 1 hour.

Preemptible elastic container instances support only configurations of 2 vCPUs or higher.

  • If no resource requests are configured for a container, or the configured requests are less than 2 vCPUs and 4 GiB of memory, the system uses 2 vCPUs and 4 GiB of memory by default.

  • If the resource requests exceed 2 vCPUs and 4 GiB of memory, the system automatically matches an elastic container instance that meets the requested specifications.

    The following table describes the supported CPU and memory requests; a sample configuration follows the table. We recommend that you do not set the CPU request to more than 8 vCPUs.

    vCPU    Memory (GiB)
    2       4, 8, 16
    4       4, 8, 16, 32
    8       4, 8, 16, 32, 64
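
For example, to have a step run on an elastic container instance with 4 vCPUs and 8 GiB of memory, set the resource requests of the container accordingly. The following is a minimal sketch based on the hello-world example; the generateName is hypothetical and the values follow the table above.

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: resource-demo-  # hypothetical name of the workflow spec.
    spec:
      entrypoint: whalesay
      templates:
        - name: whalesay
          container:
            image: docker/whalesay
            command: [ cowsay ]
            args: [ "hello world" ]
            resources:
              requests:
                cpu: 4              # matched to an instance with 4 vCPUs
                memory: 8Gi         # matched to an instance with 8 GiB of memory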

Force a pay-as-you-go elastic container instance to run the workflow

If you do not want to run key tasks on a preemptible elastic container instance in cost-prioritized mode, you can force the workflow to run on a pay-as-you-go elastic container instance.

Configure the requests and limits parameters in the Container section, as shown in the following sample code:

apiVersion: argoproj.io/v1alpha1
kind: Workflow                  # new type of k8s spec.
metadata:
  generateName: hello-world-    # name of the workflow spec.
spec:
  entrypoint: whalesay         # invoke the whalesay template.
  templates:
    - name: whalesay              # name of the template.
      container:
        image: docker/whalesay
        command: [ cowsay ]
        args: [ "hello world" ]
        resources:
          requests:
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 0.5
            memory: 1Gi
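
Assuming that the manifest is saved to a file, you can submit it in the same way as the earlier example. The file name below is hypothetical.

    argo submit payg-workflow.yaml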