
Container Service for Kubernetes:Create a workflow

Last Updated: Mar 26, 2026

Workflow clusters are built on open-source Argo Workflows and support CI/CD pipelines, data processing, machine learning, and simulation computing. This topic shows you how to install the Alibaba Cloud Argo CLI, create and submit a hello-world workflow, check its status, and configure CPU and memory resources.

Prerequisites

Before you begin, ensure that a workflow cluster is created and that you have obtained its kubeconfig file to connect to the cluster.

Install the Alibaba Cloud Argo CLI

The Alibaba Cloud Argo CLI is fully compatible with the open-source Argo CLI and adds enhanced metrics and logging capabilities. Use it to:

  • Query the CPU usage, memory usage, and operating cost of a workflow.

  • Retrieve logs from pods that have already been deleted.

Download the binary for your operating system:

The following steps use Linux as an example.

  1. Download the binary.

    wget https://ack-one.oss-cn-hangzhou.aliyuncs.com/cli/v3.4.12/argo-cli-aliyun-linux
  2. Make it executable.

    chmod +x argo-cli-aliyun-linux
  3. Move it to a directory on your PATH.

    mv argo-cli-aliyun-linux /usr/local/bin/argo
  4. Verify the installation.

    argo version

Workflow service accounts

Workflows can reference service accounts to access other Kubernetes resources. The workflow cluster automatically grants the required permissions to service accounts you create. If a service account has insufficient permissions, join DingTalk group 35688562 to request technical support.
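A workflow selects a service account with the spec.serviceAccountName field. The following is a minimal sketch; the account name my-workflow-sa is a placeholder for a service account you created.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  serviceAccountName: my-workflow-sa   # placeholder; use a service account you created
  templates:
    - name: whalesay
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]
```

If spec.serviceAccountName is omitted, the workflow runs with the default service account of its namespace, as shown in the argo get output later in this topic.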

Create a workflow

Use the Alibaba Cloud Argo CLI

Quick reference — common Argo CLI commands

Operation                       Command
Submit a workflow               argo submit <file>.yaml
List all workflows              argo list
Get the status of a workflow    argo get <workflow-name>
  1. Create helloworld-workflow.yaml with the following content.

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow                  # new type of k8s spec
    metadata:
      generateName: hello-world-    # name prefix for the workflow instance
    spec:
      entrypoint: whalesay          # first template to run
      templates:
        - name: whalesay            # name of the template
          container:
            image: docker/whalesay
            command: [cowsay]
            args: ["hello world"]

    Key fields:

    Field             Description
    kind: Workflow    Declares a new Kubernetes resource type managed by Argo.
    generateName      Kubernetes generates a unique name for each run by appending a random suffix to this prefix.
    spec.entrypoint   The template Argo runs first when the workflow starts.
    spec.templates    Defines the set of templates available to the workflow. Each template describes a unit of work.
  2. Submit the workflow.

    argo submit helloworld-workflow.yaml
  3. List all workflows to confirm the submission.

    argo list

    Expected output:

    NAME                STATUS      AGE   DURATION   PRIORITY
    hello-world-lgdpp   Succeeded   2m    37s        0
  4. Get detailed status for the workflow.

    argo get hello-world-lgdpp

    Expected output:

    Name:                hello-world-lgdpp
    Namespace:           default
    ServiceAccount:      unset (will run with the default ServiceAccount)
    Status:              Succeeded
    Conditions:
     PodRunning          False
     Completed           True
    ....
    Duration:            37 seconds
    Progress:            1/1
    ResourcesDuration:   17s*(1 cpu),17s*(100Mi memory)
    
    STEP                  TEMPLATE  PODNAME            DURATION  MESSAGE
     ✔ hello-world-lgdpp  whalesay  hello-world-lgdpp  27s

    The ResourcesDuration field shows the CPU and memory consumed by the workflow, which you can use to monitor costs.
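The hello-world example uses a single template, but spec.templates can hold several, with one template orchestrating the others. The following sketch runs the whalesay template twice in sequence by using the steps syntax; the template and step names (main, say-hello, say-again) are illustrative.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-steps-
spec:
  entrypoint: main              # the orchestrating template runs first
  templates:
    - name: main
      steps:                    # each "- -" group runs after the previous one
        - - name: say-hello
            template: whalesay
        - - name: say-again
            template: whalesay
    - name: whalesay
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]
```

Submit it the same way as the hello-world example, with argo submit.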

Use kubectl

After you configure the kubeconfig file, you can use kubectl to manage workflow clusters. Some operations are restricted compared to regular Kubernetes clusters. The following table describes the permissions available for each resource type.

Resource                                Permissions
priorityclasses                         Manage PriorityClasses and customize pod scheduling priority.
namespaces                              Create namespaces and have full permissions on all resources in self-managed namespaces. Access to system namespaces (names starting with kube-) is blocked.

Important

The namespace named after the cluster ID is the Argo system namespace, and you can manage it. For example, you can modify Argo workflow settings in workflow-controller-configmap.

persistentvolumes                       Full permissions.
persistentvolumeclaims                  Full permissions on resources in self-managed namespaces.
secrets, configmaps, serviceaccounts    Full permissions on resources in self-managed namespaces.
pods                                    Read permissions on resources in self-managed namespaces.
pods/log, events                        Read permissions on resources in self-managed namespaces.
pods/exec                               Create permissions on resources in self-managed namespaces.
workflows, workflowtasksets, workflowtemplates, cronworkflows (Argo)    Full permissions on resources in self-managed namespaces.
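A self-managed namespace is any namespace you create whose name does not start with kube-. A minimal manifest looks like the following; the name my-workflows is a placeholder.

```yaml
# A self-managed namespace; you have full permissions on most resources in it
apiVersion: v1
kind: Namespace
metadata:
  name: my-workflows   # placeholder; any name not starting with kube-
```

Apply it with kubectl apply -f, then submit workflows into it with the -n flag.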

Configure CPU and memory requests

Workflow clusters use preemptible elastic container instances by default to optimize costs; pay-as-you-go instances are also available for key tasks. The protection period for preemptible instances is 1 hour, so each workflow step must complete within that window.

Preemptible elastic container instances require at least 2 vCPUs:

  • If a container has no resource requests, or requests less than 2 vCPUs/4 GiB, the system defaults to 2 vCPUs/4 GiB.

  • If requests exceed 2 vCPUs/4 GiB, the system automatically selects an instance that meets those specs.

Supported CPU and memory combinations (we recommend keeping CPU requests at 8 vCPUs or below):

vCPU    Memory (GiB)
2       4, 8, and 16
4       4, 8, 16, and 32
8       4, 8, 16, 32, and 64
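To have the system select a larger preemptible instance, set requests and limits to one of the supported combinations in the container spec of a workflow template. The following fragment is a sketch that assumes 4 vCPUs and 8 GiB as the desired specification.

```yaml
# Fragment of spec.templates[].container in a Workflow
resources:
  requests:
    cpu: 4        # at least 2 vCPUs for a preemptible instance
    memory: 8Gi   # 4 vCPU / 8 GiB is a supported combination
  limits:
    cpu: 4
    memory: 8Gi
```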

Force a pay-as-you-go instance

If you do not want key tasks to run on a preemptible elastic container instance in cost-prioritized mode, you can force the workflow to run on a pay-as-you-go elastic container instance.

Configure the requests and limits parameters in the container spec, as shown in the following example:

apiVersion: argoproj.io/v1alpha1
kind: Workflow                  # new type of k8s spec
metadata:
  generateName: hello-world-    # name prefix for the workflow instance
spec:
  entrypoint: whalesay          # first template to run
  templates:
    - name: whalesay            # name of the template
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]
        resources:
          requests:
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 0.5
            memory: 1Gi