Workflow clusters are built on open-source Argo Workflows and support CI/CD pipelines, data processing, machine learning, and simulation calculation. This topic shows you how to install the Alibaba Cloud Argo CLI, create and submit a hello-world workflow, check its status, and configure CPU and memory resources.
Prerequisites
Before you begin, ensure that you have:
Install the Alibaba Cloud Argo CLI
The Alibaba Cloud Argo CLI is fully compatible with the open-source Argo CLI and adds enhanced metrics and logging capabilities. Use it to:
Query the CPU usage, memory usage, and operating cost of a workflow.
Retrieve logs from pods that have already been deleted.
Download the binary for your operating system:
Darwin: argo-cli-aliyun-darwin
Linux: argo-cli-aliyun-linux
The following steps use Linux as an example.
Download the binary.

```shell
wget https://ack-one.oss-cn-hangzhou.aliyuncs.com/cli/v3.4.12/argo-cli-aliyun-linux
```

Make it executable.

```shell
chmod +x argo-cli-aliyun-linux
```

Move it to a directory on your PATH.

```shell
mv argo-cli-aliyun-linux /usr/local/bin/argo
```

Verify the installation.

```shell
argo version
```
Workflow service accounts
Workflows can reference service accounts to access other Kubernetes resources. The workflow cluster automatically grants the required permissions to service accounts you create. If a service account has insufficient permissions, join DingTalk group 35688562 to request technical support.
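As a sketch, a workflow selects a service account through the standard Argo `spec.serviceAccountName` field. The account name `workflow-sa` below is a placeholder, not a name the cluster creates for you:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: sa-demo-
spec:
  entrypoint: main
  serviceAccountName: workflow-sa   # placeholder; pods in this workflow run as this service account
  templates:
    - name: main
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello"]
```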
Create a workflow
Use the Alibaba Cloud Argo CLI
Quick reference — common Argo CLI commands
| Operation | Command |
|---|---|
| Submit a workflow | argo submit <file>.yaml |
| List all workflows | argo list |
| Get status of a workflow | argo get <workflow-name> |
Create
helloworld-workflow.yamlwith the following content.apiVersion: argoproj.io/v1alpha1 kind: Workflow # new type of k8s spec metadata: generateName: hello-world- # name prefix for the workflow instance spec: entrypoint: whalesay # first template to run templates: - name: whalesay # name of the template container: image: docker/whalesay command: [cowsay] args: ["hello world"]Key fields:
Field Description kind: WorkflowDeclares a new Kubernetes resource type managed by Argo. generateNameKubernetes generates a unique name for each run by appending a random suffix to this prefix. spec.entrypointThe template Argo runs first when the workflow starts. spec.templatesDefines the set of templates available to the workflow. Each template describes a unit of work. Submit the workflow.
```shell
argo submit helloworld-workflow.yaml
```

List all workflows to confirm the submission.

```shell
argo list
```

Expected output:

```
NAME                STATUS      AGE   DURATION   PRIORITY
hello-world-lgdpp   Succeeded   2m    37s        0
```

Get detailed status for the workflow.

```shell
argo get hello-world-lgdpp
```

Expected output:

```
Name:                hello-world-lgdpp
Namespace:           default
ServiceAccount:      unset (will run with the default ServiceAccount)
Status:              Succeeded
Conditions:
 PodRunning          False
 Completed           True
....
Duration:            37 seconds
Progress:            1/1
ResourcesDuration:   17s*(1 cpu),17s*(100Mi memory)

STEP                  TEMPLATE  PODNAME            DURATION  MESSAGE
 ✔ hello-world-lgdpp  whalesay  hello-world-lgdpp  27s
```

The `ResourcesDuration` field shows the CPU and memory consumed by the workflow, which you can use to monitor costs.
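If you want to read these figures programmatically rather than from `argo get`, the open-source Argo Workflow CRD exposes them under `status.resourcesDuration`, where values are resource-seconds (memory is normalized to 100Mi units). An illustrative excerpt of a Workflow object's status:

```yaml
# Illustrative excerpt of `kubectl get workflow <name> -o yaml`
status:
  phase: Succeeded
  progress: 1/1
  resourcesDuration:
    cpu: 17      # CPU-seconds consumed
    memory: 17   # memory-seconds, in 100Mi units
```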
Use kubectl
After you configure the kubeconfig file, you can use kubectl to manage workflow clusters. Some operations are restricted compared to regular Kubernetes clusters. The following table describes the permissions available for each resource type.
| Resource | Permissions |
|---|---|
| priorityclasses | Manage PriorityClasses and customize pod scheduling priority. |
| namespaces | Create namespaces and have full permissions on all resources in self-managed namespaces. Access to system namespaces (names starting with `kube-`) is blocked. **Important:** The namespace named after the cluster ID is the Argo system namespace, which you can manage. For example, you can modify Argo workflow settings in this namespace. |
| persistentvolumes | Full permissions. |
| persistentvolumeclaims | Full permissions on resources in self-managed namespaces. |
| secrets, configmaps, serviceaccounts | Full permissions on resources in self-managed namespaces. |
| pods | Read permissions on resources in self-managed namespaces. |
| pods/log, events | Read permissions on resources in self-managed namespaces. |
| pods/exec | Create permissions on resources in self-managed namespaces. |
| Argo resources (workflows, workflowtasksets, workflowtemplates, cronworkflows) | Full permissions on resources in self-managed namespaces. |
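Per the table above, a common pattern is to create a self-managed namespace and then work with Argo resources inside it. A minimal namespace manifest (the name `my-workflows` is a placeholder; avoid names starting with `kube-`):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-workflows   # placeholder; you get full permissions on resources inside it
```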
Configure CPU and memory requests
By default, workflow clusters run in cost-prioritized mode and schedule steps on preemptible elastic container instances; pay-as-you-go instances are also available for tasks that must not be interrupted. A preemptible instance has a protection period of 1 hour, so each workflow step must complete within that window.
Preemptible elastic container instances require at least 2 vCPUs:
If a container has no resource requests, or requests less than 2 vCPUs/4 GiB, the system defaults to 2 vCPUs/4 GiB.
If requests exceed 2 vCPUs/4 GiB, the system automatically selects an instance that meets those specs.
Supported CPU and memory combinations (we recommend keeping CPU requests at 8 vCPUs or below):
| vCPU | Memory (GiB) |
|---|---|
| 2 | 4, 8, and 16 |
| 4 | 4, 8, 16, and 32 |
| 8 | 4, 8, 16, 32, and 64 |
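As an illustration of the rules above (assumed behavior based on the description, not an exhaustive specification): a template requesting 3 vCPUs and 8 GiB exceeds the 2 vCPU/4 GiB default, so the system selects a supported combination that covers it, such as 4 vCPUs/8 GiB.

```yaml
# Template excerpt: requests above 2 vCPU / 4 GiB are served by the
# next supported combination that meets them (here, 4 vCPU / 8 GiB).
- name: train
  container:
    image: docker/whalesay
    command: [cowsay]
    args: ["hello"]
    resources:
      requests:
        cpu: 3
        memory: 8Gi
```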
Force a pay-as-you-go instance
In cost-prioritized mode, key tasks that must not be interrupted should not run on preemptible instances. To force a workflow onto a pay-as-you-go elastic container instance, set both the requests and limits parameters in the container spec, as shown in the following example:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow                  # new type of k8s spec
metadata:
  generateName: hello-world-    # name prefix for the workflow instance
spec:
  entrypoint: whalesay          # first template to run
  templates:
    - name: whalesay            # name of the template
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["hello world"]
        resources:
          requests:             # setting both requests and limits forces a pay-as-you-go instance
            cpu: 0.5
            memory: 1Gi
          limits:
            cpu: 0.5
            memory: 1Gi
```