Workflow clusters let you pin each workflow step to a specific Elastic Compute Service (ECS) instance type — GPU-accelerated or AMD-based — by adding a single annotation to the pod template. Use this when your workloads require specific hardware, such as GPU computation, video encoding, or high-memory processing.
Prerequisites
Before you begin, ensure that you have:
- A workflow cluster in Container Service for Kubernetes (ACK)
- Permissions to create and submit Argo Workflows
How it works
Add the `k8s.aliyun.com/eci-use-specs` annotation under `metadata.annotations` in the template that requires custom hardware. The annotation value is the ECS instance type, such as `ecs.gn5i-c4g1.xlarge`.
All elastic container instances created for that template's step are scheduled onto the specified ECS instance type.
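For example, a template that pins its step to a GPU-accelerated instance type needs only this extra metadata (a minimal fragment; the template name is illustrative):

```yaml
templates:
- name: encode                 # illustrative template name
  metadata:
    annotations:
      k8s.aliyun.com/eci-use-specs: ecs.gn5i-c4g1.xlarge  # target ECS instance type
  container:
    image: docker/whalesay
    command: [cowsay]
    args: ["hello world"]
```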
Supported instance types
GPU-accelerated instance types
GPU-accelerated elastic container instances support NVIDIA driver version 460.73.01 and CUDA Toolkit 11.2. For available CUDA images, see NVIDIA CUDA on Docker Hub.
| Instance family | GPU | Example instance type |
|---|---|---|
| gn6v | NVIDIA V100 | ecs.gn6v-c8g1.2xlarge |
| gn6i | NVIDIA T4 | ecs.gn6i-c4g1.xlarge |
| gn5 | NVIDIA P100 | ecs.gn5-c4g1.xlarge |
| gn5i | NVIDIA P4 | ecs.gn5i-c2g1.large |
For the full list, see Instance families.
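When a step actually consumes the GPU, the container typically also declares a GPU resource limit. The following is a sketch using the standard Kubernetes `nvidia.com/gpu` resource name, which is an assumption not stated in this document; the image and command are illustrative:

```yaml
container:
  image: nvidia/cuda:11.2.2-base-ubuntu20.04  # any CUDA 11.2-compatible image
  command: [nvidia-smi]                       # prints the GPU visible to the container
  resources:
    limits:
      nvidia.com/gpu: 1                       # request one GPU on the selected instance
```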
AMD-based instance types
AMD-based elastic container instances run on AMD EPYC™ Rome processors and use the SHENLONG architecture to minimize virtualization overhead. They are well suited to video encoding and decoding, workloads with high packet throughput, web frontend servers, massively multiplayer online (MMO) game frontends, and DevOps application development and testing.
| Instance family | Type | Example instance type |
|---|---|---|
| g7a, g6a | General-purpose | ecs.g7a.large, ecs.g6a.large |
| c7a, c6a | Compute-optimized | ecs.c7a.large, ecs.c6a.large |
| r7a, r6a | Memory-optimized | ecs.r7a.large, ecs.r6a.large |
For the full list, see Instance families.
Run a workflow on a GPU-accelerated instance
The following example schedules the `whalesay` template on a `gn5i` GPU-accelerated instance by setting `k8s.aliyun.com/eci-use-specs` in the template's annotations.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    metadata:
      annotations:
        k8s.aliyun.com/eci-use-specs: ecs.gn5i-c4g1.xlarge  # GPU-accelerated instance type
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
```
Run a workflow on an AMD-based instance
The following example schedules the `whalesay` template on a `c6a` AMD-based instance.
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    metadata:
      annotations:
        k8s.aliyun.com/eci-use-specs: "ecs.c6a.xlarge"  # AMD-based instance type
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
```
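Because the annotation is set per template, different steps in the same workflow can target different instance types. The following sketch combines the two examples above into one workflow; the template and step names are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: mixed-hardware-
spec:
  entrypoint: main
  templates:
  - name: main
    steps:
    - - name: encode          # runs on an AMD-based instance
        template: amd-step
    - - name: infer           # runs on a GPU-accelerated instance
        template: gpu-step
  - name: gpu-step
    metadata:
      annotations:
        k8s.aliyun.com/eci-use-specs: ecs.gn5i-c4g1.xlarge
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["gpu step"]
  - name: amd-step
    metadata:
      annotations:
        k8s.aliyun.com/eci-use-specs: ecs.c6a.xlarge
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["amd step"]
```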
What's next
- For the full list of ECS instance families and their specifications, see Instance families.