By default, ACK schedules all workloads to x86-based virtual nodes. If your cluster includes both ARM and x86 virtual nodes, configure node scheduling to control where your workloads run — enforce ARM-only placement for architecture-specific images, or set a preferred architecture for multi-arch images.
Prerequisites
Before you begin, make sure you have:
- An ACK cluster running version 1.20 or later. See Create an ACK managed cluster and Manually upgrade a cluster.
- The ack-virtual-node component installed at version 2.9.0 or later. See ACK Virtual Node.
ARM instances are available in limited regions and zones. Verify that your cluster is in a supported region before proceeding. See Overview of regions available for ECS instance types.
Usage notes
Taint handling and cluster version
All ARM-based virtual nodes carry the taint kubernetes.io/arch=arm64:NoSchedule. How you handle this taint depends on your cluster version:
- Cluster version < 1.24: Declare the toleration kubernetes.io/arch=arm64:NoSchedule explicitly in tolerations when using nodeSelector or node affinity.
- Cluster version >= 1.24: The scheduler detects the taint automatically; no manual toleration is required.
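For clusters older than 1.24, the explicit toleration in the pod spec looks like this (a minimal snippet, matching the toleration used in the full examples later in this topic):

```yaml
tolerations:
# Tolerate the taint carried by all ARM-based virtual nodes.
- key: kubernetes.io/arch
  operator: Equal
  value: arm64
  effect: NoSchedule
```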
Billing
Pods on ARM-based Elastic Container Instance (ECI) nodes are billed by the ECS instance type used to create the ECI — not by vCPU or memory usage.
To view the instance type for a running pod, run:
kubectl describe pod <pod-name>
Look for the k8s.aliyun.com/eci-instance-spec field in the output — it shows the ECS instance type the pod is billed against.
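If you only need the instance type, you can read the field directly with jsonpath instead of scanning the full describe output. This is a sketch that assumes the field is exposed as a pod annotation, as is typical for ECI pods; the pod name and namespace are placeholders:

```shell
# Print only the billed ECS instance type for a pod.
# Replace <pod-name> and <namespace> with your own values.
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{.metadata.annotations.k8s\.aliyun\.com/eci-instance-spec}'
```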
For pricing details, see:
Step 1: Add ARM-based virtual nodes
Enable ARM support by setting enableLinuxArm64Node: true in the eci-profile ConfigMap. Choose the Console or kubectl method.
At least one vSwitch in vSwitchIds must be in a zone that supports ARM instances. If all your vSwitches are in unsupported zones, create a new vSwitch in a supported zone and add its ID to vSwitchIds. See Create and manage a vSwitch.
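For reference, a minimal sketch of how the relevant keys in the eci-profile ConfigMap might look after this step. The vSwitch IDs below are placeholders, and any other keys already present in your ConfigMap should be left untouched:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: eci-profile
  namespace: kube-system
data:
  # Enables creation of ARM-based virtual nodes.
  enableLinuxArm64Node: "true"
  # Comma-separated vSwitch IDs; at least one must be in a zone
  # that supports ARM instances. The IDs here are placeholders.
  vSwitchIds: "vsw-xxxxxxxxxxxxxxxxx,vsw-yyyyyyyyyyyyyyyyy"
```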
Console
- Log in to the ACK console. In the left-side navigation pane, click Clusters.
- Click the name of the cluster you want to manage. In the left-side navigation pane, choose Configurations > ConfigMaps.
- Set Namespace to kube-system. Locate eci-profile and click Edit.
- Set enableLinuxArm64Node to true, then click Confirm.
After saving, wait approximately 30 seconds. The new ARM virtual node appears on the Node page with the name virtual-kubelet-<zoneId>-linux-arm64.
kubectl
Prerequisites: Connect to your cluster using kubectl. See Obtain the cluster kubeconfig and connect to the cluster using kubectl.
Run the following command to open the ConfigMap for editing:
kubectl edit configmap eci-profile -n kube-system
Make the following changes:
- Set enableLinuxArm64Node to true.
- In vSwitchIds, include at least one vSwitch in a zone that supports ARM instances.
After saving, wait approximately 30 seconds. The new ARM virtual node, named virtual-kubelet-<zoneId>-linux-arm64, appears in the output of kubectl get nodes.
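To check from the command line, list nodes and filter on the expected name suffix (this assumes your kubeconfig points at the cluster):

```shell
# Confirm the ARM virtual node was created (may take ~30 seconds to appear).
kubectl get nodes | grep linux-arm64
```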
Step 2: Schedule workloads to ARM virtual nodes
All ARM virtual nodes have the label kubernetes.io/arch=arm64 by default. You do not need to apply this label manually.
Use this label to control scheduling. Choose your approach based on what you need:
| Approach | Use when |
|---|---|
| nodeSelector | Your image supports only ARM; you need hard placement enforcement |
| Node affinity (required) | Same as nodeSelector, but with more expressive matching rules; enables automatic taint toleration on ACK Pro clusters >= 1.24 |
| Node affinity (preferred) | Your image is multi-arch; you want ARM preferred but allow fallback to x86 |
Schedule ARM-only workloads
Use this approach when your container image supports only the ARM architecture. The pod fails to start if scheduled on an x86 node, so hard placement enforcement is required.
nodeSelector
nodeSelector restricts the pod to nodes with the kubernetes.io/arch=arm64 label — all ARM virtual nodes in ACK have this label.
Add the following to your pod spec:
nodeSelector:
kubernetes.io/arch: arm64 # Schedule to ARM nodes only.
Full Deployment example:
The following YAML includes an explicit toleration for kubernetes.io/arch=arm64:NoSchedule. On ACK Pro clusters running version 1.24 or later, the scheduler adds this toleration automatically and you can omit it.

apiVersion: apps/v1
kind: Deployment
metadata:
name: only-arm
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
nodeSelector:
kubernetes.io/arch: arm64 # Schedule to ARM nodes only.
tolerations:
# Tolerate the taint of the virtual node.
- key: virtual-kubelet.io/provider
operator: Exists
effect: NoSchedule
# Tolerate the taint of the ARM-based virtual node.
- key: kubernetes.io/arch
operator: Equal
value: arm64
effect: NoSchedule
containers:
- name: nginx
image: alibaba-cloud-linux-3-registry.cn-hangzhou.cr.aliyuncs.com/alinux3/nginx_optimized:20240221-1.20.1-2.3.0
Node affinity (required)
Prerequisites: Enable the cluster virtual node scheduling policy and confirm your cluster version and component version meet the requirements.
requiredDuringSchedulingIgnoredDuringExecution enforces the same hard placement as nodeSelector, but with richer matching expressions. On ACK Pro clusters >= 1.24, adding this constraint causes the scheduler to automatically tolerate the kubernetes.io/arch=arm64:NoSchedule taint.
Add the following to your pod spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
Full Deployment example:
The following YAML includes an explicit toleration for kubernetes.io/arch=arm64:NoSchedule. On ACK Pro clusters running version 1.24 or later, the scheduler adds this toleration automatically and you can omit it.

apiVersion: apps/v1
kind: Deployment
metadata:
name: only-arm
spec:
selector:
matchLabels:
app: nginx
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
tolerations:
# Tolerate the taint of the virtual node.
- key: virtual-kubelet.io/provider
operator: Exists
effect: NoSchedule
# Tolerate the taint of the ARM-based virtual node.
- key: kubernetes.io/arch
operator: Equal
value: arm64
effect: NoSchedule
containers:
- name: nginx
image: nginx
Schedule multi-arch images with architecture preference
Prerequisites: Enable the cluster virtual node scheduling policy and confirm your cluster version and component version meet the requirements.
If your container image supports both ARM and x86, use preferredDuringSchedulingIgnoredDuringExecution to express a preference without hard enforcement. The scheduler assigns each node a score based on the weight value (range: 1–100) — nodes that satisfy the preferred rule receive a higher score, and the pod lands on the highest-scoring node. When no node matches the preferred architecture, the pod falls back to another available architecture.
The following snippet sets ARM as the preferred architecture with weight: 1. Increase the weight (up to 100) to strengthen the preference when multiple preferred rules compete.
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- arm64
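For completeness, here is a full Deployment sketch built around the snippet above, following the same pattern as the earlier examples. The name prefer-arm and the nginx image are illustrative, and the explicit arm64 toleration is kept so the pod can still land on an ARM virtual node on clusters older than 1.24:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prefer-arm
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          # Prefer ARM nodes, but allow fallback to other architectures.
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            preference:
              matchExpressions:
              - key: kubernetes.io/arch
                operator: In
                values:
                - arm64
      tolerations:
      # Tolerate the taint of the virtual node.
      - key: virtual-kubelet.io/provider
        operator: Exists
        effect: NoSchedule
      # Tolerate the taint of the ARM-based virtual node.
      - key: kubernetes.io/arch
        operator: Equal
        value: arm64
        effect: NoSchedule
      containers:
      - name: nginx
        image: nginx
```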
Limitations
The ARM architecture does not support components from the application marketplace. The component center supports only these module categories:
- Core components
- Logging and monitoring
- Storage
- Network
FAQ
Why does the pod go to an x86 ECS node even with ARM node affinity configured?
The ACK scheduler prioritizes ECS nodes over virtual nodes by default. Node affinity controls priority *between* virtual node architectures (ARM vs. x86), not *between* virtual nodes and ECS nodes. If your cluster has sufficient x86 ECS capacity, pods may land there regardless of the preferred ARM node affinity setting. To change this behavior, adjust the scoring weights of the scheduler plug-in.
Can I use preemptible (Spot) instances on ARM virtual nodes?
Yes. See Use preemptible instances.
How do I set up networking for ARM virtual nodes after creating a cluster?
In the eci-profile ConfigMap, set vSwitchIds to include a vSwitch in a zone that supports ARM instances. This ensures ARM virtual nodes are created in a supported zone.
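One non-interactive way to make that change is a merge patch on the ConfigMap. The vSwitch IDs below are placeholders, and note that the patch replaces the entire vSwitchIds value, so include every existing ID you want to keep:

```shell
# Set vSwitchIds in eci-profile; IDs shown are placeholders.
kubectl -n kube-system patch configmap eci-profile \
  --type merge -p '{"data":{"vSwitchIds":"vsw-existingzoneid,vsw-armzoneid"}}'
```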
What's next
- Build multi-arch container images using Container Registry Enterprise Edition. See Build multi-arch container images.
- Manage regular ARM ECS nodes. See Schedule to ARM nodes.
- Run Apache Spark jobs on ARM virtual nodes for big data workloads without managing cluster resources. See Run Spark jobs on ARM virtual nodes.