GPU-HPN reserved nodes in ACS are expensive resources. Without labels and taints, any workload can be scheduled onto them, wasting capacity reserved for GPU jobs. ACS lets you attach labels, annotations, and taints to GPU-HPN nodes to control workload placement precisely — including custom properties for affinity scheduling.
How labels and taints work
Labels and taints give you direct control over how the Kubernetes scheduler places workloads on GPU-HPN nodes:
- **Labels** mark nodes with key-value pairs. Pods use `nodeSelector` or node affinity rules to target nodes with matching labels.
- **Annotations** attach non-identifying metadata to nodes, often used by controllers and tooling.
- **Taints** repel pods that lack a matching toleration. This keeps general workloads off expensive GPU-HPN nodes, ensuring only compatible pods are scheduled there.
Taint effects determine how the scheduler and runtime respond when a pod lacks a matching toleration:
| Effect | Behavior |
|---|---|
| `NoSchedule` | New pods without a matching toleration are not scheduled on the node. Running pods are unaffected. |
| `PreferNoSchedule` | The scheduler avoids placing pods without a matching toleration, but may do so if no other nodes are available. |
| `NoExecute` | New pods without a matching toleration are not scheduled, and running pods without a matching toleration are evicted. |
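For example, a GPU pod that should be allowed onto a tainted GPU-HPN node needs a toleration matching the taint's key, value, and effect. A minimal pod-spec sketch, assuming a hypothetical taint `custom.k8s.io/dedicated=gpu:NoSchedule`:

```yaml
# Pod spec fragment: tolerates the assumed taint custom.k8s.io/dedicated=gpu:NoSchedule.
# The taint key and value here are illustrative, not predefined by ACS.
tolerations:
- key: "custom.k8s.io/dedicated"
  operator: "Equal"
  value: "gpu"
  effect: "NoSchedule"
```

Pods without this toleration are kept off the node by the `NoSchedule` effect, which is what reserves the node's capacity for GPU workloads.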
Supported node types
Node objects in an ACS cluster are either standard virtual nodes or GPU-HPN reserved nodes. Support for labels, annotations, and taints depends on the node type:
| Node type | Labels | Annotations | Taints |
|---|---|---|---|
| Standard virtual nodes | Not supported | Not supported | Not supported |
| GPU-HPN reserved nodes (GPU-HPN node custom labels and scheduler enabled) | Any key | Any key | Any key |
| GPU-HPN reserved nodes (GPU-HPN node custom labels and scheduler not enabled) | `custom.k8s.io/` prefix only | `custom.k8s.io/` prefix only | `custom.k8s.io/` prefix only |
The GPU-HPN node custom labels and scheduler configuration is enabled by default in newer ACS versions. For details, see kube-scheduler.
Add a label to a GPU-HPN node
The following example adds the label `custom.k8s.io/test-key=test-value` to a GPU-HPN reserved node.
Prerequisites
Before you begin, ensure that you have:
- An ACS cluster with at least one GPU-HPN reserved node
- `kubectl` configured to connect to the cluster
- (If GPU-HPN node custom labels and scheduler is not enabled) A label key prefixed with `custom.k8s.io/` (see GPU-HPN node custom labels and scheduler)
Steps
1. List the GPU-HPN reserved nodes in your cluster. GPU-HPN nodes carry the label `alibabacloud.com/node-type=reserved` by default, so you can filter by it:

   ```shell
   kubectl get node -l alibabacloud.com/node-type=reserved
   ```

   The output lists all GPU-HPN reserved nodes:

   ```
   NAME                                      STATUS   ROLES   AGE     VERSION
   cn-wulanchabu-c.cr-cye8scj0ukxxxxxxxgbg   Ready    agent   4d20h   v1.28.3-xx
   cn-wulanchabu-c.cr-u9mqtuqxxxxxxx9noka5   Ready    agent   2d22h   v1.28.3-xx
   ```

   Copy the name of the node you want to label.
2. Add the label to the node. Replace `<node-name>` with the node name from step 1:

   ```shell
   kubectl label node <node-name> custom.k8s.io/test-key=test-value
   ```

   Example:

   ```shell
   kubectl label node cn-wulanchabu-c.cr-cye8scj0ukxxxxxxxgbg custom.k8s.io/test-key=test-value
   ```

   The output confirms the operation:

   ```
   node/cn-wulanchabu-c.cr-cye8scj0ukxxxxxxxgbg not labeled
   ```

   The message `not labeled` is a kubectl quirk; it indicates that the label was applied successfully.
3. Verify that the label was applied:

   ```shell
   kubectl get node <node-name> -L custom.k8s.io/test-key
   ```

   Example:

   ```shell
   kubectl get node cn-wulanchabu-c.cr-cye8scj0ukxxxxxxxgbg -L custom.k8s.io/test-key
   ```

   The `TEST-KEY` column shows the label value:

   ```
   NAME                                      STATUS   ROLES   AGE     VERSION      TEST-KEY
   cn-wulanchabu-c.cr-cye8scj0ukxxxxxxxgbg   Ready    agent   4d21h   v1.28.3-xx   test-value
   ```
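With the label in place, a pod can target the labeled node through `nodeSelector`. A minimal sketch; the pod name and image are placeholders, not values from this guide:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job                 # placeholder name
spec:
  nodeSelector:
    custom.k8s.io/test-key: "test-value"   # the label added in step 2
  containers:
  - name: main
    image: registry.example.com/gpu-job:latest   # placeholder image
```

If the node also carries a taint, add a matching toleration to the same pod spec; `nodeSelector` alone does not satisfy taints.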
What's next
- To use node labels for workload scheduling, configure `nodeSelector` or node affinity rules in your pod spec. See the Kubernetes documentation on node affinity.
- To repel non-GPU workloads from GPU-HPN nodes using taints, add a taint with the `NoSchedule` effect and configure matching tolerations on your GPU pods.
- To manage scheduler configuration for GPU-HPN nodes, see kube-scheduler.
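Node affinity expresses the same targeting as `nodeSelector` but with richer operators such as `In` and `Exists`. A sketch of a pod-spec fragment using the label from this guide:

```yaml
# Pod spec fragment: required node affinity on the custom label.
# Equivalent in effect to nodeSelector for a single key, but extensible.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: custom.k8s.io/test-key
          operator: In
          values: ["test-value"]
```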