Hybrid cloud node pools allow you to register nodes from your data center (IDC) to an ACK cluster, enabling unified management and coordinated scheduling across on-premises and cloud resources. This topic describes how to schedule applications to a hybrid cloud node pool to meet requirements for resource location, data localization compliance, or specific hardware.
## How it works
When you create a hybrid cloud node pool, ACK automatically adds a taint and a label to every node in the pool:

- **Taint**: `nodepool-type=hybridcloud:NoSchedule` prevents system components and general workloads from landing on hybrid cloud nodes by default.
- **Label**: `alibabacloud.com/nodepool-type: hybridcloud` marks the node type so scheduling policies can target it.
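On a registered node, the default taint and label appear roughly as in the following illustrative excerpt of a Node object. The node name is hypothetical; ACK manages these fields for you:

```yaml
# Illustrative excerpt of a hybrid cloud Node object.
apiVersion: v1
kind: Node
metadata:
  name: idc-node-01                               # hypothetical node name
  labels:
    alibabacloud.com/nodepool-type: hybridcloud   # default label
spec:
  taints:
  - key: nodepool-type                            # default taint
    value: hybridcloud
    effect: NoSchedule
```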
A toleration removes the scheduling restriction, making a pod eligible for hybrid cloud nodes. It does not guarantee placement there. To guarantee placement, pair the toleration with a nodeSelector or nodeAffinity rule.
## Choose an approach
| Approach | Use when | Key configuration |
|---|---|---|
| Allow (Solution 1) | Expand total capacity; no strict location constraints. Pods may still land on other node pools. | tolerations only |
| Target a specific node pool (Solution 2a) | Specific hardware, network topology, or data residency for one pool. | tolerations + nodeSelector (node pool ID) |
| Target any hybrid cloud node pool (Solution 2b) | Workloads that must run on-premises but across any IDC node pool. | tolerations + nodeAffinity |
| Priority-based across pools (Solution 3) | Prefer a lower-cost hybrid cloud pool for scale-out; release cloud ECS nodes first on scale-in. | ResourcePolicy + tolerations |
## Solution 1: Allow scheduling to hybrid cloud nodes
When to use: Your cluster needs more total capacity and your workload has no strict placement constraints. With this configuration, the scheduler treats the hybrid cloud node pool as a valid candidate — but pods may still run on other active node pools.
Core configuration: Add a `tolerations` entry in `spec.template.spec`:

```yaml
spec:
  template:
    spec:
      # Allow this pod to run on hybrid cloud nodes.
      tolerations:
      - key: "nodepool-type"
        operator: "Equal"
        value: "hybridcloud"
        effect: "NoSchedule"
```

## Solution 2: Force scheduling to hybrid cloud nodes
These two sub-solutions both guarantee that pods land on hybrid cloud nodes. Choose based on how precisely you need to target a node pool.
### Target a specific node pool
When to use: Your workload requires a particular hybrid cloud node pool — for example, one with specific hardware, a dedicated network topology, or strict data residency requirements. Use nodeSelector to pin pods to a single node pool by ID.
Core configuration: Add both `tolerations` and `nodeSelector` in `spec.template.spec`:

```yaml
spec:
  template:
    spec:
      # Core configuration 1: Lift the hybrid cloud scheduling restriction.
      tolerations:
      - key: "nodepool-type"
        operator: "Equal"
        value: "hybridcloud"
        effect: "NoSchedule"
      # Core configuration 2: Pin to a specific node pool by ID.
      nodeSelector:
        alibabacloud.com/nodepool-id: npxxxxxxxxxxxx
```

Get the node pool ID from the Node Management > Node Pools page of your cluster.
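Putting the two settings together, a complete Deployment for this solution might look like the following sketch. The workload name, image, and replica count are illustrative; the node pool ID placeholder comes from the snippet above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: idc-app            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: idc-app
  template:
    metadata:
      labels:
        app: idc-app
    spec:
      containers:
      - name: app
        image: nginx:1.25  # illustrative image
      # Lift the hybrid cloud scheduling restriction.
      tolerations:
      - key: "nodepool-type"
        operator: "Equal"
        value: "hybridcloud"
        effect: "NoSchedule"
      # Pin to a specific node pool by ID.
      nodeSelector:
        alibabacloud.com/nodepool-id: npxxxxxxxxxxxx
```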
### Target any hybrid cloud node pool
When to use: Your workload must run on-premises but can run on any hybrid cloud node pool. `nodeAffinity` matches all nodes with the `hybridcloud` label, so pods spread across all IDC node pools rather than being pinned to one.
Core configuration: Add both `tolerations` and `affinity` in `spec.template.spec`:

```yaml
spec:
  template:
    spec:
      # Core configuration 1: Lift the hybrid cloud scheduling restriction.
      tolerations:
      - key: "nodepool-type"
        operator: "Equal"
        value: "hybridcloud"
        effect: "NoSchedule"
      # Core configuration 2: Require a hybrid cloud node by label.
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: alibabacloud.com/nodepool-type
                operator: In
                values:
                - hybridcloud
```

## Solution 3: Priority-based scheduling with ResourcePolicy
When to use: You have multiple node pools and want ordered scale-out and reverse-ordered scale-in. For example, use a lower-cost hybrid cloud node pool as the primary pool and a cloud-based ECS node pool as overflow:
- Scale-out: Pods are scheduled to the hybrid cloud pool first. If resources are insufficient, they overflow to the ECS pool.
- Scale-in: Pods in the ECS pool are stopped first, preserving resource stability in the hybrid cloud pool.
This solution requires two resources: a ResourcePolicy and a workload that references it.
`ResourcePolicy` configuration: Define the scheduling priority order in the `units` list, then associate the policy with target pods using a `selector`.
```yaml
apiVersion: scheduling.alibabacloud.com/v1alpha1
kind: ResourcePolicy
metadata:
  name: nginx-priority-policy
spec:
  # Match pods with this label.
  selector:
    app: nginx-priority
  # The scheduler tries pools in order. First match wins.
  units:
  # Priority 1: hybrid cloud node pool (lower cost, preferred)
  - resource: ecs
    nodeSelector:
      alibabacloud.com/nodepool-id: np-pool-a-xxxxxxxxxx
  # Priority 2: ECS node pool (overflow)
  - resource: ecs
    nodeSelector:
      alibabacloud.com/nodepool-id: np-pool-b-xxxxxxxxxx
```

Get the node pool IDs from the Node Management > Node Pools page of your cluster.
Workload configuration: Add the matching label and the hybrid cloud toleration to the pod template.
```yaml
spec:
  template:
    metadata:
      # This label connects the pod to the ResourcePolicy above.
      labels:
        app: nginx-priority
    spec:
      # Allow scheduling to the hybrid cloud node pool.
      tolerations:
      - key: "nodepool-type"
        operator: "Equal"
        value: "hybridcloud"
        effect: "NoSchedule"
```
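For reference, the pod template above might sit in a complete Deployment like the following sketch. The Deployment name, image, and replica count are illustrative, while the label and toleration match the snippets above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-priority     # illustrative name
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-priority
  template:
    metadata:
      # This label connects the pod to the ResourcePolicy.
      labels:
        app: nginx-priority
    spec:
      containers:
      - name: nginx
        image: nginx:1.25  # illustrative image
      # Allow scheduling to the hybrid cloud node pool.
      tolerations:
      - key: "nodepool-type"
        operator: "Equal"
        value: "hybridcloud"
        effect: "NoSchedule"
```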
## Apply in production
- Do not delete the default taint `nodepool-type=hybridcloud:NoSchedule` from the hybrid cloud node pool. This taint prevents system components from being accidentally scheduled to hybrid cloud nodes.
- Do not delete or modify the default label `alibabacloud.com/nodepool-type: hybridcloud` on hybrid cloud nodes. Modifying or deleting this label can affect the normal operation of the node pool.