Labels are an important concept in Kubernetes. Services, Deployments, and pods are associated with each other by labels. You can also configure pod scheduling policies based on node labels, which allows you to schedule pods only to nodes that have specific labels. This topic describes how to schedule an application pod to a specific node pool.
Procedure
- Add a label to the nodes in a node pool. Container Service for Kubernetes (ACK) allows you to manage a group of cluster nodes by using a node pool. For example, you can centrally manage the labels and taints of the nodes in a node pool. For more information about how to create a node pool, see Manage node pools. You can also click Scale Out in the Actions column of a node pool to update or add labels. If auto scaling is enabled for the node pool, click Modify in the Actions column to update or add labels. In this example, the pod: nginx label is added to the nodes in the node pool.
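If you want to confirm from the command line that the label is in place before you deploy workloads, you can use kubectl. The following commands are a minimal sketch; `<node-name>` is a placeholder for a real node name, and the pod=nginx label matches the example used throughout this topic.

```shell
# List the nodes and the labels that are attached to them.
kubectl get nodes --show-labels

# Optional: add the pod=nginx label to a node manually if the node pool
# configuration has not applied it yet. Replace <node-name> with the node name.
kubectl label nodes <node-name> pod=nginx

# List only the nodes that carry the label used in this example.
kubectl get nodes -l pod=nginx
```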
- Configure a scheduling policy for an application pod. After the preceding step is completed, the pod: nginx label is added to the nodes in the node pool. You can set the nodeSelector or nodeAffinity field in pod configurations to ensure that an application pod is scheduled to nodes with matching labels in a node pool. Perform the following steps:
- Set nodeSelector. nodeSelector is a field in the spec section of pod configurations. Add the pod: nginx label to nodeSelector. Example:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-basic
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        pod: nginx   # After you add the label in this field, the application pod can run only on nodes with this label in the node pool.
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
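To check that the nodeSelector takes effect, you can apply the manifest and look at which nodes the pods land on. The following sketch assumes that the preceding manifest is saved as nginx-deployment-basic.yaml (a hypothetical filename).

```shell
# Deploy the example Deployment (the filename is an assumption).
kubectl apply -f nginx-deployment-basic.yaml

# Show each pod together with the node it was scheduled to.
# Every pod should be placed on a node that carries the pod=nginx label.
kubectl get pods -l app=nginx -o wide
```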
- Set nodeAffinity. You can also use nodeAffinity to schedule an application pod based on your requirements. nodeAffinity supports the following scheduling policies:
- requiredDuringSchedulingIgnoredDuringExecution
If this policy is used, a pod can be scheduled only to a node that meets the match rules. If no node meets the match rules, the pod remains pending and the scheduler keeps retrying until a node that meets the rules becomes available. IgnoredDuringExecution indicates that if the labels of the node where the pod runs change and no longer meet the match rules, the pod continues to run on that node.
- preferredDuringSchedulingIgnoredDuringExecution
If this policy is used, the scheduler preferentially places the pod on a node that meets the match rules. If no node meets the rules, the rules are ignored and the pod is scheduled to another available node.
In the following example, the requiredDuringSchedulingIgnoredDuringExecution policy is used to ensure that the application pod always runs on a node in a specific node pool.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-with-affinity
  labels:
    app: nginx-with-affinity
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-with-affinity
  template:
    metadata:
      labels:
        app: nginx-with-affinity
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: pod
                operator: In   # The application runs on a node that has the pod: nginx label.
                values:
                - nginx
      containers:
      - name: nginx-with-affinity
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
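As with the nodeSelector example, you can verify where the scheduler places the pods. The following sketch assumes that the preceding manifest is saved as nginx-with-affinity.yaml (a hypothetical filename).

```shell
# Deploy the Deployment that uses nodeAffinity (the filename is an assumption).
kubectl apply -f nginx-with-affinity.yaml

# Check pod placement. The pods should run only on nodes that match the
# requiredDuringSchedulingIgnoredDuringExecution rule.
kubectl get pods -l app=nginx-with-affinity -o wide

# If a pod stays in the Pending state, inspect its events to see which
# affinity rule could not be satisfied.
kubectl describe pods -l app=nginx-with-affinity
```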
Result
In the preceding examples, all application pods are scheduled to the xxx.xxx.0.74 node, which has the pod: nginx label.