Node-level services such as log collectors and monitoring agents must run on every node in your cluster. A DaemonSet ensures exactly one pod runs on each node: when a node joins the cluster, the DaemonSet creates a pod on it automatically; when a node is removed, the pod is cleaned up. This topic explains how DaemonSet scheduling works and how to create one using the console or kubectl.
Common use cases:
Log collection daemons (for example, Fluentd)
Node monitoring agents (for example, Prometheus Node Exporter)
For workloads that require replica counts or advanced scheduling beyond one-pod-per-node placement, use a Deployment instead. For the full DaemonSet specification, see the Kubernetes documentation.
How scheduling works
By default, a DaemonSet places one pod on every node. Three mechanisms can restrict which nodes receive pods.
Taints and tolerations
DaemonSet pods respect node taints. A pod does not run on a node if it cannot tolerate the node's taints. The following tolerations are added to DaemonSet pods automatically:
| Toleration key | Effect | Behavior |
|---|---|---|
| node.kubernetes.io/unschedulable | NoSchedule | Pods are scheduled on unschedulable nodes. |
| node.kubernetes.io/not-ready | NoExecute | Pods run on nodes that are not ready. Running pods are not evicted. |
| node.kubernetes.io/unreachable | NoExecute | Pods run on nodes that are unreachable. Running pods are not evicted. |
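You can also add your own tolerations to the pod template so that DaemonSet pods run on nodes with custom taints. The following is a minimal sketch; it assumes the nodes carry the standard node-role.kubernetes.io/control-plane:NoSchedule taint.

```yaml
# Fragment of a DaemonSet pod template (spec.template.spec).
# Assumes the target nodes are tainted with
# node-role.kubernetes.io/control-plane:NoSchedule.
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```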
nodeSelector
If a DaemonSet includes a nodeSelector, pods run only on nodes that match the label. For example, nodeSelector: { disktype: ssd } limits the DaemonSet to nodes labeled disktype=ssd.
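In a manifest, the selector goes in the pod template. A sketch of the disktype=ssd example above, assuming the target nodes have already been labeled (for example, with kubectl label nodes <node-name> disktype=ssd):

```yaml
# Fragment of a DaemonSet spec: pods are created only on nodes
# whose labels include disktype=ssd.
spec:
  template:
    spec:
      nodeSelector:
        disktype: ssd
```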
Affinity and anti-affinity
Node affinity, pod affinity, and pod anti-affinity rules also apply to DaemonSet pods.
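For example, the nodeSelector shown above can be expressed as a node affinity rule instead, which also supports operators such as In, NotIn, and Exists. This is a sketch that assumes the same hypothetical disktype=ssd node label:

```yaml
# Fragment of a DaemonSet pod template (spec.template.spec):
# a required node affinity rule equivalent to nodeSelector disktype=ssd.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype
          operator: In
          values:
          - ssd
```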
Prerequisites
Before you begin, ensure that you have:
An ACK cluster with at least one node
(For kubectl) A kubectl connection to the cluster. See Connect to a cluster using kubectl.
Public network access for your cluster or nodes, because the example image is a public image. Configure one of the following:
(Recommended) Enable public network access for the cluster by creating an Internet NAT gateway for the cluster's VPC
Assign a static public IP address to each node where the workload runs
Create a DaemonSet using the console
Log on to the Container Service Management Console. In the left navigation pane, click Clusters.
On the Clusters page, click the name of your cluster. In the left navigation pane, choose Workloads > DaemonSets.
On the DaemonSets page, click Create from Image.
Configure the DaemonSet. The form is identical to the Deployment form, with two differences:
Basic Information: There is no Replicas setting. The number of pods is determined by the number of nodes.
Advanced: There is no Scaling setting.
For all other configuration options, see Create a Deployment.
Create a DaemonSet using kubectl
Save the following YAML to a file named daemonset.yaml.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-test
  namespace: default # Change the namespace as needed.
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: anolis-registry.cn-zhangjiakou.cr.aliyuncs.com/openanolis/nginx:1.14.1-8.6
        resources:
          limits:
            cpu: '1'
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 512Mi
```

Apply the manifest.

```shell
kubectl apply -f daemonset.yaml
```

Expected output:

```
daemonset.apps/nginx-test created
```

Verify that a pod is running on each node.

```shell
kubectl get pods --all-namespaces -o wide | grep nginx-test
```

The output lists one pod per node, each scheduled to a different node.

```
default   nginx-test-8mqvh   1/1   Running   0   3m38s   192.168.*.**   cn-shanghai.192.168.**.250   <none>   <none>
default   nginx-test-ltlx6   1/1   Running   0   3m38s   192.168.*.**   cn-shanghai.192.168.**.98    <none>   <none>
default   nginx-test-n6zrv   1/1   Running   0   3m38s   192.168.*.**   cn-shanghai.192.168.**.17    <none>   <none>
```
What's next
If a pod is not starting or behaving unexpectedly, see Troubleshoot pod issues.
For general workload creation issues, see Workload FAQ.
To learn about scheduling in depth, see Scheduling.