Event monitoring is a monitoring method provided by Kubernetes. Compared with resource monitoring, it offers better timeliness and accuracy and covers more scenarios. You can use node-problem-detector with the Kubernetes event center of Log Service to sink cluster events, or configure node-problem-detector to diagnose clusters and send error events to sinks such as DingTalk, Log Service, and EventBridge. This allows you to monitor exceptions and issues in clusters in real time.

Background information

Kubernetes is designed around a state machine. Events are generated when the system transitions between states. Typically, Normal events are generated when the state machine transitions to an expected state, and Warning events are generated when it transitions to an unexpected state.

Container Service for Kubernetes (ACK) provides out-of-the-box monitoring solutions for events in different scenarios. The node-problem-detector and kube-eventer components that are maintained by ACK allow you to monitor Kubernetes events.
  • node-problem-detector is a tool to diagnose Kubernetes nodes. node-problem-detector detects node exceptions, generates node events, and works with kube-eventer to raise alerts upon these events and enable closed-loop management of alerts. node-problem-detector generates node events when the following exceptions are detected: Docker engine hangs, Linux kernel hangs, outbound traffic exceptions, and file descriptor exceptions. For more information, see NPD.
  • kube-eventer is an open source event emitter that is maintained by ACK. kube-eventer sends Kubernetes events to sinks such as DingTalk, Log Service, and EventBridge. kube-eventer also provides filter conditions to filter events by level. You can use kube-eventer to collect events in real time, trigger alerts upon specific events, and asynchronously archive events (a minimal sketch of this watch-filter-sink loop follows this list). For more information, see kube-eventer.
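The following sketch illustrates the watch-filter-sink loop described above. It is not part of ACK or kube-eventer; it assumes the official Kubernetes Python client, the requests library, a kubeconfig for the cluster, and a placeholder DingTalk webhook token.

```python
# Minimal illustration of what kube-eventer does: watch cluster events, keep only
# Warning-level events, and forward them to a sink (here, a DingTalk webhook).
# The webhook token is a placeholder; use kube-eventer itself in production.
from kubernetes import client, config, watch
import requests

WEBHOOK = "https://oapi.dingtalk.com/robot/send?access_token=<YOUR_TOKEN>"

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

for item in watch.Watch().stream(v1.list_event_for_all_namespaces):
    ev = item["object"]
    if ev.type != "Warning":                       # filter events by level
        continue
    obj = ev.involved_object
    text = f"Warning: {obj.kind}/{obj.name} {ev.reason}: {ev.message}"
    requests.post(WEBHOOK, json={"msgtype": "text", "text": {"content": text}}, timeout=10)
```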

This topic describes how to configure event monitoring in the following scenarios:

Scenario 1: Use node-problem-detector with the Kubernetes event center of Log Service to sink cluster events

node-problem-detector works with third-party plug-ins to detect node exceptions and generate cluster events. A Kubernetes cluster also generates events when the status of the cluster changes. For example, when a pod is evicted or an image pull operation fails, a related event is generated. The Kubernetes event center of Log Service collects, stores, and visualizes cluster events. It allows you to query and analyze these events, and configure alerts. You can sink cluster events to the Kubernetes event center of Log Service by using the following methods.
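Before you configure a sink, you can spot-check the raw events that the event center will collect. A minimal sketch, assuming the official Kubernetes Python client and a kubeconfig for the monitored cluster:

```python
# List recent Warning events, such as pod evictions or image pull failures.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for ev in v1.list_event_for_all_namespaces(field_selector="type=Warning").items:
    obj = ev.involved_object
    print(f"{ev.last_timestamp}  {ev.reason:<20} {obj.kind}/{obj.name}: {ev.message}")
```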

Method 1: If Install node-problem-detector and Create Event Center was selected when you created the cluster, perform the following steps to go to the Kubernetes event center. For more information about how to install node-problem-detector and deploy the Kubernetes event center when you create a cluster, see Create an ACK managed cluster.

  1. Log on to the ACK console.
  2. In the left-side navigation pane of the ACK console, click Clusters.
  3. On the Clusters page, find the cluster that you want to manage and click the name of the cluster or click Details in the Actions column. The details page of the cluster appears.
  4. Choose Operations > Event Center.
  5. Click Cluster Events Management in the upper-right corner of the page to go to the K8s Event Center page. In the left-side navigation pane of the K8s Event Center page, find the cluster that you want to manage and click the Show icon to the left of the cluster name. You can view event details that are provided by the Kubernetes event center.
    The Kubernetes event center provides event overview, event details, and information about pod lifecycles. You can also customize queries and configure alerts.

Method 2: If the Kubernetes event center was not deployed when you created the cluster, perform the following steps to deploy and use the Kubernetes event center:

  1. Install node-problem-detector in the monitored cluster and enable Log Service. For more information, see Scenario 2: Configure node-problem-detector to diagnose a cluster and send events of exceptions to sinks.
    Note If node-problem-detector is deployed but Log Service is not enabled, reinstall node-problem-detector.
    1. In the left-side navigation pane of the ACK console, click Clusters.
    2. On the Clusters page, find the cluster that you want to manage and click its name or click Details in the Actions column.
    3. Choose Applications > Helm.
    4. On the Helm page, delete the ack-node-problem-detector release to uninstall node-problem-detector.
    When you configure parameters for node-problem-detector, create a Log Service project for the cluster by setting eventer.sinks.sls.enabled to true.
    After node-problem-detector is redeployed, a Log Service project is automatically created in the Log Service console for the cluster.
  2. Log on to the Log Service console to configure the Kubernetes event center for the cluster.
    1. In the Import Data section, click Kubernetes - Standard Output.
    2. Select the Log Service project that is automatically created in the preceding step from the Project drop-down list, and select k8s-event from the Logstore drop-down list.
    3. Click Next and click Complete Installation.
  3. In the Projects section of the Log Service console, find and click the Log Service project.
  4. In the left-side navigation pane, click the Dashboard icon and click Kubernetes Event Center V1.5.
    On the dashboard of the Kubernetes event center, you can view all cluster events.

Scenario 2: Configure node-problem-detector to diagnose a cluster and send events of exceptions to sinks

node-problem-detector is a tool to diagnose Kubernetes nodes. node-problem-detector detects node exceptions, generates node events, and works with kube-eventer to raise alerts upon these events and enable closed-loop management of alerts. node-problem-detector generates node events when the following exceptions are detected: Docker engine hangs, Linux kernel hangs, outbound traffic exceptions, and file descriptor exceptions. Perform the following steps to install and configure node-problem-detector.
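node-problem-detector surfaces temporary problems as Kubernetes events and permanent problems as node conditions. After you complete the following steps, you can spot-check what it reports with a sketch like this one, which assumes the official Kubernetes Python client and a kubeconfig for the cluster; the condition names you see depend on the plug-ins that are enabled.

```python
# Print the conditions of each node. Built-in conditions (Ready, MemoryPressure, ...)
# appear alongside any conditions added by node-problem-detector plug-ins.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    for cond in node.status.conditions:
        print(f"{node.metadata.name}  {cond.type:<24} {cond.status:<8} {cond.reason}")
```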

  1. Log on to the ACK console.
  2. In the left-side navigation pane, choose Marketplace > Marketplace. On the App Catalog tab, find and click ack-node-problem-detector.
    Note If the Kubernetes event center is deployed, you must first uninstall the ack-node-problem-detector component.
    1. In the left-side navigation pane of the ACK console, click Clusters.
    2. On the Clusters page, find the cluster that you want to manage and click its name or click Details in the Actions column.
    3. Choose Applications > Helm.
    4. On the Helm page, delete the ack-node-problem-detector release.
  3. On the ack-node-problem-detector page, click Deploy, select a cluster, and then configure the required parameters on the Parameters wizard page.
    The namespace is automatically set to kube-system and the release name is automatically set to ack-node-problem-detector.
    You can set the sink parameters for kube-eventer as described in the following table. A sketch that assembles these values into a Helm values file is provided after the steps in this scenario.
    Table 1. Parameters
    Parameter | Description | Default
    npd.image.repository | The image address of node-problem-detector | registry.aliyuncs.com/acs/node-problem-detector
    npd.image.tag | The image version of node-problem-detector | v0.6.3-28-160499f
    alibaba_cloud_plugins | The plug-ins that are used to diagnose nodes. For more information, see the "Node diagnosis plug-ins supported by node-problem-detector" table. | fd_check, ntp_check, network_problem_check, and inode_usage_check
    plugin_settings.check_fd_warning_percentage | The alerting threshold for the percentage of opened file descriptors that is monitored by fd_check | 80
    plugin_settings.inode_warning_percentage | The alerting threshold for inode usage | 80
    controller.regionId | The region where the cluster that has ack-node-problem-detector installed is deployed. Only cn-hangzhou, cn-beijing, cn-shenzhen, and cn-shanghai are supported. | The region where the cluster that has the plug-in installed is deployed
    controller.clusterType | The type of the cluster where ack-node-problem-detector is installed | ManagedKubernetes
    controller.clusterId | The ID of the cluster where ack-node-problem-detector is installed | The ID of the cluster where ack-node-problem-detector is installed
    controller.clusterName | The name of the cluster where ack-node-problem-detector is installed | The name of the cluster where ack-node-problem-detector is installed
    controller.ramRoleType | The type of the Resource Access Management (RAM) role that is assigned to the cluster. A value of restricted indicates that token-based authentication is enabled for the RAM role. | The default RAM role type assigned to the cluster
    eventer.image.repository | The image address of kube-eventer | registry.cn-hangzhou.aliyuncs.com/acs/eventer
    eventer.image.tag | The image version of kube-eventer | v1.6.0-4c4c66c-aliyun
    eventer.image.pullPolicy | Specifies how the kube-eventer image is pulled | IfNotPresent
    eventer.sinks.sls.enabled | Specifies whether to enable Log Service as a sink of kube-eventer | false
    eventer.sinks.sls.project | The name of the Log Service project | N/A
    eventer.sinks.sls.logstore | The name of the Logstore in the Log Service project | N/A
    eventer.sinks.dingtalk.enabled | Specifies whether to enable DingTalk as a sink of kube-eventer | false
    eventer.sinks.dingtalk.level | The level of events at which alerts are raised | warning
    eventer.sinks.dingtalk.label | The labels of the events | N/A
    eventer.sinks.dingtalk.token | The token of the DingTalk chatbot | N/A
    eventer.sinks.dingtalk.monitorkinds | The types of resources for which event monitoring is enabled | N/A
    eventer.sinks.dingtalk.monitornamespaces | The namespaces of the resources for which event monitoring is enabled | N/A
    eventer.sinks.eventbridge.enable | Specifies whether to enable EventBridge as a sink of kube-eventer | false

    Node diagnosis plug-ins supported by node-problem-detector are listed in the following table.

    Plug-in | Feature | Description
    fd_check | Checks whether the percentage of opened file descriptors on each cluster node exceeds the alerting threshold | The default threshold is 80% and can be adjusted. This plug-in consumes a considerable amount of resources to perform the check. We recommend that you do not enable this plug-in.
    ram_role_check | Checks whether cluster nodes are assigned the required RAM role and whether the AccessKey ID and AccessKey secret are configured for the RAM role | N/A
    ntp_check | Checks whether the system clocks of cluster nodes are properly synchronized through Network Time Protocol (NTP) | This plug-in is enabled by default.
    nvidia_gpu_check | Checks whether the NVIDIA GPUs of cluster nodes can generate Xid messages | N/A
    network_problem_check | Checks whether the connection tracking (conntrack) table usage on each cluster node exceeds 90% | This plug-in is enabled by default.
    inodes_usage_check | Checks whether the inode usage on the system disk of each cluster node exceeds the alerting threshold | The default threshold is 80% and can be adjusted. This plug-in is enabled by default.
    csi_hang_check | Checks whether the Container Storage Interface (CSI) plug-in works as expected on cluster nodes | N/A
    ps_hang_check | Checks whether processes in the uninterruptible sleep (D) state exist on cluster nodes | N/A
    public_network_check | Checks whether cluster nodes can access the Internet | N/A
    irqbalance_check | Checks whether the irqbalance daemon works as expected on cluster nodes | N/A
    pid_pressure_check | Checks whether the ratio of the number of processes on each cluster node to the maximum number of PIDs allowed by the kernel exceeds 85% | This plug-in is enabled by default.
    docker_offline_check | Checks whether the Docker daemon works as expected on cluster nodes | This plug-in is enabled by default.
    Note The plug-ins that are enabled by default are marked in the preceding table. These plug-ins are enabled automatically if you select Install node-problem-detector and Create Event Center when you create the cluster, or if you install the ack-node-problem-detector component on the Add-ons page. If you deploy ack-node-problem-detector from the App Catalog page, you must enable some plug-ins manually.
  4. On the Parameters wizard page, click OK.

    Go to the Clusters page, find the monitored cluster, and click its name or click Applications in the Actions column. On the page that appears, click the DaemonSets tab and check that ack-node-problem-detector-daemonset is running as expected.

    When both node-problem-detector and kube-eventer work as expected, the system sinks events and raises alerts based on the kube-eventer configurations.
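The console deployment described above is the documented path. As an illustration only, the following sketch assembles the parameters from Table 1 into a Helm values file, assuming you deploy the ack-node-problem-detector chart with Helm yourself and use Log Service as the sink; the project name is a placeholder.

```python
# Write a values.yaml for ack-node-problem-detector using keys from Table 1.
# Requires PyYAML (pip install pyyaml).
import yaml

values = {
    "npd": {
        "image": {
            "repository": "registry.aliyuncs.com/acs/node-problem-detector",
            "tag": "v0.6.3-28-160499f",
        }
    },
    "plugin_settings": {
        "check_fd_warning_percentage": 80,
        "inode_warning_percentage": 80,
    },
    "eventer": {
        "image": {
            "repository": "registry.cn-hangzhou.aliyuncs.com/acs/eventer",
            "tag": "v1.6.0-4c4c66c-aliyun",
            "pullPolicy": "IfNotPresent",
        },
        "sinks": {
            "sls": {"enabled": True, "project": "k8s-log-<CLUSTER_ID>", "logstore": "k8s-event"},
            "dingtalk": {"enabled": False},
            "eventbridge": {"enable": False},
        },
    },
}

with open("values.yaml", "w") as f:
    yaml.safe_dump(values, f, default_flow_style=False, sort_keys=False)
# Assumed usage: helm upgrade --install ack-node-problem-detector -n kube-system -f values.yaml <chart>
```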

Scenario 3: Use DingTalk to raise alerts upon Kubernetes events

Using a DingTalk chatbot to monitor Kubernetes events and raise alerts is a typical ChatOps scenario. Perform the following steps to create a DingTalk chatbot and configure kube-eventer to send alerts to it.

  1. Click Group Settings in the upper-right corner of the chat window of the DingTalk group to open the Group Settings page.
  2. Click Group Assistant, and then click Add Robot. In the ChatBot dialog box, click the + icon and select the chatbot that you want to use. In this example, Custom is selected.
  3. On the Robot details page, click Add to open the Add Robot page.
  4. Set the following parameters, read and accept the DingTalk Custom Robot Service Terms of Service, and then click Finished.
    Parameter | Description
    Edit profile picture | The avatar of the chatbot. This parameter is optional.
    Chatbot name | The name of the chatbot.
    Add to Group | The DingTalk group to which the chatbot is added.
    Security settings | Three types of security settings are supported: custom keywords, additional signatures, and IP addresses (or CIDR blocks). Only custom keywords are supported for filtering alerts that are raised upon cluster events.

    Select Custom Keywords and enter Warning to receive alerts. If the chatbot frequently sends messages, you can add more keywords to filter the messages. You can add up to 10 keywords. Messages from ACK are also filtered through these keywords before the chatbot sends them to the DingTalk group.

  5. Click Copy to copy the webhook URL.
    Copy the webhook URL
    Note On the ChatBot page, find the chatbot and click the Settings icon to perform the following operations:
    • Modify the avatar and name of the chatbot.
    • Enable or disable message push.
    • Reset the webhook URL.
    • Remove the chatbot.
  6. Log on to the ACK console.
  7. In the left-side navigation pane, choose Marketplace > Marketplace. On the Marketplace page, click the App Catalog tab, and then find and click ack-node-problem-detector.
    Note If the Kubernetes event center is deployed, you must first uninstall the ack-node-problem-detector component.
    1. In the left-side navigation pane of the ACK console, click Clusters.
    2. On the Clusters page, find the cluster that you want to manage and click its name or click Details in the Actions column.
    3. Choose Applications > Helm.
    4. On the Helm page, delete the ack-node-problem-detector release.
  8. On the ack-node-problem-detector page, click Deploy, select a cluster and namespace, and then click Next. On the Parameters wizard page, configure the required parameters and click OK.
    • In the npd section, set the enabled parameter to false.
    • In the eventer.sinks.dingtalk section, set the enabled parameter to true.
    • Set the token parameter to the token that is contained in the webhook URL that you copied in Step 5.

Expected result:

kube-eventer takes effect about 30 seconds after the deployment is completed. When an event at or above the configured level occurs, you will receive an alert message in the DingTalk group.
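To verify the chatbot and keyword setup independently of kube-eventer, you can post a test message to the webhook. A minimal sketch, assuming the requests library; the token is a placeholder, and the message must contain one of the custom keywords (Warning in this example) to pass the security filter.

```python
import requests

WEBHOOK = "https://oapi.dingtalk.com/robot/send?access_token=<YOUR_TOKEN>"

resp = requests.post(
    WEBHOOK,
    json={"msgtype": "text", "text": {"content": "Warning: kube-eventer webhook test"}},
    timeout=10,
)
print(resp.status_code, resp.json())  # errcode 0 means the message was accepted
```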

Scenario 4: Sink Kubernetes events to Log Service

You can sink Kubernetes events to Log Service for persistent storage, and archive and audit these events. For more information, see Create and use an event center.
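Once events are stored in Log Service, you can also query them programmatically. A minimal sketch, assuming the aliyun-log-python-sdk, an AccessKey pair with read permission on the project, and placeholder project, Logstore, and endpoint names that match your setup:

```python
# Query Warning events collected in the last hour from the event Logstore.
import time
from aliyun.log import LogClient

client = LogClient("cn-hangzhou.log.aliyuncs.com", "<ACCESS_KEY_ID>", "<ACCESS_KEY_SECRET>")
now = int(time.time())

resp = client.get_log(
    project="k8s-log-<CLUSTER_ID>",   # placeholder project name
    logstore="k8s-event",             # placeholder Logstore name
    from_time=now - 3600,
    to_time=now,
    query="Warning",
)
print("matched logs:", resp.get_count())
resp.log_print()                      # dump the returned log entries
```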

  1. Create a Log Service project and a Logstore.
    1. Log on to the Log Service console.
    2. In the Projects section, click Create Project. In the Create Project panel, set the parameters and click OK.
      In this example, a Log Service project named k8s-log4j is created in the China (Hangzhou) region where the monitored ACK cluster is deployed.
      Note We recommend that you create a Log Service project in the same region as your cluster. When a Log Service project and a cluster are deployed in the same region, the log is transmitted over the internal network. This enables the real-time collection and quick retrieval of log data. This also avoids cross-region transmission, which requires additional bandwidth and time costs.
    3. In the Projects section, find and click the k8s-log4j project. The details page of the project appears.
    4. On the Logstores tab, click the + icon to open the Create Logstore panel.
    5. In the Create Logstore panel, set the parameters and click OK.
      In this example, a Logstore named k8s-logstore is created.
    6. After the k8s-logstore Logstore is created, instructions on how to use the Data Import wizard appear on the page. Click Data Import Wizard. The Import Data dialog box appears.
    7. Select log4jAppender and configure the settings by following the steps on the page.
      In this example, Log4jAppender is configured with the default settings. You can also customize the settings to meet your business requirements.
  2. Configure kube-eventer to sink cluster events to Log Service.
    1. Log on to the ACK console.
    2. In the left-side navigation pane, choose Marketplace > Marketplace. On the App Catalog tab, find and click ack-node-problem-detector.
      Note If the Kubernetes event center is deployed, you must first uninstall the ack-node-problem-detector component.
      1. In the left-side navigation pane of the ACK console, click Clusters.
      2. On the Clusters page, find the cluster that you want to manage and click its name or click Details in the Actions column.
      3. Choose Applications > Helm.
      4. On the Helm page, delete the ack-node-problem-detector release.
    3. On the ack-node-problem-detector page, click Deploy, select a cluster and namespace, and then click Next. On the Parameters wizard page, configure the required parameters and click OK to deploy eventer in the cluster.
      • In the npd section, set the enabled parameter to false.
      • In the eventer.sinks.sls section, set the enabled parameter to true.
      • Set the project and logstore parameters to the names of the Log Service project and Logstore that you created in Step 1.

        If you do not customize Project when you create the ACK cluster, the Project parameter is set to k8s-log-{YOUR_CLUSTER_ID} by default.

  3. An event is generated after an operation is performed on the cluster, such as a pod deletion or an application creation. You can log on to the Log Service console to view the collected log data. For more information, see Consume log data.
  4. Set indexes and archiving. For more information, see Create indexes.
    1. Log on to the Log Service console. In the Projects section, find and click the name of the project.
    2. Click the Logstore management icon next to the name of the Logstore, and then select Search & Analysis.
    3. In the upper-right corner of the page that appears, click Enable Index.
    4. In the Search & Analysis panel, set the parameters.
    5. Click OK.
      The log query and analysis page appears.
      Note
      • The index configuration takes effect within 1 minute.
      • A newly enabled or modified index applies only to data that is imported after the index is enabled or modified.
    6. If you want to implement offline archiving and computing, you can ship data from the Logstore to Object Storage Service (OSS). For more information, see Ship log data to OSS.

Scenario 5: Sink Kubernetes events to EventBridge

EventBridge is a serverless event service provided by Alibaba Cloud. Alibaba Cloud services, custom applications, and software as a service (SaaS) applications can connect to EventBridge in a standardized and centralized manner. In addition, EventBridge can route events among these applications based on the standardized CloudEvents 1.0 protocol. ACK events can be sunk to EventBridge, which allows you to build a loosely coupled, distributed, event-driven architecture in EventBridge. For more information about EventBridge, see What is EventBridge?.
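For reference, the following sketch shows the shape of a CloudEvents 1.0 envelope. The field values are made up for a hypothetical pod-eviction event; the exact source and type strings used by the ACK integration are not described here.

```python
import json
import uuid
from datetime import datetime, timezone

cloud_event = {
    "specversion": "1.0",                       # required CloudEvents 1.0 attributes
    "id": str(uuid.uuid4()),
    "source": "acs:cs:cn-hangzhou:<ACCOUNT_ID>:cluster/<CLUSTER_ID>",  # placeholder
    "type": "kubernetes.event",                                        # placeholder
    "time": datetime.now(timezone.utc).isoformat(),
    "datacontenttype": "application/json",
    "subject": "default/nginx-xxxxx",                                  # placeholder
    "data": {
        "reason": "Evicted",
        "type": "Warning",
        "message": "Pod was evicted because the node was under memory pressure.",
    },
}
print(json.dumps(cloud_event, indent=2))
```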

  1. Activate EventBridge. For more information, see Activate EventBridge and grant permissions to a RAM user.
  2. Log on to the ACK console.
  3. In the left-side navigation pane, choose Marketplace > Marketplace. On the App Catalog tab, find and click ack-node-problem-detector.
    Note If the Kubernetes event center is deployed, you must first uninstall the ack-node-problem-detector component.
    1. In the left-side navigation pane of the ACK console, click Clusters.
    2. On the Clusters page, find the cluster that you want to manage and click its name or click Details in the Actions column.
    3. Choose Applications > Helm.
    4. On the Helm page, delete the ack-node-problem-detector release.
  4. On the ack-node-problem-detector page, click Deploy, select a cluster and namespace, and then click Next. On the Parameters wizard page, configure the required parameters and click OK to deploy ack-node-problem-detector in the cluster.
    Configure the Kubernetes event center and enable EventBridge as a sink of Kubernetes events.
    • In the npd section, set the enabled parameter to true.
    • Set the eventer.sinks.eventbridge.enable parameter to true.
  5. After EventBridge is enabled as a sink of Kubernetes events, you can view Kubernetes events in the EventBridge console.
    1. Log on to the EventBridge console.
    2. In the left-side navigation pane, click Event Buses.
    3. On the Event Buses page, find the event bus that you want to manage and click Event Tracking in the Actions column.
    4. Select a query method, set query conditions, and then click Query.
    5. In the list of events, find the event that you want to view and click Details in the Actions column.
    In the Event Details dialog box, you can view the details of the event.