
Simple Log Service: Collect monitoring data about Kubernetes resources

Last Updated: Mar 08, 2024

Kubernetes provides multiple metrics. You can collect the metrics from Kubernetes clusters to the Full-stack Observability application for visualization. Simple Log Service and Alibaba Cloud OpenAnolis have jointly developed the non-intrusive monitoring feature. You can use the feature to analyze network traffic flows and identify bottleneck issues for Kubernetes clusters in cloud-native scenarios.

Prerequisites

A Full-stack Observability instance is created. For more information, see Create an instance.

Limits

If you want to enable data plane monitoring on a host, the operating system of the host must be Linux x86_64, and the kernel version must be 4.19 or later. If the host runs CentOS 7.6 to 7.9, the kernel version must be 3.10.0. You can run the uname -r command to view the kernel version of the operating system.
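The check above can be scripted against the release string that uname -r reports. The following is a minimal sketch; the kernel_supported helper is hypothetical and not part of any Simple Log Service tooling:

```shell
# Hypothetical helper: decide whether a kernel release string meets the
# data plane monitoring requirement (4.19 or later, or 3.10 on CentOS 7.6-7.9).
kernel_supported() {
  ver="$1"
  major=$(echo "$ver" | cut -d. -f1)
  minor=$(echo "$ver" | cut -d. -f2)
  if [ "$major" -gt 4 ]; then
    echo "yes"
  elif [ "$major" -eq 4 ] && [ "$minor" -ge 19 ]; then
    echo "yes"
  elif [ "$major" -eq 3 ] && [ "$minor" -eq 10 ]; then
    echo "yes (CentOS 7.6-7.9 only)"
  else
    echo "no"
  fi
}

# Check the current host.
kernel_supported "$(uname -r)"
```

Note that the 3.10 branch applies only on the CentOS 7.6 to 7.9 releases; on other distributions a 3.x kernel does not qualify.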

Step 1: Create a Logtail configuration

  1. Log on to the Simple Log Service console.

  2. In the Log Application section, click the Intelligent O&M tab. Then, click Full-stack Observability.

  3. On the Simple Log Service Full Stack Observability page, click the instance that you want to manage.

  4. In the left-side navigation pane, click Full-stack Monitoring.

    The first time you use Full-stack Monitoring in the instance, click Enable.

  5. In the left-side navigation tree, click Data Import. On the Data Import Configurations page, find Resource Monitoring in the Kubernetes Monitoring section.

    The first time you create a Logtail configuration for Kubernetes monitoring data, turn on the switch to go to the configuration page. If you have already created a Logtail configuration, click the Create icon to go to the configuration page.

  6. Create a machine group.

    If a machine group already exists, skip this step.

  7. Download the custom resource definition (CRD) template tool.

    • Outside a cluster: If you install the CRD template tool outside a cluster, the ~/.kube/config configuration file must exist for the logon account and must grant the permissions to manage the cluster. You can run kubectl commands to configure the ~/.kube/config configuration file.

    • In a cluster: If you install the CRD template tool in a cluster, the system creates CRDs based on the permissions of the installed alibaba-log-controller component. Use this method if the ~/.kube/config configuration file does not exist or you cannot access the cluster due to network connection issues.
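Before choosing a method, it can help to confirm whether a kubeconfig file exists for the logon account. The following is a minimal sketch; the kubeconfig_status helper is hypothetical:

```shell
# Hypothetical helper: report whether a kubeconfig file exists at the given
# path (defaults to ~/.kube/config for the logon account).
kubeconfig_status() {
  cfg="${1:-$HOME/.kube/config}"
  if [ -f "$cfg" ]; then
    echo "present"    # installing outside the cluster is possible
  else
    echo "missing"    # fall back to installing in the cluster
  fi
}

kubeconfig_status
```

Even when the file is present, it must grant cluster management permissions; a file that exists but points to an unreachable cluster still requires the in-cluster method.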

    Outside a cluster

    1. Log on to a cluster and run one of the following commands to download the CRD template tool based on your region:

      • Regions in the Chinese mainland

        curl https://logtail-release-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
      • Regions outside the Chinese mainland

        curl https://logtail-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
    2. Install the CRD template tool. After you install the CRD template tool, a tool named sls-crd-tool is generated in the folder where the CRD template tool resides.

      tar -xvf /tmp/crd-tool.tar.gz -C /tmp && chmod 755 /tmp/crd-tool/install.sh && sh -x /tmp/crd-tool/install.sh
    3. Run the ./sls-crd-tool list command to check whether the CRD template tool is installed. If a value is returned, the CRD template tool is installed.

    In a cluster

    1. Log on to a cluster and go to the alibaba-log-controller container.

      kubectl get pods -n kube-system -o wide | grep alibaba-log-controller | awk '{print $1}'
      kubectl exec -it {pod} -n kube-system -- bash
    2. Download the CRD template tool.

      • If you can download resources over the Internet from the cluster, run one of the following commands to download the CRD template tool.

        • Regions in the Chinese mainland

          curl https://logtail-release-cn-hangzhou.oss-cn-hangzhou.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
        • Regions outside the Chinese mainland

          curl https://logtail-release-ap-southeast-1.oss-ap-southeast-1.aliyuncs.com/kubernetes/crd-tool.tar.gz -o /tmp/crd-tool.tar.gz
      • If you cannot download resources over the Internet from the cluster, you can download the CRD template tool outside the cluster. Then, you can run the kubectl cp <source> <destination> command or use the file upload feature of Container Service for Kubernetes (ACK) to copy the CRD template tool to the preceding container.

    3. Install the CRD template tool. After you install the CRD template tool, a tool named sls-crd-tool is generated in the folder where the CRD template tool resides.

      tar -xvf /tmp/crd-tool.tar.gz -C /tmp && chmod 755 /tmp/crd-tool/install.sh && sh -x /tmp/crd-tool/install.sh
    4. Run the ./sls-crd-tool list command to check whether the CRD template tool is installed. If a value is returned, the CRD template tool is installed.
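For the no-Internet case in step 2, kubectl cp takes the <namespace>/<pod>:<path> form on the in-cluster side. The following sketch only builds and prints the command; the pod name is a placeholder that must be replaced with the name returned by kubectl get pods:

```shell
# Placeholder pod name: substitute the name returned by
# `kubectl get pods -n kube-system -o wide | grep alibaba-log-controller`.
POD="alibaba-log-controller-xxxxx"

# kubectl cp uses the <namespace>/<pod>:<path> form for the in-cluster side.
CP_CMD="kubectl cp /tmp/crd-tool.tar.gz kube-system/${POD}:/tmp/crd-tool.tar.gz"
echo "$CP_CMD"   # printed here as a sketch; run it against a live cluster
```

After the archive is copied into the container, continue with step 3 to extract and install it.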

  8. Use the CRD template tool to generate a Logtail configuration.

    1. Run the following command to view the definition of the template:

      ./sls-crd-tool get k8sMonitor
    2. In the template file, replace the REQUIRED parameter with the ID of the current instance. Then, run the following command to preview the parameter values:

      ./sls-crd-tool apply -f template-k8sMonitor.yaml --create=false
    3. Check whether the project parameter specifies the project to which the current instance belongs. If yes, run the following command to deploy the template file:

      ./sls-crd-tool apply -f template-k8sMonitor.yaml
    4. Go to the Data Import Configurations page. If the Logtail configuration is generated, the number next to Configurations in the Resource Monitoring section is incremented by one. If the Logtail configuration is not generated, the number remains unchanged.

Resources for the monitoring component

The Kubernetes resources that are used to collect Kubernetes monitoring data are all created in the sls-monitoring namespace. The resources include one Deployment, one StatefulSet, one DaemonSet, and seven AliyunLogConfig CRDs.

The following AliyunLogConfig CRDs are created:

• {instance-id}-k8s-metas: Used to collect the configuration data of Kubernetes clusters, including the name, namespace, label, image, and limit of each Deployment, pod, Ingress, and Service. By default, the collected data is stored in a Logstore named {instance}-metas.

• {instance-id}-k8s-metrics: Used to collect the metric data of Kubernetes clusters, including the CPU, memory, and network data of pods and containers. By default, the collected data is stored in a Metricstore named {instance}-k8s-metrics.

• {instance-id}-k8s-metrics-kubelet: Used to collect the metric data of Kubernetes kubelet. By default, the collected data is stored in a Metricstore named {instance}-k8s-metrics.

• {instance-id}-node-metas: Used to collect the configuration data of Kubernetes nodes, including CPU models and memory sizes. By default, the collected data is stored in a Logstore named {instance}-metas.

• {instance-id}-node-metrics: Used to collect the metric data of Kubernetes nodes, including CPU utilization and memory usage. By default, the collected data is stored in a Metricstore named {instance}-node-metrics.
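The resources in the sls-monitoring namespace can be inspected directly with kubectl. The sketch below assumes cluster access and assumes that aliyunlogconfigs is the plural resource name registered by alibaba-log-controller; it is guarded so it degrades gracefully where kubectl or the cluster is unavailable:

```shell
# Inspect the monitoring resources created for Kubernetes monitoring data.
NS="sls-monitoring"
if command -v kubectl >/dev/null 2>&1; then
  # Workloads: one Deployment, one StatefulSet, and one DaemonSet.
  kubectl get deployments,statefulsets,daemonsets -n "$NS" || echo "cluster unreachable"
  # The AliyunLogConfig CRD instances (assumed plural resource name).
  kubectl get aliyunlogconfigs -n "$NS" || echo "cluster unreachable"
else
  echo "kubectl not found"
fi
```

If the expected resources are missing, re-run the ./sls-crd-tool apply command from Step 8 and check its output for errors.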

What to do next

After you collect Kubernetes monitoring data to Full-stack Observability, Full-stack Observability automatically creates dedicated dashboards for the monitoring data. You can use the dashboards to analyze the monitoring data. For more information, see View dashboards.