Container Service for Kubernetes:Use SysOM to locate container memory issues

Last Updated: Feb 02, 2024

It is difficult to troubleshoot container service failures because the container engine layer is not transparent to users. To address this issue, Container Service for Kubernetes (ACK) provides OS kernel-level container monitoring capabilities, which make the container engine layer more reliable and transparent. This helps you efficiently migrate containerized applications. This topic describes how to use System Observer Monitoring (SysOM) to locate container memory issues.

Scenarios

Many enterprises use containerization to build IT architectures to reduce costs, increase efficiency, and improve flexibility and scalability.

However, containerization reduces the transparency of the container engine layer, which makes issues such as Out-Of-Memory (OOM) errors harder to diagnose. In Kubernetes, an OOM issue occurs when a pod occupies excessive amounts of memory and its memory usage exceeds the specified upper limit.

To resolve this issue, the Container Service for Kubernetes (ACK) team works with the GuestOS team of Alibaba Cloud to provide OS kernel-level container monitoring capabilities. These capabilities can help you better manage and optimize the memory usage of containers and avoid OOM issues caused by missing memory.

Container memory

The memory of a container consists of application memory, kernel memory, and free memory.

Application memory: The memory occupied by a running application. Application memory consists of the following types:

  • anon: Anonymous memory that is not associated with files. For example, anonymous memory is used for the heap, stack, and data segments in a process. Heap memory is allocated by brk and mmap.

  • filecache: The memory that is used to cache file data read from disks and written to disks. Cache that is frequently used is called active file cache and is usually not reclaimed by the system.

  • buffer: The memory used to store the metadata of the block device and file system.

  • hugetlb: The memory occupied by huge pages in a file system.

Kernel memory: Memory used by the OS kernel. Kernel memory consists of the following types:

  • Slab: A caching memory allocator that is used to manage memory allocation for kernel objects.

  • Vmalloc: A kernel mechanism that is used to allocate large memory regions in the virtual memory space.

  • allocpage: A mechanism that is used to allocate local memory.

  • Others: KernelStack, PageTables, and Reserve.

Free memory: Memory that is not used.
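To make these categories concrete, the following Python sketch reads /proc/meminfo on a Linux node and groups a few well-known fields into the same three categories. This is only a node-wide approximation, not the per-container breakdown that SysOM shows, and the field-to-category mapping (for example, AnonPages to anon and Cached to filecache) is an assumption made for illustration.

```python
#!/usr/bin/env python3
"""Rough node-level view of the memory categories described above.

Minimal sketch: reads /proc/meminfo and maps well-known fields onto the
application/kernel/free categories. Node-wide approximation only.
"""

def read_meminfo(path="/proc/meminfo"):
    """Return /proc/meminfo as a dict of field name -> kibibytes."""
    info = {}
    with open(path) as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])  # values are reported in kB
    return info

def summarize(info):
    """Group a few /proc/meminfo fields into the categories from the table above."""
    application = {
        "anon": info.get("AnonPages", 0),
        "filecache": info.get("Cached", 0),
        "buffer": info.get("Buffers", 0),
        "hugetlb": info.get("Hugetlb", 0),  # field exists only on recent kernels
    }
    kernel = {
        "Slab": info.get("Slab", 0),
        "Vmalloc": info.get("VmallocUsed", 0),
        "KernelStack": info.get("KernelStack", 0),
        "PageTables": info.get("PageTables", 0),
    }
    free = info.get("MemFree", 0)
    return application, kernel, free

if __name__ == "__main__":
    app, kern, free = summarize(read_meminfo())
    print("Application memory (kB):", app)
    print("Kernel memory (kB):", kern)
    print("Free memory (kB):", free)
```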

How it works

Kubernetes uses memory working sets to monitor and manage the memory occupied by containers. When the memory occupied by containers exceeds the specified memory upper limit or the available memory on the node becomes insufficient, Kubernetes determines whether to evict or kill containers based on memory working sets. SysOM allows you to identify pods whose memory working sets are large and monitor and analyze memory usage in a comprehensive and fine-grained manner. This way, O&M engineers and developers can quickly locate and fix pods whose memory working sets are large and improve the performance and stability of containers.

The memory working set of a container refers to the amount of memory that is occupied by the container within a period of time. The memory is required by the container to run as expected. Memory working set = InactiveAnon + ActiveAnon + ActiveFile. The total amount of anonymous memory for an application is equal to the sum of InactiveAnon and ActiveAnon. ActiveFile is equal to the size of the active file cache of the application.
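The following Python sketch is a minimal illustration of this formula; it is not part of SysOM or Kubernetes. It parses a container's memory.stat file from the cgroup file system, where both cgroup v1 and v2 expose inactive_anon, active_anon, and active_file in bytes. The cgroup path is a hypothetical placeholder that depends on the cgroup version, cgroup driver, and container runtime on the node.

```python
#!/usr/bin/env python3
"""Minimal sketch: compute a container's memory working set from cgroup stats."""

def read_memory_stat(path):
    """Parse a cgroup memory.stat file into a dict of field -> bytes."""
    stats = {}
    with open(path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    return stats

def working_set_bytes(stats):
    """Working set = InactiveAnon + ActiveAnon + ActiveFile (formula used in this topic)."""
    return (
        stats.get("inactive_anon", 0)
        + stats.get("active_anon", 0)
        + stats.get("active_file", 0)
    )

if __name__ == "__main__":
    # Hypothetical cgroup path for one container; adjust it for your node.
    stat_path = "/sys/fs/cgroup/kubepods.slice/<pod-slice>/<container-scope>/memory.stat"
    stats = read_memory_stat(stat_path)
    print("memory working set (bytes):", working_set_bytes(stats))
```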

Use the SysOM feature

The OS kernel-level dashboards of SysOM allow you to view the system metrics, such as memory, network, and storage metrics, of pods and nodes in real time. For more information about SysOM metrics, see Kernel-level container monitoring based on SysOM.

  1. Log on to the ACK console. In the left-side navigation pane, click Clusters.

  2. On the Clusters page, click the name of the cluster that you want to manage and choose Operations > Prometheus Monitoring in the left-side navigation pane.

  3. On the Prometheus Monitoring page, click the Others tab and then click the ACK Sysom Pod Kernel Resource Monitor tab to view pod memory distribution on the dashboard.

    Use the following formula to locate the missing memory issue: Memory working set = InactiveAnon + ActiveAnon + ActiveFile.

    1. The Pod memory distribution pie chart on the dashboard provided by SysOM displays OS kernel-level pod memory distribution. Compare the sizes of the inactive_anon, active_anon, and active_file memory types in the pie chart and find the memory type that accounts for the largest proportion.

      In this example, the inactive_anon memory type accounts for the largest proportion.

    2. In the Pod Resource Analysis section, use the top command to find the pod that occupies the most InactiveAnon memory in the cluster.

      In this example, the arms-prom pod occupies the most InactiveAnon memory.

    3. In the Pod Memory Details section, you can view the memory usage details of pods. Based on the pod memory metrics displayed on the dashboard, such as pod cache, InactiveFile, InactiveAnon, and Dirty Memory, you can identify common missing memory issues in pods.


  4. In the Pod File Cache section, locate the cause of large cache memory.

    Because the active file cache is counted in the memory working set, a pod that reads or writes large numbers of files can have a memory working set that exceeds the memory it actually uses. This missing memory adversely affects the performance of the application to which the pod belongs. For a rough way to measure how much of the working set comes from file cache, see the sketch after this procedure.


  5. Fix the missing memory issue.

    You can use the fine-grained scheduling feature provided by ACK to fix the missing memory issue. For more information, see Memory QoS for containers.
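The following Python sketch complements step 4. Under the same assumption as the earlier sketch, namely that you can locate the target container's memory.stat file in the cgroup file system (the path below is a placeholder), it reports how much of the memory working set comes from active file cache rather than anonymous memory, which is the typical source of the missing memory described in this topic.

```python
#!/usr/bin/env python3
"""Sketch for step 4: how much of the working set is active file cache."""

def missing_memory_report(stat_path):
    """Summarize the cache share of a container's working set from memory.stat."""
    stats = {}
    with open(stat_path) as f:
        for line in f:
            key, value = line.split()
            stats[key] = int(value)
    anon = stats.get("inactive_anon", 0) + stats.get("active_anon", 0)
    active_file = stats.get("active_file", 0)
    working_set = anon + active_file  # formula used in this topic
    cache_share = active_file / working_set if working_set else 0.0
    return {
        "working_set_bytes": working_set,
        "anonymous_bytes": anon,                 # memory the application allocated itself
        "active_file_cache_bytes": active_file,  # cache that is counted in the working set
        "cache_share_of_working_set": round(cache_share, 3),
    }

if __name__ == "__main__":
    # Hypothetical path; replace it with the target container's memory.stat.
    print(missing_memory_report("/sys/fs/cgroup/<container-cgroup>/memory.stat"))
```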
