
Why Your Active Java Application Containers are Killed

Container technology is popular nowadays, but containerized Java applications can run into a problem related to JVM heap size.

When deploying Java applications with container technology, you may run into a puzzling problem: although you have set container resource limits, active Java application containers are inexplicably killed by the OOM Killer.

This problem is the result of a very common mistake: failing to correctly match the container resource limits with the corresponding JVM (Java Virtual Machine) heap size.

In this article, we take a Tomcat application as an example.

  1. The pod contains an initialization container that copies a JSP application to the "webapps" directory of the Tomcat container. Note: in the image, the JSP application index.jsp is used to display JVM and system resource information.
  2. The Tomcat container remains active, and we have restricted its maximum memory usage to 256 MB.

But when we check the memory status, the system memory reported inside the container is 3,951 MB, and the maximum JVM heap size is 878 MB. Why is this the case, given that we set the container memory limit to 256 MB? In this situation, once the application's memory usage exceeds 256 MB, the JVM never gets the chance to run garbage collection (GC). Instead, the JVM process is killed directly by the system's OOM killer.
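You can check from inside the container what the JVM believes about its environment. Below is a minimal Java sketch (the class name is illustrative) that prints the heap limits and CPU count the JVM has derived:

```java
public class JvmMemoryProbe {
    public static void main(String[] args) {
        // These values reflect the JVM's view of its environment, which
        // in an unaware JVM comes from the host, not the container limit.
        long maxHeap = Runtime.getRuntime().maxMemory();
        long totalHeap = Runtime.getRuntime().totalMemory();
        int processors = Runtime.getRuntime().availableProcessors();
        System.out.printf("Max heap:   %d MB%n", maxHeap / (1024 * 1024));
        System.out.printf("Total heap: %d MB%n", totalHeap / (1024 * 1024));
        System.out.printf("CPUs seen:  %d%n", processors);
    }
}
```

If the reported max heap is far larger than the container's memory limit, the JVM is sizing itself from the host's resources.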

The root cause of the problem:

  1. If we do not set a JVM heap size, the maximum heap size is set by default based on the memory size of the host environment.
  2. Docker containers use cgroups to limit the resources used by processes. Therefore, if the JVM in the container still uses the default settings based on the host environment memory and CPU cores, this results in incorrect JVM heap calculation.
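On a 64-bit server-class JVM, the default maximum heap is roughly one quarter of the physical RAM the JVM can see (the exact ergonomics vary by JVM version and flags). A sketch comparing that estimate to the actual limit, assuming the HotSpot-specific `com.sun.management` API is available (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;

public class DefaultHeapEstimate {
    public static void main(String[] args) {
        // getTotalPhysicalMemorySize() is a HotSpot extension of the
        // standard OperatingSystemMXBean.
        com.sun.management.OperatingSystemMXBean os =
            (com.sun.management.OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        long physical = os.getTotalPhysicalMemorySize();
        long actualMax = Runtime.getRuntime().maxMemory();
        System.out.printf("Physical RAM: %d MB%n", physical >> 20);
        System.out.printf("~1/4 of RAM:  %d MB%n", (physical / 4) >> 20);
        System.out.printf("JVM max heap: %d MB%n", actualMax >> 20);
    }
}
```

In an unaware JVM inside a container, "physical RAM" here is the host's RAM, which is why the default heap can vastly exceed the container limit.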

The Java community now supports automatic detection of container resource limits in Java SE 8u131+ and JDK 9: https://blogs.oracle.com/java-platform-group/java-se-support-for-docker-cpu-and-memory-limits . For details on how to solve this problem, see Kubernetes Demystified: Restrictions on Java Application Resources.
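Under the hood, a container-aware JVM reads the memory limit from the cgroup filesystem rather than from the host. A minimal sketch of that lookup, assuming the common cgroup v1 and v2 mount paths (the class name is illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupLimitProbe {
    // cgroup v1 and v2 expose the memory limit at different paths.
    private static final String[] LIMIT_FILES = {
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  // cgroup v1
        "/sys/fs/cgroup/memory.max"                     // cgroup v2
    };

    public static void main(String[] args) throws Exception {
        for (String file : LIMIT_FILES) {
            Path path = Paths.get(file);
            if (Files.exists(path)) {
                String limit = new String(Files.readAllBytes(path)).trim();
                System.out.println(file + " = " + limit);
                return;
            }
        }
        System.out.println("No cgroup memory limit file found (not in a container?)");
    }
}
```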

Related Blog Posts

How Does Garbage Collection Work in Java?

In this article, we dive into garbage collection (GC) in Java and look at everything involved in it.

The Java heap is the largest block of memory managed by the JVM, and it is the main space that the garbage collector manages. Here, we mainly analyze the structure of the Java heap.

The Java heap is mainly divided into two spaces: the young generation and the old generation. The young generation is divided into the Eden space and the Survivor space, while the Survivor space is further divided into the From space and the To space. We may have the following questions: Why is the Survivor space required? Why is the Survivor space subdivided into two more spaces? To answer these questions, let's take a detailed look at how an object is created and deleted.
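The object lifecycle described above can be observed from code: allocate many short-lived objects and then read the collector beans, which report how many minor (young-generation) collections occurred. A minimal sketch (the class name is illustrative; collector names depend on the GC in use):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class YoungGenDemo {
    public static void main(String[] args) {
        // Keep a small ring of references so the allocations are not
        // optimized away; each new buffer quickly becomes garbage.
        byte[][] ring = new byte[64][];
        for (int i = 0; i < 1_000_000; i++) {
            ring[i % ring.length] = new byte[1024]; // ~1 GB allocated in total
        }
        // Most of these buffers die young in the Eden space and are
        // reclaimed by minor collections.
        for (GarbageCollectorMXBean gc :
                ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```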

JProfiler Best Practices: Powerful Performance Diagnostic Tool

This article describes some common Java performance diagnostic tools and highlights the basic principles and best practices of JProfiler.

The Java Development Kit (JDK) offers many built-in command-line tools that help you obtain information about the target Java Virtual Machine (JVM) from different aspects and layers. These include jmap, which lets you obtain memory-related information about the target Java process, such as the usage of the different Java heap spaces, statistics on objects in the heap, and loaded classes.
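While jmap attaches to a process from outside, the same heap figures can also be read in-process through the standard MemoryMXBean. A minimal sketch (the class name is illustrative):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsageProbe {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        MemoryUsage nonHeap = mem.getNonHeapMemoryUsage();
        // used <= committed <= max for the heap at any point in time.
        System.out.printf("Heap:     used=%d MB, committed=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20,
                heap.getMax() >> 20);
        System.out.printf("Non-heap: used=%d MB, committed=%d MB%n",
                nonHeap.getUsed() >> 20, nonHeap.getCommitted() >> 20);
    }
}
```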

Related Documentation

JVM monitoring

The application monitoring function of Application Real-Time Monitoring Service (ARMS) provides the Java Virtual Machine (JVM) monitoring function. It monitors heap metrics, non-heap metrics, direct buffer metrics, memory-mapped buffer metrics, garbage collection (GC) details, and the number of JVM threads. This topic describes the JVM monitoring feature and how to monitor JVM metrics.
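Most of these metrics are exposed by the JVM itself through platform MXBeans, which is what monitoring agents typically sample. A minimal sketch of reading the buffer-pool and thread metrics (the class name is illustrative):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

public class BufferAndThreadProbe {
    public static void main(String[] args) {
        // The "direct" and "mapped" pools correspond to direct-buffer
        // and memory-mapped-buffer metrics.
        for (BufferPoolMXBean pool :
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s buffers: count=%d, used=%d bytes%n",
                    pool.getName(), pool.getCount(), pool.getMemoryUsed());
        }
        System.out.println("Live JVM threads: "
                + ManagementFactory.getThreadMXBean().getThreadCount());
    }
}
```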

Related Products

Application Real-Time Monitoring Service

Application Real-Time Monitoring Service (ARMS) is an end-to-end Alibaba Cloud monitoring service for Application Performance Management (APM). You can quickly develop real-time business monitoring capabilities using the frontend monitoring, application monitoring, and custom monitoring features provided by ARMS.

Enterprise Distributed Application Service

Enterprise Distributed Application Service (EDAS) is a PaaS platform that provides a variety of application deployment options and microservices solutions to help you monitor, diagnose, operate, and maintain your applications.


Alibaba Clouder
