
Introduction to Java Virtual Machine (JVM) Performance Tuning

The Java Virtual Machine (JVM) performs well and is widely used. In this article, we introduce performance tuning for the JVM.

JVM tuning is a systematic and complex task. At present, the automatic tuning built into JVMs is already very good, and the basic default parameters can keep common applications running stably. For some teams, application performance is not a high priority, and in that case the default garbage collector is usually adequate. Tuning should be based on your own situation.

JVM tuning mainly involves optimizing the garbage collector so that applications running on the VM achieve higher throughput while using less memory and experiencing lower latency. As noted above, this does not mean that the less memory used or the lower the latency, the better the performance; it is about finding the optimal balance.
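
As a rough, hedged illustration of what such tuning looks like in practice, the collector and heap bounds are normally set through JVM startup flags. The flag values and the my-app.jar name below are placeholders, not recommendations:

```
# Illustrative startup flags only; appropriate values depend on your own measurements.
#   -Xms / -Xmx            fix the initial and maximum heap size
#   -XX:+UseG1GC           select the G1 collector (the default since JDK 9)
#   -XX:MaxGCPauseMillis   set G1's pause-time goal in milliseconds
#   -Xlog:gc*              GC logging on JDK 9+ (JDK 8: -XX:+PrintGCDetails -Xloggc:gc.log)
java -Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=200 \
     -Xlog:gc*:file=gc.log -jar my-app.jar
```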

Performance Tuning Principles

During the tuning process, the following three principles can help make garbage collection tuning easier while meeting the desired application performance requirements.

  1. Minor GC collection principle: Each Minor GC should collect as many garbage objects as possible to reduce the frequency of Full GC for the application (the sketch after this list shows one way to observe this).
  2. GC memory maximization principle: When solving throughput and latency problems, the more memory the garbage collector can use, the more efficient garbage collection is and the smoother the application runs.
  3. GC tuning "two out of three" principle: Tune only two of the three performance attributes, throughput, latency, and memory usage, rather than all three.
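
One simple, hedged way to check the first two principles is to watch GC behavior while the application runs under a realistic load, either through the GC log enabled above or with the JDK's jstat tool (the PID and sampling interval below are placeholders):

```
# Print GC utilization and counters every 1000 ms for the given JVM process.
# YGC/YGCT are Minor GC count/time and FGC/FGCT are Full GC count/time; a healthy
# profile shows Minor GCs reclaiming most of Eden (the E column) and very few Full GCs.
jstat -gcutil <pid> 1000
```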

JVM tuning involves continuous configuration optimization and multiple iterations based on performance test results. Each step may have to be repeated several times before the desired system metric is met, and in some cases meeting a specific metric requires re-tuning earlier parameters, which means all the previous steps must be tested again.

In addition, tuning generally starts by meeting the memory usage requirement of the application, followed by latency and then throughput, and this sequence of steps should not be inverted.
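
To make the order concrete, here is an illustrative (not prescriptive) progression of flag changes across iterations, assuming the G1 collector; every step is re-validated with performance tests before moving to the next one:

```
# Iteration 1 - memory footprint: size the heap to the application's live data set.
java -Xms2g -Xmx2g ...
# Iteration 2 - latency: add a pause-time goal, then re-test.
java -Xms2g -Xmx2g -XX:MaxGCPauseMillis=100 ...
# Iteration 3 - throughput: trade GC time against application time, then re-test.
java -Xms2g -Xmx2g -XX:MaxGCPauseMillis=100 -XX:GCTimeRatio=19 ...
```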

[Figure: JVM tuning procedure]

For details about each tuning step, see How to Properly Plan JVM Performance Tuning.

Related Documentation

JVM monitoring

The application monitoring function of Application Real-Time Monitoring Service (ARMS) provides Java Virtual Machine (JVM) monitoring. It covers heap metrics, non-heap metrics, direct buffer metrics, memory-mapped buffer metrics, garbage collection (GC) details, and the number of JVM threads. This topic describes the JVM monitoring feature and how to monitor JVM metrics.
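
Outside of ARMS, the same categories of metrics can also be sampled in-process through the standard java.lang.management API. The sketch below only illustrates this; the class name and output format are my own:

```java
import java.lang.management.*;

// Minimal sketch: read the same metric categories (heap, buffer pools, GC, threads)
// via the standard JMX platform MXBeans.
public class JvmMetricsSketch {
    public static void main(String[] args) {
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("heap used/max: %d / %d bytes%n", heap.getUsed(), heap.getMax());

        MemoryUsage nonHeap = ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
        System.out.printf("non-heap used: %d bytes%n", nonHeap.getUsed());

        // The "direct" and "mapped" pools correspond to direct and memory-mapped buffers.
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("buffer pool %s: %d bytes in %d buffers%n",
                    pool.getName(), pool.getMemoryUsed(), pool.getCount());
        }

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("GC %s: %d collections, %d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }

        System.out.printf("live threads: %d%n", ManagementFactory.getThreadMXBean().getThreadCount());
    }
}
```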

The subscriptions are inconsistent for Message Queue

When multiple consumer instances under the same group ID are started on different JVMs with different topics configured, or with the same topic but different tags, the subscription relationships become inconsistent and message consumption does not meet expectations.

When the same group ID is used to start multiple consumer instances on different JVMs, ensure that the topics and tags configured for these consumer instances are consistent.
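
As an illustration only, using the open-source Apache RocketMQ client API rather than any specific Alibaba Cloud SDK, and with a placeholder group ID, topic, tag, and name server address, every consumer instance started under the same group ID should run the same subscribe call:

```java
import org.apache.rocketmq.client.consumer.DefaultMQPushConsumer;
import org.apache.rocketmq.client.consumer.listener.ConsumeConcurrentlyStatus;
import org.apache.rocketmq.client.consumer.listener.MessageListenerConcurrently;

// Sketch only: each instance of group "GID_example" must subscribe to the same
// topic and tag expression, no matter which JVM it is started on.
public class ConsistentConsumerSketch {
    public static void main(String[] args) throws Exception {
        DefaultMQPushConsumer consumer = new DefaultMQPushConsumer("GID_example");
        consumer.setNamesrvAddr("localhost:9876");   // placeholder address
        consumer.subscribe("TopicA", "TagA");        // identical on every instance
        consumer.registerMessageListener((MessageListenerConcurrently) (msgs, context) -> {
            msgs.forEach(msg -> System.out.println("received: " + msg.getMsgId()));
            return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
        });
        consumer.start();
    }
}
```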

Related Blog Posts

Performance Analysis of Alibaba Large-Scale Data Center

Data centers have become the standard infrastructure for supporting large-scale Internet services. As they grow in size, each upgrade to the software (e.g. JVM) or hardware (e.g. CPU) becomes costlier and riskier. Reliable performance analysis to assess the utility of a given upgrade facilitates cost reduction and construction optimization at data centers, while erroneous analysis can produce misleading data, bad decisions, and ultimately huge losses.

This article introduces the challenges and practices of performance monitoring and analysis in Alibaba's large-scale data centers.

For example, we might observe that a JVM feature affects different Java applications differently, and that the same application performs differently on different hardware. Cases like this are far from uncommon. Since it is infeasible to run tests for every application and every piece of hardware, we need a systematic approach to estimating the overall performance impact of a new feature across various applications and hardware.

Operating Principle and Implementation of Flink: Memory Management

Nowadays, open-source big data frameworks such as Hadoop, Spark, and Storm all run on the JVM, and Flink is no exception. JVM-based data analysis engines need to keep a large amount of data in memory, so they have to address the following JVM issues:

  1. Low Java object storage density. An object that contains only a boolean attribute takes as many as 16 bytes, where the object header takes 8 bytes, the attribute takes 1 byte, and the padding takes 7 bytes. Actually, one bit (1/8 byte) is enough to store the attribute.
  2. Full GC greatly affects performance, as it can take seconds or even minutes when the JVM holds a large amount of memory to process large volumes of data.
  3. The out of memory (OOM) error reduces stability. The OOM error is a common issue affecting distributed computing frameworks. If the total size of all the objects in the JVM exceeds the size of the memory allocated to the JVM, the error occurs, causing the JVM to crash. As a result, both the robustness and performance of distributed computing frameworks are affected.

Therefore, an increasing number of big data projects, such as Spark, Flink, and HBase, choose to manage JVM memory on their own, aiming to achieve performance close to that of C and to prevent OOM errors. This article introduces the measures Flink adopts to address the above-mentioned issues, including memory management, a customized serialization tool, buffer-friendly data structures and algorithms, off-heap memory, and JIT compilation optimization.
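
To make the object-density point in issue 1 concrete, here is a small, self-contained comparison. It is not Flink code, and the class names are illustrative; it simply contrasts one heap object per boolean flag with one bit per flag in a byte array:

```java
// Illustration only (not Flink's implementation): the difference in density
// between one heap object per flag and one bit per flag in a byte[].
public class DensitySketch {

    // Mirrors the article's example: an object holding a single boolean field
    // costs on the order of 16 bytes on a 64-bit HotSpot JVM (header + field + padding).
    static final class Flag {
        final boolean value;
        Flag(boolean value) { this.value = value; }
    }

    public static void main(String[] args) {
        int n = 1_000_000;

        // Object form: roughly 16 bytes per flag, plus a reference in the array.
        Flag[] objects = new Flag[n];
        for (int i = 0; i < n; i++) {
            objects[i] = new Flag(i % 2 == 0);
        }

        // Packed form: one bit per flag, n / 8 bytes in a single array.
        byte[] packed = new byte[(n + 7) / 8];
        for (int i = 0; i < n; i++) {
            if (objects[i].value) {
                packed[i / 8] |= (byte) (1 << (i % 8));
            }
        }

        // Reading a flag back from the packed form.
        int idx = 42;
        boolean flag = (packed[idx / 8] & (1 << (idx % 8))) != 0;
        System.out.println("flag[" + idx + "] = " + flag
                + ", packed size = " + packed.length + " bytes for " + n + " flags");
    }
}
```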

Related Products

Application Real-Time Monitoring Service

Application Real-Time Monitoring Service (ARMS) is an end-to-end Alibaba Cloud monitoring service for Application Performance Management (APM). You can quickly develop real-time business monitoring capabilities using the frontend monitoring, application monitoring, and custom monitoring features provided by ARMS. You can also monitor the Java Virtual Machine (JVM) with ARMS.

Elastic Compute Service

Alibaba Cloud Elastic Compute Service (ECS) provides fast memory and the latest Intel CPUs to help you power your cloud applications and achieve faster results with low latency. All ECS instances come with Anti-DDoS protection to safeguard your data and applications from DDoS and Trojan attacks.
