Most businesses today run their applications on Docker. They devote considerable time, energy, and resources to stabilizing these workloads and invest heavily in advanced observability tooling. Despite this, many still see poor performance, with containers under stress during heavy traffic. The goal of this blog is to help those running Java applications in Docker containers achieve optimal performance.
A common complaint is that an application does not seem to run as well in a container as it did on a bare server. Here are some things to keep in mind when running a Java program on Docker.
In Java, we can use JVM arguments to get the best performance in Docker:
- Accessing Memory Parameters
- Garbage Collectors
- Min and Max Heap Free Ratio
By setting suitable values for the above JVM arguments, one can get the best performance out of a Java application.
Accessing Memory Parameters
To achieve good memory performance, we can override the default JVM memory parameters by passing custom values to certain flags when running our Java application:
-Xms: The -Xms flag sets the initial (minimum) size of the Java heap. It is useful in circumstances where the application needs more memory than the JVM’s default minimum.
-Xmx: Similar to -Xms, the -Xmx flag sets the maximum heap size for a Java application. It is useful when we want to deliberately restrict the amount of memory available to our application.
Please keep in mind that the -Xms value must be less than or equal to the -Xmx value. For optimal results, set it equal to the -Xmx value.
How to decide:
Suppose we have a container that has been assigned 2 GB of memory. We then set the -Xms and -Xmx values to 1280 MB. How do I know? I have my own rule of thumb, discovered through testing and experience: divide the memory actually assigned to the container by 1.6.
x / 1.6 = heap size
2048 MB / 1.6 = 1280 MB
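The rule of thumb above can be sketched as a tiny helper. The class and method names here are my own, purely for illustration:

```java
// A minimal sketch of the sizing heuristic above:
// heap size = container memory / 1.6 (names are illustrative).
public class HeapSizing {

    // Returns the suggested -Xms/-Xmx value in MB for a given container limit.
    static long heapMb(long containerMb) {
        return (long) (containerMb / 1.6);
    }

    public static void main(String[] args) {
        // A 2 GB (2048 MB) container yields a 1280 MB heap.
        System.out.println(heapMb(2048)); // prints 1280
    }
}
```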
The Runtime#maxMemory method returns the maximum amount of memory the JVM will try to use. Once memory usage approaches this level, the JVM stops allocating more memory and instead increases garbage collection frequency.
If objects still need more memory even after the garbage collector runs, the JVM may throw a java.lang.OutOfMemoryError runtime exception.
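A quick way to see these limits from inside a container is Runtime’s memory methods. This small sketch simply prints them (the class name is illustrative):

```java
// Prints the JVM heap limits as seen by the running application.
// maxMemory() reflects -Xmx (or the default derived from container memory).
public class MemoryInfo {

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max heap:   " + rt.maxMemory() / mb + " MB");   // upper bound (-Xmx)
        System.out.println("total heap: " + rt.totalMemory() / mb + " MB"); // currently reserved
        System.out.println("free heap:  " + rt.freeMemory() / mb + " MB");  // unused in reserved heap
    }
}
```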
Reference from https://www.baeldung.com/java-heap-memory-api
Garbage Collectors
There are four garbage collectors available in the JVM. Application throughput and application pause times can differ between them: throughput refers to how fast a Java application runs, while pause time refers to how long the garbage collector takes to clean up unused memory.
- Serial Garbage Collector
- Parallel Garbage Collector
- CMS Garbage collector
- G1 Garbage Collector
Here, we’ll talk about G1 garbage collector.
First, note that when the JVM performs a GC, it pauses the application, meaning the application does no work during the GC. As a result, we must be very careful about when GC starts and how long it takes.
G1 is a generational, incremental, parallel, mostly concurrent, stop-the-world, and evacuating garbage collector that monitors pause-time targets in each of its stop-the-world pauses. Like other collectors, G1 splits the heap into (virtual) young and old generations. Space-reclamation efforts concentrate on the young generation, where doing so is most efficient, with occasional space reclamation in the old generation.
To boost throughput, some operations are always performed in stop-the-world pauses. Other operations that would take longer with the application stopped, such as whole-heap operations like global marking, run in parallel and concurrently with the application.
G1 reclaims space primarily through evacuation: live objects found within selected memory areas are copied to new memory areas, compacting them in the process. After an evacuation completes, the space previously occupied by live objects is reused for allocation by the application.
The Garbage-First collector is not a real-time collector. It tries to meet set pause-time targets with high probability over a longer period, but not always with absolute certainty for a given pause.
You can explicitly enable it by providing -XX:+UseG1GC on the command line.
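To confirm which collector the JVM actually selected, you can list the garbage-collector MX beans; when run with -XX:+UseG1GC the names include "G1 Young Generation" and "G1 Old Generation". The class name below is illustrative:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Lists the garbage collectors the running JVM is using.
public class GcCheck {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```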
Min and Max Heap Free Ratio
The two most important factors affecting garbage collection performance are total available memory and the proportion of the heap dedicated to the young generation.
If you want to reduce your application’s dynamic memory footprint (the amount of RAM it uses during execution), you can do so by reducing the Java heap size. This may be needed by Java SE Embedded applications.
To minimize the Java heap size, lower the values of the command-line options -XX:MaxHeapFreeRatio (default value 70%) and -XX:MinHeapFreeRatio (default value 40%). Lowering -XX:MaxHeapFreeRatio to as low as 10% and -XX:MinHeapFreeRatio to 5% has been shown to successfully reduce heap size without too much performance degradation; however, results may vary greatly depending on your application. Try different values for these parameters until they are as low as possible while still retaining acceptable performance.
So, based on the above parameters, if we have a Docker container with 2 GB of memory, we can run the Java application with the following arguments:
java -Xms1280M -Xmx1280M -XX:+UseG1GC -XX:+UseStringDeduplication -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -jar java-application
Blog Pundit: Adeel Ahmad
Opstree is an End to End DevOps solution provider