
Container Aware - running java applications inside containers

In this post, we will discuss resource management for containerized Java applications.

Sumit Gaur

With more and more organizations adopting containers as their preferred mode of deploying packages, it is more essential than ever to understand the inner workings of containerized execution environments, especially how resources like memory and CPU are policed inside a container.

As we know, a container is an isolated process running on a host machine. We can set the amount of resources available to it using host operating system features like namespace isolation and control groups (also known as cgroups). For example:

# run an OpenJDK container and limit the allocated memory to 512MB and CPUs to 1
docker run -it --name java9b139 --rm -m 512M --cpus 1 openjdk:9-b139-jdk

The command above starts an OpenJDK container, limits the available memory to 512MB (the -m 512M flag) and the CPUs to one (the --cpus 1 flag), and attaches an interactive pseudo-terminal to the container's STDIN and STDOUT (the -it flags); the image drops us into a shell as root. Now, if you run the free command from within the container, how much memory do you expect to see? Unfortunately, it will not be 512MB, if that's what you guessed.

# on a machine with 8GB of memory
root@19922d307399:/# free -m
             total       used       free     shared    buffers     cached
Mem:          6078       1297       4781          0         36        911
-/+ buffers/cache:        349       5729
Swap:         2048          0       2048
root@19922d307399:/#

Problem Description

This mismatch in the numbers is because most of the tools that report resource details (free, top, etc.) predate the control groups feature: they read host-wide values from /proc, not the container's cgroup limits. Due to this, any process running inside the container sees the entire host memory as available to itself.

Now consider a Java application running inside a container with a memory limit of 512MB, on a host machine with 8GB of memory. As the JVM is not aware of the memory restrictions applied to the container, in the absence of any external tuning parameters it will use approximately \(\frac{1}{4}\) of the available memory for the heap space, which comes out to roughly 1.5-2GB. We can verify this using the -XX:+PrintFlagsFinal flag:

root@2dff18778cf3:/# java -XX:+PrintFlagsFinal -version | grep MaxHeap
    uintx MaxHeapFreeRatio                         = 70                                    {manageable} {default}
   size_t MaxHeapSize                              = 1593835520                               {product} {ergonomic}
openjdk version "9-Debian"
OpenJDK Runtime Environment (build 9-Debian+0-9b139-1)
OpenJDK 64-Bit Server VM (build 9-Debian+0-9b139-1, mixed mode)
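We can sanity-check this figure against the host memory that free reported earlier (6078MB total): the ergonomic default heap comes out to almost exactly a quarter of it.

\[
1593835520 \text{ bytes} = 1520 \text{ MB} \approx \frac{6078 \text{ MB}}{4}
\]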

But due to the control group restrictions, as soon as the memory usage goes beyond 512MB, the kernel's OOM killer will terminate the process (Docker reports this as an OOM kill). The only way out of this situation is to supply the -Xmx parameter while running the application, directing the JVM not to grow the heap beyond the specified limit.
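To see what limit the JVM actually settled on, the application can ask the runtime directly at startup. A minimal sketch (the class name HeapCheck is ours, not from the post), assuming only the standard Runtime API:

```java
// HeapCheck.java - prints the effective heap limit the JVM settled on.
// Runtime.maxMemory() reflects -Xmx (or the ergonomic default when -Xmx
// is absent), so logging it at startup is a cheap way to spot a heap
// that was sized from host memory instead of the container limit.
public class HeapCheck {
    public static void main(String[] args) {
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}
```

Run inside the 512MB container on an old JVM, this would report a heap far above the cgroup limit; launching with an explicit flag (for example, java -Xmx256m HeapCheck) keeps it safely inside.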

The same issue applies to the CPU count. As you can see, we started the container with the number of available CPUs set to one (via the --cpus 1 flag). When checked from within the container, the JVM can still see all 8 CPUs of the host:

root@2e711fd02c1d:/# jshell
Apr 11, 2021 12:38:04 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
|  Welcome to JShell -- Version 9-Debian
|  For an introduction type: /help intro


jshell> Runtime.getRuntime().availableProcessors();
$1 ==> 8

jshell>
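This wrong count is not just cosmetic: the JVM sizes GC threads and the common fork/join pool from it, and application code routinely does the same when building thread pools. A minimal sketch of that common idiom (the pool here is illustrative, not from the post):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolSizing {
    public static void main(String[] args) {
        // Common idiom: size a worker pool from the reported CPU count.
        // Inside a --cpus 1 container, an older JVM reports the host's
        // count (8 above), so this pool ends up 8x oversized for the
        // CPU quota the container is actually allowed to use.
        int cpus = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cpus);
        System.out.println("Sized pool for " + cpus + " CPUs");
        pool.shutdown();
    }
}
```

An oversized pool means more context switching and threads fighting over a throttled CPU quota, which shows up as latency rather than an obvious crash.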

Container Aware JVM - to the rescue

What if the JVM could identify the runtime environment (containerized or not) and tune its default parameters accordingly? That would help developers focus on the actual problem and minimize surprises when running containerized Java applications.

As part of the JDK 10 enhancements (later backported to Java 8, from update 8u191 onwards), container detection is now built into the JVM. It can correctly detect the execution environment and tune the default values accordingly. Consider the following examples:

$ docker run -it --name java10 --rm -m 512M --cpus 1 openjdk:10.0 sh
# java -XX:+PrintFlagsFinal -version | grep MaxHeap
    uintx MaxHeapFreeRatio                         = 70                                    {manageable} {default}
   size_t MaxHeapSize                              = 134217728                                {product} {ergonomic}
openjdk version "10.0.2" 2018-07-17
OpenJDK Runtime Environment (build 10.0.2+13-Debian-2)
OpenJDK 64-Bit Server VM (build 10.0.2+13-Debian-2, mixed mode)

As you can see in the output, the heap size is now restricted to 128MB (134217728 bytes, exactly 1/4 of 512MB) without any JVM tuning parameters. The virtual machine detected the containerized runtime environment and correctly applied the control group restrictions. The same is the case with the number of CPUs:

$ docker run -it --name java10 --rm -m 512M --cpus 1 openjdk:10.0
Apr 11, 2021 1:05:45 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
|  Welcome to JShell -- Version 10.0.2
|  For an introduction type: /help intro

jshell> Runtime.getRuntime().availableProcessors();
$1 ==> 1

jshell>

The number of CPUs available to the JVM is now correctly mapped to the single-CPU limit passed at container start. As a rule of thumb, we should still supply tuning parameters when starting a containerized Java process (especially when working with old versions of Java), but having default values aligned with the setup is of great help.

This is all for this post. If you want to share any feedback, please drop me an email, or contact me on any of the social platforms mentioned in the header. I'll try to respond at the earliest.

Keep learning, Keep Coding.


Sumit Gaur

A passionate Java developer and open-source advocate who understands the value of continuous learning and sharing knowledge.