Understanding JVM/SmartFoxServer memory usage

In this article we’re going to address another frequently asked question: how SmartFoxServer uses memory and how to read the memory graph in the AdminTool’s Dashboard.

When we open the Dashboard in SFS2X’s AdminTool we find, among other things, a diagram of the heap memory used to hold the application’s object instances.

memory-diag1

At first glance the graph shows two basic parameters, the amount of used memory (blue) and the allocated memory (cyan), but there’s more. At the top of the graph there’s a “Max” value of approximately 239 MBytes, which represents the upper limit that SmartFoxServer can allocate, if needed. If our application needed to allocate, for example, 300 MBytes of data, it would incur an OutOfMemoryError, an unrecoverable exception, and fail.
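To make the failure mode concrete, here is a minimal, hypothetical Java snippet (not SFS2X code, names and the -Xmx value are just illustrative) that reproduces the scenario above: with a max heap of ~239 MBytes, a single ~300 MBytes allocation cannot succeed.

    // Illustrative only: run with -Xmx239M to simulate the limit described above
    public class HeapLimitDemo {
        public static void main(String[] args) {
            // Allocating roughly 300 MBytes in one go exceeds the max heap,
            // so the JVM throws java.lang.OutOfMemoryError: Java heap space
            byte[] data = new byte[300 * 1024 * 1024];
            System.out.println("Allocated " + data.length + " bytes");
        }
    }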

To solve this issue we need to add specific boot-time settings to make sure the JVM starts up with enough resources.

Before we discuss it, let’s recap the three main parameters of JVM heap memory:

  • Used heap: the amount of memory actually used by objects
  • Allocated heap: the amount of heap memory currently reserved by the JVM (also known as committed memory)
  • Max heap: the maximum amount that the Allocated value can reach, if and when necessary
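
For reference, the same three values plotted by the Dashboard can be read from any Java code, for example from a custom Extension, through the standard java.lang.management API. This is just a quick sketch:

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryUsage;

    public class HeapReport {
        public static void main(String[] args) {
            MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();

            // Used heap: memory actually occupied by objects (live or not yet collected)
            System.out.println("Used heap:      " + heap.getUsed() / (1024 * 1024) + " MB");

            // Allocated (committed) heap: memory currently reserved by the JVM
            System.out.println("Allocated heap: " + heap.getCommitted() / (1024 * 1024) + " MB");

            // Max heap: the ceiling the allocated heap can grow to (-Xmx)
            System.out.println("Max heap:       " + heap.getMax() / (1024 * 1024) + " MB");
        }
    }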

» JVM startup settings

When the Java Virtual Machine boots up it auto-determines a reasonable amount of memory to use based on the system’s configuration. On a Linux system with 2 GBytes of system RAM the JVM will boot with ~460MB of max memory, which is usually more than enough for many SmartFoxServer use cases. On a 16GB server machine it will boot up with a max heap size of approximately 3.5GBytes.

Generally speaking SmartFoxServer 2X uses very little memory: it can run with as little as 32 MBytes of RAM and it doesn’t need manual memory settings even when traffic is in the thousands of CCU. However, there can be situations in which more RAM is required, especially when custom server-side code is used or very high traffic needs to be sustained.

There are two main JVM parameters for setting the heap size at startup:

-Xms: specifies the initial (minimum) heap size, e.g. 512M or 1G
-Xmx: specifies the maximum heap size, using the same M/G notation.

For example the combination -Xms512M -Xmx2G indicates that the JVM will start up with 512 MBytes of allocated heap and a maximum size of 2 GBytes.

These settings can be added via the SmartFoxServer’s AdminTool > Server Configurator > JVM Settings. You can read more about adding custom JVM settings in our documentation.
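
After applying the new settings and restarting the server, a quick sanity check from your own server-side code confirms that they took effect. The following is only a sketch (shown as a plain main method for simplicity); note that the reported max value is often slightly lower than the -Xmx figure due to internal JVM accounting.

    public class JvmSettingsCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();

            // With -Xms512M -Xmx2G these should report roughly 512 MB and 2048 MB
            System.out.println("Allocated heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
            System.out.println("Max heap:       " + rt.maxMemory() / (1024 * 1024) + " MB");
        }
    }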

» Common questions

On multiple occasions we have been contacted by developers who were monitoring their server and were worried that heap memory seemed to be running out quickly, such as in this screenshot:

memory-diag2

At first glance it looks like the used heap is reaching the limit, but… if you just hold on for a moment and look at what happens a few minutes later…

memory-diag3

The memory management inside the JVM is very smart but works in a non-deterministic way. The used memory was growing fast in the first diagram because a number of unused objects were still held in the heap. In the second screenshot we see the garbage collector (GC) cleaning up the used memory, while at the same time the allocated heap is pushed up a bit, from ~95 MBytes to ~140 MBytes, which is still way below the limit of 466 MBytes shown at the top of the graph.

The moral of the story is that the JVM memory management is difficult to predict for an external onlooker and usually there is no need to worry if the used and allocated memory appear to converge.

» When to intervene

At this point one may be asking: if memory management is non-deterministic and the behavior of the GC is unpredictable, how do we know when it’s time to fine-tune the JVM’s memory?

Typically the red flag is triggered by the allocated memory being pushed to the limit (allocated heap size == max heap size) and by frequent and small peaks and troughs of the used memory, indicating that the GC is very busy.
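
If you want a rough, programmatic way to quantify how busy the collector is, the standard GarbageCollectorMXBean exposes cumulative GC counts and times. This is a generic JMX sketch (not something the Dashboard itself exposes); sampling it at intervals reveals patterns like the one discussed below.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcActivity {
        public static void main(String[] args) {
            // Cumulative figures since JVM startup; sample them periodically
            // to estimate how often the GC runs in a given time window
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName()
                        + " - collections: " + gc.getCollectionCount()
                        + ", total time: " + gc.getCollectionTime() + " ms");
            }
        }
    }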

Here’s an example of a diagram that suggests manual intervention is in order:

memory-diag4

We forced the server to run with only ~40 MBytes of RAM, which in turn provides a max heap size of 37 MBytes. We can see that the allocated memory is already maxed out and the many peaks and troughs indicate very frequent garbage collections. In only 3 minutes of activity the GC was triggered more than 20 times.

This is definitely a case where improved heap settings will help. If we run the same test with these JVM settings: -Xms512M -Xmx1G

memory-diag5

we can see an entirely different picture, with plenty of free heap in the allocated range and much less frequent GC activity in the same time span.

» Too much memory can also be bad

Before we wrap this article up, we would also recommend not getting carried away with extra-large amounts of memory. We have seen setups where the JVM was assigned 12, 16 or even more GBytes of RAM and, unless there’s a very specific reason for this, we don’t recommend it.

Throwing king-size amounts of heap memory at the JVM with the purpose of minimizing GC activity is counterproductive: it leads to very long GC pauses that can grind the server to a halt and make it unresponsive for too long.

This means that all incoming traffic needs to be buffered during the pause, and when the GC cycle ends there will be huge spikes of load, resulting in more network spikes etc… Not the behavior you would want from your server.

If you really need a very large heap size (> 8 GBytes) we recommend also fine-tuning the garbage collector, for example by employing a concurrent GC to avoid stopping the server’s active threads.

» Learn more

Here are a few relevant articles if you’re interested in learning more about fine-tuning the JVM: