JVM Lies: The OutOfMemory Myth

In praise of my own pigheadedness

There are times when an OutOfMemoryError means exactly what it says. Try adding new objects to an ArrayList in a while(true) loop and you'll see what I mean.
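
A minimal, throwaway sketch of that case (class name and array size are arbitrary); run it with a small heap, e.g. -Xmx32m, and it quickly dies with an OutOfMemoryError (reported as "Java heap space" on recent Sun JVMs):

    // Throwaway illustration: every allocation stays reachable, so no amount of GC can help.
    import java.util.ArrayList;
    import java.util.List;

    public class HeapFiller {
        public static void main(String[] args) {
            List<byte[]> hoard = new ArrayList<byte[]>();
            while (true) {
                hoard.add(new byte[1024]); // retained via the list, so never collectable
            }
        }
    }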

However, there are times when it doesn't.

Recently, when I saw a vital supporting application of our system throwing an OutOfMemoryError in production, my first instinct was to increase the -Xmx switch from the existing 2GB. Let's whack on an extra gig, why not. That will give us at least 6 months until we start worrying about the logical 4GB limit of a 32-bit process's addressable space.

I expect I am not alone in having the knee-jerk reaction that any application's memory problems can be solved by cranking up the heap. I blame James Gosling, or whoever decided that the JRE 1.1 JVM's heap should default to 64M. Even at the start of my Java programming career in 1998 I remember quickly running out of heap space and needing to look up what this non-standard -Xmx switch did. Increasing this value made these problems just disappear.

However, instead of doing the obvious and increasing the -Xmx, I added extra GC debugging output (switches along the lines sketched just after the list below) and attempted to replicate the problem. We have plenty of spare memory on our hardware, so any time spent on such an obvious issue is arguably a waste: there was important business functionality I could be delivering instead of messing around with JVM switches. However, being at times more stubborn than is good for me, I insisted on understanding exactly what was going on. In particular:

  1. Why was similar behaviour not occurring in the test environment?
    I am blessed with a test environment whose hardware and data volumes are comparable to production's. A rare treat, I appreciate, but an invaluable one for situations such as this. Well, it turns out the answer to this question was straightforward: it was occurring. The flaw was with our monitoring of that environment. Abashed, I made a mental note to improve our application monitoring and moved on to question 2.
  2. Why were we running out of memory?
    Data volumes increase in the system on a monthly basis, so the answer to this question may seem self-evident. Without correct monitoring and re-tuning, our JVMs are expected to run out of memory. This isn't necessarily an architectural flaw; it's simply a matter of allocating the right amount of memory for the current data volumes. However, I had to be sure it was the heap that we were running out of.
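
The GC debugging output I refer to above is nothing exotic; it amounts to switching on Hotspot's GC logging with flags along these lines (the log file path and main class below are placeholders, and the exact set of switches isn't the point):

    # verbose GC logging on a Sun Hotspot JVM of this era; path and class name are illustrative
    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:/var/log/myapp/gc.log \
         -Xmx2048m com.example.SupportingApp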

Depending on the flavour of JVM, an OutOfMemoryError can indicate a shortage of memory in one of several areas. These broader concepts are common to generational GC algorithms across the major JVM vendors including Sun, IBM and BEA, although the specifics I refer to below relate to the Sun Hotspot GC model.

  • The first is the tenured generation. This is usually what I mean when I say "the heap". Memory is segmented into several generations; however, it is when the tenured generation is full, and cannot be expanded any further, that the JVM considers itself OutOfMemory.
  • The second is the permanent generation. This does not resize during the lifetime of the application, regardless of how much free space may exist in the rest of the heap, but remains at whatever it was originally set to (default is 64K). Should it prove too small, the JVM will throw an OOME even if there's plenty of heap left. Adding the -XX:+PrintHeapAtGC switch will tell you if this is the case.
  • The third possibility is that your operating system is out of memory, e.g. you've asked for a 2GB heap on a box with 1GB RAM and 512MB swap space (not a typical server, admittedly, but it serves as an example).
In my case I was primarily investigating which of the first two scenarios above was occurring (I knew we had enough spare memory on the box itself), so I was somewhat surprised to find out it was neither.
  • Another possibility is that native components are hogging your 4GB ceiling. Native code competes with the JVM for the 4GB of addressable space in your application. If these components are memory hungry, your app will be starved of addressable space, even if it hasn't actually used up all the heap you've given it. This may manifest itself during the workings of the Hotspot JIT compiler, which is itself a native component, as the Just In Time compiler uses some of your process's space to compile methods to native code at runtime. Should these memory requirements push the addressable space required by the process above 4GB, then you get an OOME thrown, which the 1.4 JVM logs as:

    Exception in thread "CompilerThread0" java.lang.OutOfMemoryError: requested
    Exception in thread "main" java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?

The process hadn't used all the space available to it when I saw this error -- the Java heap had plenty of room left unused. However, for addressing purposes this space was already considered consumed.

So, what to do about the above error? Increasing the heap allocation actually exacerbates this problem! It decreases the headroom the compiler, and other native components, have to play with.

So the solution to my problem was:

  1. reduce the heap allocated to the JVM.
  2. remove the memory leaks caused by native objects not being freed in a timely fashion.

Or just use a 64-bit JVM.
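
Purely by way of illustration (the heap sizes below are not our real production settings, and app.jar is a placeholder), the change amounts to something like:

    # before: a bigger heap squeezes the native/JIT headroom inside the 32-bit address space
    java -Xmx2048m -jar app.jar
    # after: a smaller heap leaves more addressable space for native allocations
    java -Xmx1536m -jar app.jar
    # or sidestep the 4GB ceiling altogether with a 64-bit JVM
    # (-d64 is the Sun switch on platforms that ship both data models)
    java -d64 -Xmx2048m -jar app.jar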

About the author

Kenneth Roper is a development team leader at a tier-1 investment bank. He is interested in applications with low-latency requirements or large memory footprints. He spends a lot of time reading garbage collection logs and snow reports.

E-mail : kenneth.roper at codingthearchitecture.com


Re: JVM Lies: The OutOfMemory Myth

You forgot to mention that you can also get an OOM when you spawn too many threads, which then eat up stack space rather than heap. I've seen that a few times, and it's easy to overlook because when you see an OOM you always assume the heap is full, not the stack.
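
A minimal sketch of that failure mode (each thread reserves its own stack outside the Java heap, so the resource you exhaust is native memory and address space rather than -Xmx); on Sun JVMs this typically surfaces as "java.lang.OutOfMemoryError: unable to create new native thread":

    // Throwaway illustration: start threads that never finish, so their stacks stay allocated.
    public class ThreadFlood {
        public static void main(String[] args) {
            while (true) {
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // park the thread forever
                        } catch (InterruptedException ignored) {
                        }
                    }
                }).start();
            }
        }
    }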

Re: JVM Lies: The OutOfMemory Myth

If you are getting an OOM around a native call including Runtime.exec(), try adding swap space.

My application uses a 3rd party library that uses Runtime.exec() to gather info about the OS. Under load, this call would generate an OOM, but wrapped in an IOException. (I had to dig to get to the real error.)

I found this thread in the Java programming forums. The following example shows the creation of a 2GB auxiliary swap file.
 
# dd if=/dev/zero of=/auxswap bs=1M count=2048
# mkswap /auxswap
# swapon /auxswap

Note that this affects both 32-bit and 64-bit JVMs. I recently upgraded to a 64-bit JVM & OS and still received the OOM. Adding the swap solved the problem for both 32-bit & 64-bit.

Re: JVM Lies: The OutOfMemory Myth

That's a really nasty one. Great post, billybobbain, for anyone running with a large heap, especially on Linux.

Re: JVM Lies: The OutOfMemory Myth

Your code does not call destroy() on the Process or clean up the file handles Process opens. See also: http://jelmer.jteam.nl/2008/01/04/too-many-open-files/
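
For anyone hitting the same thing, a rough sketch of the kind of cleanup meant (a hypothetical helper, not the original poster's code): close all three of the process's streams and call destroy() when you are finished with it:

    // Hypothetical illustration: each Process holds OS-level handles for its
    // stdin/stdout/stderr until they are closed, so close all three and call destroy().
    import java.io.Closeable;
    import java.io.IOException;

    public class ProcessCleanup {
        public static void runAndCleanUp(String[] command) throws IOException, InterruptedException {
            Process p = Runtime.getRuntime().exec(command);
            try {
                // ... normally you would drain p.getInputStream() and p.getErrorStream() here ...
                p.waitFor();
            } finally {
                close(p.getOutputStream());
                close(p.getInputStream());
                close(p.getErrorStream());
                p.destroy(); // release any remaining native resources
            }
        }

        private static void close(Closeable c) {
            try {
                if (c != null) c.close();
            } catch (IOException ignored) {
            }
        }
    }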

Re: JVM Lies: The OutOfMemory Myth

Is the String pool also responsible for this error? Since the String pool is stored in the perm space, it makes sense that as the number of strings increases the size of this pool increases, which can eventually fill the perm space. What is your opinion?
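
For instance, would a deliberately pathological sketch like this one eventually fill the perm space rather than the heap (on the Sun JVMs of this era, where interned strings live in the perm generation)?

    // Deliberately pathological: intern an endless stream of distinct strings.
    // On Sun JVMs where the interned pool lives in the perm generation this
    // should eventually fail with "java.lang.OutOfMemoryError: PermGen space".
    public class InternFlood {
        public static void main(String[] args) {
            long i = 0;
            while (true) {
                ("unique-string-" + (i++)).intern();
            }
        }
    }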

Re: JVM Lies: The OutOfMemory Myth

"default is 64K" I should hope not! ;-) Cheers, Nick.

Re: JVM Lies: The OutOfMemory Myth

Oops, quite right, that should be 64M. Thanks Nick.

Re: JVM Lies: The OutOfMemory Myth

I have seen an OOME when the GC was not keeping up with the disposal of small objects (in this case a deep stack trace that was being thrown away). The cure was -XX:+UseParallelGC. The developer didn't see the problem because they were using a different JRE, which wasn't an option for us.

Re: JVM Lies: The OutOfMemory Myth

One sign that you may have a leak in native code is when the JVM's OS process memory footprint vastly outweighs the actual heap size. We had this problem with a J2EE web app a couple of years ago. As the load increased over the months we started experiencing OOMEs which forced our admins to restart the app servers several times during the day.

As we were using WebSphere I used an API supplied with the IBM JVM that allows you to cause a heap dump (just heap info) and a core dump (heap, stack and native info) programmatically. I stuck this in a JSP which our admins could call via telnet regularly via scripts. This allowed me to take snapshots of the heap as the memory increased throughout the day (or at least until the next restart). In addition to this we turned on the verbose GC.

From the verbose GC log we found that the heap was behaving normally, expanding, sweeping and compacting at regular intervals. We could see that at the time of the OOME the heap was consuming about 70MB whereas the OS process size was over 500MB. IBM's GUI tool for visualising the heap dump and verbose GC output confirmed our suspicions.

The core dump is intended for IBM staff so we sent all our findings to them. IBM found that the leak was coming from native libraries in the Oracle OCI JDBC driver we were using to connect to our database. It turned out that we had been using a very early version of the Oracle 9 JDBC driver which had some known native leaks and other bugs. After upgrading the driver to the then latest one the problem was gone instantly.

This is just another example of an OOME not necessarily meaning that the app is poorly written or that the heap is always to blame. Cheers, Steve.
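
PS: from memory, the IBM-JVM class in question is com.ibm.jvm.Dump (the method names below are as I recall them; check your own JDK). A reflective sketch, so it still compiles on non-IBM JDKs, gives the general idea:

    // Trigger an IBM JVM heap dump reflectively so the class compiles on any JDK.
    // Class/method names from memory: com.ibm.jvm.Dump.HeapDump() (static, no arguments).
    public class DumpTrigger {
        public static void heapDump() {
            try {
                Class<?> dump = Class.forName("com.ibm.jvm.Dump");
                dump.getMethod("HeapDump").invoke(null);
            } catch (Exception e) {
                // Not an IBM JVM, or the API differs; nothing to do here.
                e.printStackTrace();
            }
        }
    }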

Re: JVM Lies: The OutOfMemory Myth

> The second is the permanent generation. This does not resize during the lifetime of the application [...]
That's not true, at least not with the Sun VM. Or what do you think the parameters -XX:PermSize and -XX:MaxPermSize are good for?

Re: JVM Lies: The OutOfMemory Myth

A fair point; that was lazy phrasing on my part. Strictly speaking I meant "the perm generation consumes up to MaxPermSize, and does not increase beyond this regardless of the usage of the tenured space".
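
For completeness, the switches in question on Sun Hotspot (the sizes shown are only examples, and app.jar is a placeholder):

    # size the perm generation explicitly and log the heap layout at each GC
    java -XX:PermSize=64m -XX:MaxPermSize=256m -XX:+PrintHeapAtGC -jar app.jar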

Re: JVM Lies: The OutOfMemory Myth

Isn't the hotspot compiled code stored in MaxPermSpace?

Re: JVM Lies: The OutOfMemory Myth

The OOM you refer to above is not actually, strictly speaking, a proper VM out-of-memory. You are right that you made it worse by increasing memory pressure (possibly because you forced the entire process image up towards some memory barrier, e.g. the classic 4GB barrier; actually even then all manner of other factors come into play, so whilst the naive idea is that 32 bits == 4GB of accessible memory, that might not be the case, and you may find that the total accessible memory is lower), but where it really comes from is the JIT.

When the JIT does a compile it needs to grab a chunk of memory for itself (known, confusingly, as swap space). Unfortunately in rare cases (typically when native libs mess with the JIT's life) you run out of this "swap" space. What the JVM then does is a little undefined (well, I am sure the JIT engineers will argue with me, but I have seen this as the last thing in the logs before some Java app goes into Helen Keller mode), but largely this results in you getting:

    Exception in thread "CompilerThread0" java.lang.OutOfMemoryError: requested
    Exception in thread "main" java.lang.OutOfMemoryError: requested 32756 bytes for ChunkPool::allocate. Out of swap space?

The clues for this one are the words CompilerThread, swap space and chunk. BTW, it's not a classic JVM OutOfMemory; hell, it's not even an exception. You can't trap it, and it does not exist as any subclass of Throwable. For the gory details see http://12.101.252.19/hotspot/xref/src/share/vm/utilities/vmError.cpp (grep for "swap space"; hint: what you see is a _string_ log message). I feel slightly sorry for you solving this one with a JIT OOM rather than a JVM OOM; I was myself left feeling slightly cheated when I discovered that "out of swap space?" is a stdout message :S

Btw, if you need a huge heap but still force native code or the JIT to explode, you can attempt to reduce the stack size, which might buy you precious extra bytes (but watch for StackOverflowError :P).

For the post mentioning native leaks: yes, if the native code is leaking on the C side the process image will be large but the JVM heap small. However, another, more fun, native leak is code holding JNI global references (these damn things pin objects so that the GC cannot touch them until the native code releases them). This is guaranteed fun, as it looks like the Java code itself is the source of your leakage. (There are even more insidious ways to leak involving NIO, but I leave that topic for another day :P)

As for the post above concerning temporary swap, that's a different issue, related to process creation on the OS (and hence giving a different error, an IOException); adding more swap does not fix a JIT explosion (even if the error message hints at this, it is referring to something different). I would recommend (many people would recommend) that the JVM never, ever lands on swap (unless you like your GC pauses to be very long); likewise, sizing a heap beyond physical RAM is simply asking for trouble. I have debugged more of these than is probably healthy :S

As a plug on the tooling front, the newer jstat, jconsole and visualgc tools are great for this, as is YourKit (which is possibly the nicest JVM profiler I have seen to date); they will basically tell you exactly where the fault is in no time at all.

Re: JVM Lies: The OutOfMemory Myth

SAP MemoryAnalyzer is also quite a nice tool for in-depth analysis of heap dumps: https://www.sdn.sap.com/irj/sdn/wiki?path=/display/Java/Java+Memory+Analysis
(and it's available for free).

Nice faked OOM built into vmError.cpp ;)

regards,
Ingo

Re: JVM Lies: The OutOfMemory Myth

Thanks for the code link, and for reinforcing the point that this is a "synthetic" exception. It could blow up all sorts of application frameworks which would otherwise deal more gracefully with a JVM OOM.

Re: JVM Lies: The OutOfMemory Myth

Great! Thanks for the info. I had a similar problem with our app and this article helped me solve an out of memory problem. Thanks. hor.ses.

Re: JVM Lies: The OutOfMemory Myth

didn't know about the sap tool, will have to have a look - Thanks :)

Re: JVM Lies: The OutOfMemory Myth

The SAP tool is great: better than jhat, with similar functions to YourKit, and it can handle the large dumps that are so normal for us.

Re: JVM Lies: The OutOfMemory Myth

2GB is the maximum for a 32-bit machine (it varies a little between machines), so reducing the heap is the correct way.

Re: JVM Lies: The OutOfMemory Myth

No, that is not correct. Solaris, for example, can address up to 4GB, and with a proper configuration more than 2GB is also possible on Linux. Windows can only address up to 1.7GB AFAIK. Regards, Ingo

Re: JVM Lies: The OutOfMemory Myth

It actually depends on the user-space / kernel-space split. For example, on Windows this is 2GB/2GB by default, giving you 2GB of user space accessible directly. On most *nix blends (e.g. Solaris / Linux) the split is typically 3GB/1GB. There are ways to set up the space with other configurations, like a 4GB/4GB split. Alternatively, if you are stuck on 32-bit and you need more memory than one heap can provide, there are a huge number of ways around the various ceilings and limits. However, generally just making your heap larger to cope with an OOM is not always the best approach; I would recommend the smallest heap you can get away with, as your GC response times become better.

Re: JVM Lies: The OutOfMemory Myth

Oh yeah, sorry, forgot: for those running servers on Windows (why?) you can change the split, see http://technet.microsoft.com/en-us/library/bb124810.aspx. However, changing the split is not always a good thing, as the kernel has less of its own space to work with. Also, on some unixes (Linux springs to mind) it is possible to have a configuration known as a 4GB/4GB split (often called hugemem by Red Hat); however, this does come with overheads in memory lookups, because technically under the hood some things wind up being bounce buffers and whatnot. Another thing that is worthwhile looking into if you have a big heap is to use large pages / hugepages (whatever your OS calls them); these are the large page size option for your CPU (typically 2MB / 4MB for x86), and they massively reduce the amount of work the OS has to do when dealing with big memory segments.
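
For the Hotspot side of the large-page suggestion, the switch is along these lines (the heap size and app.jar are illustrative, and the OS-level hugepage configuration is a separate prerequisite):

    # ask Hotspot to back the heap with large pages; configure hugepages in the OS first
    java -XX:+UseLargePages -Xmx4096m -jar app.jar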

Re: JVM Lies: The OutOfMemory Myth

You also forgot the extreme case of removing a memory DIMM (hot swap) during runtime. What might the outcome be? lol

Re: JVM Lies: The OutOfMemory Myth

Depends on the architecture, but it is sometimes not as drastic as you think .....

Re: JVM Lies: The OutOfMemory Myth

Another resource you need to consider is virtual address space. As mentioned, under 32-bit Windows this is typically 1.7GB or so. Using memory maps from Java (created via FileInputStream.getChannel().map()) effectively consumes "native virtual address space" outside the JVM's heap, so you can end up with nasty "native" OOM errors (out of "swap space") as a result if you are not careful.
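
A minimal sketch of the kind of call meant (the file path is arbitrary); the mapping occupies native virtual address space in the process even though almost nothing shows up in the Java heap:

    // Memory-mapping a large file: the mapped region occupies native virtual
    // address space outside -Xmx, which matters on a 32-bit JVM.
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class MapWholeFile {
        public static MappedByteBuffer map(String path) throws IOException {
            FileInputStream in = new FileInputStream(path);
            try {
                FileChannel channel = in.getChannel();
                // The mapping stays valid after the channel is closed, until it is GC'd.
                return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            } finally {
                in.close();
            }
        }
    }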

Regarding 64bit Sun JVM

You'd better not use it unless you really HAVE to. The concurrent garbage collector regularly dumps core under heavy load (as of b04), and the parallel collector blesses you with pauses several seconds long on 4+ GB heaps (8 cores don't help). Right now we're stuck with the parallel collector, and we cope by having a 5-10ms connect timeout and retrying against a mirror node in the cluster.

Re: JVM Lies: The OutOfMemory Myth

Excellent. Decreasing Xmx fixed my OOM error with Jintegra (Java to COM bridge).

Re: JVM Lies: The OutOfMemory Myth

There are different types of OutOfMemoryError in Java, and there are different solutions for each; some of them relate to different areas of the heap.

Re: JVM Lies: The OutOfMemory Myth

Hi guys, we have faced an OutOfMemory problem in our environment (64-bit Linux), but here the issue is quite different: the errors were not thrown in the logs, but we observed them on the app's command console. The initial heap settings were min 1024MB, max 4096MB. We then tried min 128MB, max 4096MB; in this case we didn't see the error, but the application was too slow. Then we changed both min and max to 4096MB, the issue was fixed and the app responds fast.

out of memory error at the command prompt.

Siva, that issue of getting the error at the command prompt is where the JVM is unable to get the contiguous memory required for its heap, even though there is enough memory available. Some JVMs don't require contiguous memory, but the Oracle one does. Try doing a search on "contiguous memory out of memory error" and you will see more information. Sometimes the JVM will start and sometimes it won't, but the error will be on the command line and not within the JVM console.
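
In flag terms, the settings being described are just -Xms and -Xmx; the poster's final configuration would look something like this (app.jar is a placeholder):

    # initial heap equal to maximum heap, so the JVM never grows or shrinks the heap at runtime
    java -Xms4096m -Xmx4096m -jar app.jar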
