Setting max and min JVM heap size the same? - WebSphere
For several years I have seen suggestions in several places about how "good" it is for performance to set the minimum and maximum heap size to the same value. To me this has always seemed counterintuitive, so I figured I'd post the question here, since IBM has some of the best JVM researchers in the world and WebSphere is certainly one of the most widely deployed application servers running large environments. I don't think Mythbusters would take up this challenge for their show...
The reasoning usually given to justify setting these two values to be the same is:
- The JVM will not need to resize the heap upon startup or go through the "expensive" operations like compaction.
- You can make the JVM more "predictable" by removing the chance that it will grow or shrink the heap.
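For concreteness, here is a minimal sketch (the class name is my own, hypothetical example; the flags are standard -Xms/-Xmx) of how to observe the heap bounds that equal versus unequal settings produce:

```java
// HeapBounds is a hypothetical demo class, not part of WebSphere.
// Run it two ways to compare the settings being debated:
//   java -Xms256m -Xmx256m HeapBounds   (min == max: heap sized once at startup)
//   java -Xms64m  -Xmx256m HeapBounds   (a range: the JVM may grow or shrink the heap)
public class HeapBounds {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // totalMemory(): heap currently committed; maxMemory(): the -Xmx ceiling.
        System.out.printf("committed=%d MB, max=%d MB%n",
                rt.totalMemory() >> 20, rt.maxMemory() >> 20);
    }
}
```

With equal -Xms/-Xmx the two figures typically match from startup onward; with a range, the committed figure starts near -Xms and moves under load.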
Personally, the reasons I feel this suggestion doesn't make sense are:
- You are assuming you know how much memory is appropriate for the JVM under all conditions and are not giving it any room to make adjustments dynamically.
- Often the amount of free memory the JVM attempts to keep available is set as a percentage of the min heap size. By setting this value high you are likely forcing the JVM to try to keep more free memory around than may be required.
- There are two settings for a reason. Enough research has gone into JVM technology over the years that, if keeping these two values the same really were best, we would just have a single "heap size" setting and not be able to set a range.
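On the free-memory point: HotSpot, for example, exposes -XX:MinHeapFreeRatio and -XX:MaxHeapFreeRatio (IBM JVMs have the analogous -Xminf/-Xmaxf options), and a heap pinned at -Xms == -Xmx cannot follow them. A rough sketch of the resize arithmetic (my own simplification for illustration, not the actual collector code):

```java
// FreeRatioSketch is a hypothetical illustration of the resize heuristic:
// after a GC, the collector targets a heap size that keeps at least
// minFreeRatio of the heap free above the live data.
public class FreeRatioSketch {
    static long targetHeap(long liveBytes, double minFreeRatio) {
        // free/heap >= minFreeRatio  =>  heap >= live / (1 - minFreeRatio)
        return (long) Math.ceil(liveBytes / (1.0 - minFreeRatio));
    }
    public static void main(String[] args) {
        // 300 MB of live data with the HotSpot default MinHeapFreeRatio of 40%
        System.out.println(targetHeap(300L << 20, 0.40) >> 20); // prints 500
    }
}
```

So with 300 MB of live data the collector would aim for roughly a 500 MB heap; if -Xms == -Xmx pins the heap at, say, 1 GB, that extra headroom is simply held whether or not it is ever needed.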
If it is indeed a myth that setting these values to the same value is "good", I think there are two reasons the myth may persist:
- Setting these values in a QA environment, where you have a fixed environment and a known load under test, may be good at removing the unpredictability of heap size changes. The problem is that load in production is never a known quantity; at best you are attempting to estimate the "worst case" in QA. Production load is anything but fixed, and setting these to different values would allow production to adjust dynamically.
- Although recent JVMs have come a LONG way in terms of performance, this has not always been the case. Because the earliest JVMs performed far worse than today's, there was a lot of demand to "squeeze" every last drop of performance out of the system. I think this is backed up by experience: our company does a lot of training, and we have seen a drastic drop-off in demand for "performance tuning" training with many of the latest versions of WebSphere and other servers.
Now that I've thrown the question out there and provided some ammunition for both "sides", I'd love to hear what other people think.
IBM Certified Advanced System Administrator WAS ND 6.1
Web Age Solutions