The 2025 JVM Heap Cheat Sheet: 7 Essential Settings
Tired of mysterious pauses and OOM errors? Our 2025 JVM Heap Cheat Sheet breaks down the 7 essential settings you need for a high-performance Java app.
Alex Volkov
Principal Software Engineer specializing in JVM performance and large-scale distributed systems.
Let's be honest. In 2025, the Java Virtual Machine is a marvel of engineering. With garbage collectors like G1, ZGC, and Shenandoah, the JVM is smarter and more self-sufficient than ever. It feels like we should just be able to write our code, press play, and let the magic happen. And for many simple applications, that's almost true.
But when you're running at scale—handling thousands of requests per second, managing gigabytes of data in memory, and promising your users a snappy, responsive experience—the default settings are rarely optimal. The difference between a stable, high-performance application and one that suffers from mysterious pauses and dreaded OutOfMemoryError crashes often comes down to a few critical JVM flags.
This isn't about memorizing hundreds of obscure options. It's about understanding the key levers that control your application's memory and responsiveness. Here is your essential cheat sheet for the seven JVM heap and GC settings you absolutely need to know in 2025.
1. The Foundation: -Xms and -Xmx
Let's start with the absolute basics, the two flags you've probably seen a thousand times: -Xms (initial heap size) and -Xmx (maximum heap size). While they seem simple, how you set them has a profound impact.
The Cheat: For any serious server-side application, set them to the same value.
java -Xms4g -Xmx4g -jar my-app.jar
Why? When the initial and maximum heap sizes are different, the JVM has to perform heap resizing. When the application needs more memory than is currently committed (but less than the max), the JVM pauses the application to request more memory from the operating system. Conversely, if the heap is underutilized, the JVM might shrink it to return memory to the OS, which also causes a pause. These resizing pauses can be surprisingly disruptive.
By setting -Xms and -Xmx to the same value, you tell the JVM to grab the entire memory block at startup. This eliminates resizing pauses and gives you a more predictable performance profile from the get-go. Think of it as building your house on a pre-poured, fixed-size foundation instead of one that expands and contracts on demand.
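One quick way to confirm that your sizing flags actually took effect (wrapper scripts and container images sometimes override them) is to ask the Runtime API from inside the process. A minimal sketch, with a hypothetical class name, assuming the -Xms4g -Xmx4g example above:

public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        // With -Xms and -Xmx pinned to the same value, committed and max
        // should report (roughly) the same number right from startup.
        System.out.printf("Max heap:  %d MB%n", rt.maxMemory() / mb);
        System.out.printf("Committed: %d MB%n", rt.totalMemory() / mb);
        System.out.printf("Free:      %d MB%n", rt.freeMemory() / mb);
    }
}

Note that maxMemory() can report slightly less than the -Xmx value you set, because some collectors exclude a survivor space from the figure.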
2. The Engine Room: Choosing Your Garbage Collector
This is one of the most important decisions you'll make. The default, G1 (Garbage-First), is a fantastic all-rounder, but it's not the only option. For applications where low latency is king, ZGC (the Z Garbage Collector) is a game-changer.
- -XX:+UseG1GC: The battle-tested default. It balances throughput and latency, making it a great choice for most applications. It aims for predictable pause times by dividing the heap into regions and collecting garbage from the most promising ones first.
- -XX:+UseZGC: The low-latency champion. ZGC is designed for huge heaps (terabytes!) and ridiculously low pause times—typically under a millisecond. It achieves this by doing almost all its work concurrently, while your application threads are still running. The trade-off is slightly higher CPU usage and memory overhead.
So, G1 or ZGC?
Your choice depends entirely on your application's needs. Here's a simple breakdown:
| Feature | G1 Garbage Collector | Z Garbage Collector (ZGC) |
| --- | --- | --- |
| Primary Goal | Excellent Throughput & Predictability | Extremely Low Pause Times (<1ms) |
| Typical Pauses | Tens to hundreds of milliseconds | Sub-millisecond |
| Best For | Most general-purpose server applications, API backends, batch processing. | Latency-sensitive services like financial trading, ad bidding, or interactive APIs. |
If you don't know what you need, start with the default G1. If you then discover that GC pause times are your primary bottleneck, give ZGC a try.
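If you ever need to confirm which collector a running service actually ended up with (startup scripts, base images, and JVM ergonomics can all surprise you), the standard GarbageCollectorMXBean API will tell you. A minimal sketch; the exact bean names (for example "G1 Young Generation" or "ZGC Pauses") vary by collector and JDK version, so treat them as informational:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcCheck {
    public static void main(String[] args) {
        // Each registered collector shows up as its own MXBean.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println("Collector: " + gc.getName());
        }
    }
}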
3. Setting Your Goal: -XX:MaxGCPauseMillis
This flag is your way of communicating your performance goals to the garbage collector. It doesn't set a hard limit, but rather a target for the GC to aim for.
java -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -jar my-app.jar
How it works: You're telling the GC, "Hey, I'd really appreciate it if you tried to keep your stop-the-world pauses under 200 milliseconds." The GC will then adjust its heuristics—like how much work it does in each cycle—to try and meet that goal. A lower target might mean more frequent, shorter GC cycles (potentially higher CPU usage), while a higher target gives the GC more breathing room.
This is an essential tuning knob for G1 (the Parallel collector honors it too); fully concurrent collectors like ZGC and Shenandoah keep pauses low by design and largely ignore this target. Don't set it unrealistically low; start with a reasonable goal (e.g., 200-500ms) and monitor the results.
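GC logs (see setting 7) are the authoritative source for pause times, but the same MXBean API offers a rough in-process sanity check: cumulative collection time divided by collection count gives an approximate average per cycle to compare against your goal. A sketch, assuming a hypothetical 200 ms target; getCollectionTime() is an approximation and, for concurrent collectors, is not a pure pause-time measurement:

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcPauseEstimate {
    // Call this after the application has been under realistic load for a while.
    public static void report(long targetMillis) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();
            long totalMs = gc.getCollectionTime(); // cumulative, approximate
            if (count > 0) {
                System.out.printf("%s: ~%d ms per collection on average (target %d ms)%n",
                        gc.getName(), totalMs / count, targetMillis);
            }
        }
    }

    public static void main(String[] args) {
        report(200); // matches -XX:MaxGCPauseMillis=200 in the example above
    }
}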
4. The Proactive Trigger (for G1): -XX:InitiatingHeapOccupancyPercent
This one sounds complicated, but it's a powerful lever for optimizing G1. The "IHOP" setting tells G1 when to kick off a concurrent marking cycle.
The Cheat: The default is 45 (meaning the concurrent cycle starts when 45% of the heap is occupied; modern G1 then adapts this threshold at runtime, using your value as the starting point). If you are seeing frequent full GCs or "Evacuation Failures" in your logs, your GC isn't starting early enough. Try lowering this value to 30 or 35 to be more proactive.
java -XX:+UseG1GC -XX:InitiatingHeapOccupancyPercent=35 -jar my-app.jar
An "Evacuation Failure" happens when G1 can't find enough free regions to move objects into during a young collection, forcing it into a more disruptive, slower cleanup. By starting the marking cycle earlier, you give G1 a head start on freeing up old generation regions before things get critical.
5. Beyond the Heap: -XX:MetaspaceSize & -XX:MaxMetaspaceSize
The heap is for your objects, but where does the JVM store class metadata, like method definitions and bytecode? That goes into a separate memory area called Metaspace. And yes, it can also run out of memory.
By default, Metaspace is unbounded, but the JVM can be slow to resize it, leading to pauses. For applications that load a lot of classes (hello, Spring Boot and Hibernate!), it's wise to give the JVM a hint.
java -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m -jar my-app.jar
MetaspaceSize acts as the initial "high-water mark": the first time Metaspace usage crosses it, the JVM triggers a GC to unload dead classes and then raises the mark. Setting it close to your application's real steady-state footprint avoids a cascade of induced GCs and resizes during startup, while MaxMetaspaceSize puts a hard ceiling on runaway class loading (for example, from a classloader leak).
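When picking values, it helps to measure how many classes your application actually loads and how much Metaspace they consume at steady state. A small sketch using the standard management APIs; the pool names ("Metaspace", "Compressed Class Space") are HotSpot conventions and are assumed here:

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceCheck {
    public static void main(String[] args) {
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        System.out.printf("Loaded classes: %d%n", classes.getLoadedClassCount());

        // Metaspace shows up among the non-heap memory pools.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getName().contains("Metaspace") || pool.getName().contains("Class Space")) {
                System.out.printf("%s: %d MB used%n",
                        pool.getName(), pool.getUsage().getUsed() / (1024 * 1024));
            }
        }
    }
}

Run this after the application has warmed up, then size MetaspaceSize with a comfortable margin above the observed steady-state usage.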
6. The Black Box Recorder: -XX:+HeapDumpOnOutOfMemoryError
This isn't a performance tuning flag; it's an essential diagnostic tool. It's your application's flight recorder. When your application inevitably crashes with an OutOfMemoryError, this flag tells the JVM to write a complete snapshot of the heap to a file.
java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/logs/my-app-dumps/ -jar my-app.jar
Why this is non-negotiable: An OutOfMemoryError without a heap dump is a mystery. You know you ran out of memory, but you have no idea why. Was it a million tiny objects? One giant byte array? A session that never expired? A heap dump file, when analyzed with tools like Eclipse MAT or VisualVM, gives you the exact answer. You can see which objects are consuming memory and what is holding references to them.
Always have this enabled in production. The performance overhead is zero until the OOM actually occurs.
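The same dump format can also be produced on demand, without waiting for a crash, which is handy for comparing a healthy snapshot against a sick one. A sketch using the HotSpot-specific HotSpotDiagnosticMXBean (not available on every JVM); the output path is hypothetical and just mirrors the example above:

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void dump(String path) throws Exception {
        HotSpotDiagnosticMXBean diag =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // "live = true" dumps only reachable objects, which keeps the file smaller.
        diag.dumpHeap(path, true);
    }

    public static void main(String[] args) throws Exception {
        dump("/var/logs/my-app-dumps/manual.hprof");
    }
}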
7. Your Eyes and Ears: -Xlog:gc*
How do you know if your tuning is working? You need to see what the GC is actually doing. The old GC logging flags are deprecated. The modern, unified logging framework is the way to go.
java -Xlog:gc*:file=./logs/gc.log:time,uptimemillis,level,tags:filecount=5,filesize=10m -jar my-app.jar
This command is a great starting point. It tells the JVM to:
- Log all topics with the gc tag (gc*).
- Write to a file named gc.log in the ./logs directory.
- Include useful decorations like timestamps and uptime.
- Rotate the logs, keeping 5 files of 10MB each.
With these logs, you can see pause durations, memory usage before and after collections, and any error events. This data is the ground truth that validates your tuning decisions.
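If you also want to react to GC events from inside the application (for example, to feed a metrics system), the collector MXBeans emit JMX notifications on HotSpot. A sketch using the com.sun.management notification API; this complements the log file rather than replacing it:

import com.sun.management.GarbageCollectionNotificationInfo;
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import javax.management.NotificationEmitter;
import javax.management.openmbean.CompositeData;

public class GcListener {
    public static void install() {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // On HotSpot, each collector bean is also a NotificationEmitter.
            ((NotificationEmitter) gc).addNotificationListener((notification, handback) -> {
                if (GarbageCollectionNotificationInfo.GARBAGE_COLLECTION_NOTIFICATION
                        .equals(notification.getType())) {
                    GarbageCollectionNotificationInfo info = GarbageCollectionNotificationInfo
                            .from((CompositeData) notification.getUserData());
                    System.out.printf("[GC] %s (%s): %d ms%n",
                            info.getGcName(), info.getGcAction(), info.getGcInfo().getDuration());
                }
            }, null, null);
        }
    }
}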
Conclusion: Measure, Tune, Repeat
The JVM is a powerful platform, and mastering its memory management is a key skill for any serious Java developer. This cheat sheet isn't a set of magic bullets, but a starting point for intelligent tuning.
The real secret to performance is a continuous feedback loop: set a baseline with these essential flags, monitor your application's behavior using GC logs and APM tools, and then make small, informed adjustments. In the world of performance tuning, data always wins. Now go make your applications fly!