2025 Guide: Debug Your Endless Compressed Class Space
Tired of the dreaded 'OutOfMemoryError: Compressed class space'? Our 2025 guide provides a step-by-step strategy to debug and fix endless class loading issues.
Adrian Ivanov
Senior JVM performance engineer specializing in memory management and production debugging.
There are few errors more frustrating for a Java developer than the infamous java.lang.OutOfMemoryError. But when it's followed by : Compressed class space, the frustration can turn to confusion. Your heap looks fine, so what on earth is filling up? If you've found yourself staring at this error, wondering why this memory space seems to be growing endlessly, you're in the right place. This is your 2025 guide to understanding, debugging, and finally conquering the compressed class space beast.
What is Compressed Class Space, Anyway?
Before we can fix the problem, we need to understand the territory. The Compressed Class Space isn't part of your Java heap (the -Xmx part). It's a specific region in the JVM's native memory, right alongside Metaspace.
Its existence is a clever optimization for 64-bit systems. In a 64-bit JVM, memory addresses are—you guessed it—64 bits long. But what if we could use smaller, 32-bit pointers for class metadata? This feature, known as Compressed Class Pointers (a close companion of Compressed Oops), does exactly that. It saves a significant amount of memory, reduces pressure on the CPU cache, and generally boosts performance.
The JVM reserves a dedicated chunk of native memory for this. By default, this space has a maximum size of 1 GB. When the metadata for all your loaded classes exceeds that limit, the JVM throws OutOfMemoryError: Compressed class space, and your application typically goes down with it.
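You can check both the limit and whether compressed class pointers are enabled on your own JVM by printing the final flag values (the exact flag list and defaults vary by JDK version and platform):
java -XX:+PrintFlagsFinal -version | grep -E 'UseCompressedClassPointers|CompressedClassSpaceSize'
On most 64-bit JDKs you should see CompressedClassSpaceSize at 1073741824 bytes, which is the 1 GB ceiling described above.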
Why is it Filling Up Endlessly? Common Culprits
An application that runs fine for an hour but then crashes with this error isn't just using a lot of classes—it's likely leaking them. An "endless" growth points to a situation where classes are being loaded continuously but never unloaded. Here are the usual suspects:
- ClassLoader Leaks: This is the number one cause. It happens when a ClassLoader is no longer needed (e.g., after a web app redeployment) but is still referenced by something that won't be garbage collected. Since the ClassLoader holds references to all the classes it loaded, none of them can be unloaded, and they remain stuck in the compressed class space forever (a concrete sketch of this pattern follows this list).
- Dynamic Class Generation: Modern frameworks are built on magic, and that magic is often runtime class generation. Libraries like Spring (AOP proxies), Hibernate (entity proxies), and mocking tools (Mockito, EasyMock) use libraries like CGLIB or ByteBuddy to create new classes on the fly. A misconfiguration or a bug can cause these frameworks to generate thousands of unique, single-use classes instead of reusing them.
- Heavy Lambda Usage (with a twist): While lambdas are efficient, in certain scenarios (like reflection-based serialization of lambdas), the JVM might generate new "bridge" classes. In most applications, this is negligible, but in extreme cases, it can contribute to the bloat.
- Scripting Engines: Using engines like GraalVM's Polyglot API or older scripting engines can lead to class generation for each script evaluation if not managed carefully.
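To make the ClassLoader-leak pattern concrete, here is a deliberately broken sketch. Everything in it (the registry, the plugin class name) is hypothetical, but the shape is what typically sits behind a redeployment leak:

import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class PluginHost {
    // Hypothetical application-wide registry. Because it is static, everything
    // it references stays reachable for the lifetime of the JVM.
    private static final List<Object> REGISTRY = new ArrayList<>();

    public static void deploy(URL pluginJar) throws Exception {
        // Each deployment gets its own class loader, as an app server or plugin
        // system would do.
        URLClassLoader loader = new URLClassLoader(new URL[] {pluginJar});
        Class<?> pluginClass = loader.loadClass("com.example.Plugin"); // hypothetical class
        Object plugin = pluginClass.getDeclaredConstructor().newInstance();

        // The leak: the instance keeps its Class alive, the Class keeps its
        // ClassLoader alive, and the ClassLoader keeps every class it loaded
        // alive -- so nothing from this deployment ever leaves the compressed
        // class space, even after an "undeploy".
        REGISTRY.add(plugin);
    }
}

The same shape shows up with caches keyed by classes, JMX registrations, shutdown hooks, and ThreadLocals populated by threads that outlive the deployment.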
Your 2025 Debugging Toolkit
To hunt down these culprits, you need the right tools. Fortunately, the modern JDK is packed with them, and the external ecosystem is even better.
- JVM Flags: Your first line of defense. Use -Xlog:class+load=info and -Xlog:class+unload=info to get a real-time feed of every class being loaded and unloaded. Seeing lots of loading but no unloading is your first major clue.
- JDK Command-Line Tools:
  - jcmd <pid> VM.class_hierarchy: Dumps the entire class hierarchy, which can be overwhelming but useful for spotting anomalies.
  - jcmd <pid> GC.class_histogram: Gives you a list of all loaded classes, their instance counts, and the total memory they occupy. Running this periodically and diffing the results is a powerful way to see what's growing.
  - jstat -class <pid>: A lightweight way to monitor the number of loaded classes and the bytes they consume in real time.
- Heap Dump Analyzers: The ultimate weapon. Eclipse Memory Analyzer (MAT) is the undisputed champion here. It can take a multi-gigabyte heap dump and, in minutes, point you directly to the cause of a ClassLoader leak.
- Profilers: Tools like VisualVM (which comes with the JDK), JProfiler, and YourKit provide a visual interface to inspect loaded classes and track their growth over time.
A Step-by-Step Debugging Strategy
Alright, let's get our hands dirty. Here’s a methodical approach to finding and fixing the leak.
Step 1: Enable Observability
Restart your application with class logging enabled. This is non-negotiable. Add this flag to your JVM startup options:
-Xlog:class+load=info,class+unload=info:file=class_activity.log
This will log all class loading and unloading activity to a file named class_activity.log. Let the application run until it's in the state where memory is growing.
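Once the log has collected a few minutes of data, a rough tally of load versus unload events makes the imbalance obvious. Assuming the default log decorations (which include the tag set on every line), a simple count works:
grep -c 'class,load' class_activity.log
grep -c 'class,unload' class_activity.log
If the first number keeps climbing across checks while the second barely moves, classes are piling up.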
Step 2: Monitor and Confirm
While the app is running, use jstat to watch the class count grow. Open a terminal and run:
jstat -class <YOUR_PID> 10s
This will print the number of loaded classes and their size every 10 seconds. If you see the Loaded column climbing steadily without ever decreasing, you've confirmed the leak.
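If attaching jstat isn't convenient (containers, locked-down hosts), the same numbers are available from inside the JVM via the standard management beans. This is a minimal sketch you could wire into the application at startup; the pool name assumes a HotSpot JVM with compressed class pointers enabled:

import java.lang.management.ClassLoadingMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

/** In-process alternative to jstat: call start() once at application startup. */
public final class ClassSpaceWatcher {

    public static void start() {
        Thread t = new Thread(ClassSpaceWatcher::loop, "class-space-watcher");
        t.setDaemon(true); // never keep the JVM alive just for monitoring
        t.start();
    }

    private static void loop() {
        ClassLoadingMXBean classes = ManagementFactory.getClassLoadingMXBean();
        while (true) {
            // A steadily climbing loaded count with a flat unloaded count is the
            // same "confirmed leak" signal the jstat Loaded column gives you.
            System.out.printf("classes loaded=%d unloaded=%d%n",
                    classes.getLoadedClassCount(), classes.getUnloadedClassCount());
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                // HotSpot exposes the space itself as a memory pool with this name
                // (only present when compressed class pointers are enabled).
                if ("Compressed Class Space".equals(pool.getName())) {
                    System.out.printf("compressed class space used=%d MB max=%d MB%n",
                            pool.getUsage().getUsed() >> 20, pool.getUsage().getMax() >> 20);
                }
            }
            try {
                Thread.sleep(10_000); // every 10 seconds, matching the jstat interval above
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    private ClassSpaceWatcher() {}
}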
Step 3: Analyze the Class Histogram
Now, let's find out what is growing. Use jcmd to get a class histogram. Take one snapshot now, and another one in 10-15 minutes.
jcmd <YOUR_PID> GC.class_histogram > histogram_before.txt
...wait 15 minutes...
jcmd <YOUR_PID> GC.class_histogram > histogram_after.txt
Now, use a diff tool (like diff, WinMerge, or Beyond Compare) to compare histogram_before.txt and histogram_after.txt. You are looking for classes that have appeared or dramatically increased in number. Often, you'll see a pattern, like thousands of classes with names like com.mycompany.MyService$$EnhancerByCGLIB$$a1b2c3d4.
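If the raw diff is noisy, counting a suspicious naming pattern in each snapshot is a quick sanity check (the CGLIB-style name here is just a stand-in for whatever pattern your diff surfaces):
grep -c 'EnhancerByCGLIB' histogram_before.txt
grep -c 'EnhancerByCGLIB' histogram_after.txt
A count that grows by thousands between snapshots tells you which generated-class family to chase.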
Step 4: Capture the Heap Dump
Once you've confirmed the leak and have an idea of the classes involved, it's time to bring out the heavy artillery. Capture a heap dump when the memory usage is high, but before the application crashes.
jcmd <YOUR_PID> GC.heap_dump -all filename.hprof
Note: This will pause your application, and the resulting file can be very large. Do this in a controlled environment.
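If disk space is tight, newer JDKs can compress the dump as it is written; the -gz option (added to GC.heap_dump around JDK 15, taking a compression level from 1 to 9) is worth checking for on your version:
jcmd <YOUR_PID> GC.heap_dump -all -gz=1 filename.hprof.gz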
Step 5: Unmask the Leak with MAT
Open the generated filename.hprof file in Eclipse MAT. The moment it's done parsing, it will likely present you with a Leak Suspects report, which very often points directly to the problem.
If the report isn't conclusive, you can manually investigate:
- Find the Leaking Classes: Open the Dominator Tree view and group by ClassLoader. You should see one or more ClassLoaders holding onto an unusually large amount of heap.
- Find the GC Root: Right-click on the suspect ClassLoader and select "Path to GC Roots" (excluding weak/soft references). This will show you the chain of objects that is keeping your ClassLoader alive. It might be a static field, a running thread that hasn't been shut down, or a thread-local variable.
This chain is your smoking gun. It tells you exactly what part of your code needs to be fixed to allow the ClassLoader to be garbage collected.
The Fix: Choosing Between a Band-Aid and a Cure
Once you've identified the root cause, you have two paths forward. One is a temporary fix, and the other is the permanent solution.
The quick fix or "band-aid" is to simply give the JVM more compressed class space. You can do this with the -XX:CompressedClassSpaceSize flag:
-XX:CompressedClassSpaceSize=2g
This buys you time, but it doesn't solve the underlying leak. Your application's native memory footprint will be larger, HotSpot caps the flag at 3 GB anyway, and the OOM error will eventually return.
The real fix or "cure" is to address the root cause you found in your analysis. This means changing your application code or configuration.
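To make that concrete for the ClassLoader-leak sketch from earlier, the cure is usually an undeploy path that actually releases everything. A minimal sketch with the same hypothetical names (the record syntax requires Java 16 or later):

import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class PluginHost {
    private static final List<Deployment> DEPLOYMENTS = new ArrayList<>();

    // Track the loader together with the instance so both can be released later.
    private record Deployment(URLClassLoader loader, Object plugin) {}

    public static void deploy(URL pluginJar) throws Exception {
        URLClassLoader loader = new URLClassLoader(new URL[] {pluginJar});
        Class<?> pluginClass = loader.loadClass("com.example.Plugin"); // hypothetical class
        DEPLOYMENTS.add(new Deployment(loader, pluginClass.getDeclaredConstructor().newInstance()));
    }

    public static void undeployAll() throws IOException {
        for (Deployment d : DEPLOYMENTS) {
            // Also stop any threads, timers, JMX registrations, or ThreadLocals the
            // plugin created -- anything that could still reference its classes.
            d.loader().close();
        }
        // Dropping the last references lets GC collect the loaders and unload their
        // classes, freeing the compressed class space they occupied.
        DEPLOYMENTS.clear();
    }

    private PluginHost() {}
}

For runaway proxy generation, the cure is usually configuration instead: making sure the framework caches and reuses generated classes rather than minting a new one per call.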
| Feature | Quick Fix (Increase Space) | Real Fix (Code/Config Change) |
|---|---|---|
| Effort | Low: A single JVM flag change. | High: Requires debugging, code changes, and re-deployment. |
| Impact | Delays the OOM, but increases native memory footprint. | Permanently solves the OOM and reduces overall memory usage. |
| Risk | High: Masks the underlying problem, which could have other side effects. | Low: Once the root cause is correctly identified and fixed. |
| Best For | Emergency production hotfixes to buy time for a proper investigation. | Long-term application stability, performance, and health. |
Final Takeaways
Dealing with a compressed class space OOM can feel daunting, but it's a solvable problem with a systematic approach. Here's what to remember:
- An "endless" growth in compressed class space is almost always due to a ClassLoader leak or rampant dynamic class generation.
- Start your investigation with simple tools: JVM logging (-Xlog:class+load) and monitoring (jstat, jcmd).
- A heap dump analyzed with Eclipse MAT is your most powerful weapon for finding the precise GC root that's causing the leak.
- Increasing -XX:CompressedClassSpaceSize is a temporary patch to stop the bleeding, not a long-term solution.
- Focus your efforts on fixing the application code or framework configuration that's generating or leaking classes. Your application will be more stable and efficient for it.
Happy debugging!