
Java: Detect Memory Leak in Your Application

This is a recent problem that I faced. In this article I will show you how I detected memory leaks in a Java application that I had refactored.

If your application is leaking memory over time, you will eventually get a java.lang.OutOfMemoryError. Along the way, you may see your application garbage collecting more and more frequently in an attempt to reclaim free memory, until it finally runs out.
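
As a quick illustration of that failure mode, here is a minimal, hypothetical program (not taken from my application) that keeps every allocation reachable and eventually dies with an OutOfMemoryError:

import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Objects referenced from this static list can never be garbage collected
    private static final List<byte[]> retained = new ArrayList<byte[]>();

    public static void main(String[] args) {
        while (true) {
            retained.add(new byte[1024 * 1024]); // retain 1 MB per iteration, forever
        }
        // Eventually throws: java.lang.OutOfMemoryError: Java heap space
    }
}

Run it with a small heap (e.g. java -Xmx16m LeakDemo) and the error shows up within seconds; a real leak behaves the same way, just more slowly.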

The application runs on the HP-UX 11i platform using JDK 1.5.0.9. To troubleshoot this memory issue, there is a free tool available for HP-UX: HPjmeter. Similar tools exist for other platforms as well, e.g. JProbe, so the approach to resolving the memory issue should be roughly the same.

Now I know my application is leaking memory and eventually core dumps, so I need to monitor it over time. To do this, I use the HPjmeter Node Agent and enable the JVM agent in my startup script with the “-agentlib:jmeter” option.

E.g.

/opt/java1.5/bin/java -Xms256m -Xmx512m -agentlib:jmeter myapp

Note that enabling this option makes my application run slower.

Once my application is started, I can connect to the JVM agent using the HPjmeter console.

[Image: HPjmeter console connected to the node agent]

You can configure HPjmeter to monitor different metrics, apply filters, and raise alerts.

You can monitor Java Method HotSpots, the Thread Histogram, Loaded Classes, and more.

[Image: HPjmeter monitor options]

You can do filtering for different application servers.

[Image: HPjmeter filter options]

Most importantly, it can alert us about Heap Usage, Memory Leak detection, Out of Memory errors, Thread Deadlock, and other abnormal behaviors.

[Image: HPjmeter alert options]

Using JUnit, I simulate the input traffic for my application. After a few hours, I can clearly see that there is a memory leak.

[Image: memory leak detected during the JUnit-driven test]

From the Heap Usage After Garbage Collection graph, I can see heap usage climbing steadily over time.

[Image: Heap Usage After Garbage Collection graph trending upward]

At this point, I only see the alert for the Out of Memory error; there is no Memory Leak alert pointing me to the offending code. Too bad…

However, using HPjmeter I can see the current live heap objects.

[Image: Current Live Heap Objects view]

When the application has just started, the current live heap objects look like this:

[Image: current live heap objects at startup]

After a few hours of running:

[Image: current live heap objects after a few hours]

So the initial suspect is DelegatingResultSet, which indirectly uses ArrayList.

Code analysis of all the legacy code using ResultSet turns up the following pattern.

import java.sql.*;

public class DataAccess {
    Connection myConn;

    public void init() {
        // Set up connection
    }

    public void executeSelect(String id) {
        try {
            // Neither the Statement nor the ResultSet is ever closed
            Statement stmt = myConn.createStatement();
            ResultSet rs = stmt.executeQuery(....);
            // ... process the results ...
        } catch (SQLException e) {
            // ... error handling ...
        }
    }
}

Obviously the ResultSet is not closed, but this code was written five years ago, so why does the problem only show up now?

As part of the code refactoring, I changed the database connection pooling from a custom-developed connection pooling class to Apache Commons DBCP. Searching the Internet, I found the following:

A Java Enterprise Edition (Java EE) application usually connects to the database by either making a direct connection to the database using JDBC thin drivers provided by the database vendor or creating a pool of database connections within the Java EE container using the JDBC drivers. If the application directly connects to the database, then on calling the close() method on the connection object, the database connection closes and the associated Statement and ResultSet objects close and are garbage collected. If a connection pool is used, a request to the database is made using one of the existing connections in the pool. In this case, on calling close() on the connection object, the database connection returns to the pool. So merely closing the connection does not automatically close the ResultSet and Statement objects. As a result, ResultSet and Statement will not become eligible for garbage collection, as they continue to remain tagged with the database connection in the connection pool.
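
To make the quoted behaviour concrete, here is a small sketch using Apache Commons DBCP's BasicDataSource; the driver class, URL, credentials, and query are placeholders, not the ones from my application:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.commons.dbcp.BasicDataSource;

public class PooledCloseDemo {
    public static void main(String[] args) throws Exception {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("com.example.Driver");  // placeholder
        ds.setUrl("jdbc:example://localhost/mydb");   // placeholder
        ds.setUsername("user");
        ds.setPassword("pass");

        Connection conn = ds.getConnection();          // borrowed from the pool
        Statement stmt = conn.createStatement();       // DBCP hands back a DelegatingStatement
        ResultSet rs = stmt.executeQuery("SELECT 1");  // ... and a DelegatingResultSet

        // close() here only returns the connection to the pool; per the quote
        // above, the Statement and ResultSet stay tagged to the pooled
        // connection instead of becoming eligible for garbage collection.
        conn.close();
    }
}

This also explains why the leaked objects show up as DelegatingResultSet in the heap analysis: that is DBCP's wrapper around the driver's own ResultSet.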

So this problem is due to:

  • Improper coding: every open ResultSet must be closed explicitly.
  • Code refactoring to use Apache Commons DBCP, which is why I see DelegatingResultSet objects leaking.

A simple experiment demonstrates this:

public void testLeak() {
    try {
        for (int i = 0; i < 1000; i++) {
            executeSelect();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public static void main(String args[]) {
    try {
        System.gc();
        System.out.println("Total Memory: " + Runtime.getRuntime().totalMemory());
        System.out.println("Free Memory: " + Runtime.getRuntime().freeMemory());
        TestMemoryLeak t = TestMemoryLeak.getInstance("DB");
        t.testLeak();
        System.gc();
        System.out.println("Total Memory: " + Runtime.getRuntime().totalMemory());
        System.out.println("Free Memory: " + Runtime.getRuntime().freeMemory());
    } catch (Exception e) {
        e.printStackTrace();
    }
}

Without Closing the ResultSet

Before
Total Memory: 266403840
Free Memory: 265129216

After
Total Memory: 266403840
Free Memory: 262457176

With ResultSet Closed

Before
Total Memory: 266403840
Free Memory: 265129216

After
Total Memory: 266403840
Free Memory: 262753328

The leakage is roughly 296 KB (262,753,328 - 262,457,176 = 296,152 bytes), or about 296 bytes per unclosed ResultSet across the 1,000 iterations.

I fixed all the classes to close the ResultSet in a finally block. After fixing the code, I tested the application with HPjmeter again. There is no more leakage, either in the Current Live Heap Objects analysis or in the Heap Usage After Garbage Collection graph.
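
For illustration, a minimal sketch of the fixed pattern on JDK 1.5 (no try-with-resources yet; the query is elided as before):

public void executeSelect(String id) {
    Statement stmt = null;
    ResultSet rs = null;
    try {
        stmt = myConn.createStatement();
        rs = stmt.executeQuery(....);
        // ... process the results ...
    } catch (SQLException e) {
        // ... error handling ...
    } finally {
        // Close in reverse order of creation, even if an exception was thrown
        if (rs != null) {
            try { rs.close(); } catch (SQLException ignore) { }
        }
        if (stmt != null) {
            try { stmt.close(); } catch (SQLException ignore) { }
        }
    }
}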

[Image: heap usage stable after the fix]

References

HPROF: A Heap/CPU Profiling Tool in J2SE 5.0

Java Performance Tuning

Heap Analysis Tool 1.1 (HAT), now part of JDK 1.6 as jhat

JDK 6 Project

Plugging memory leaks with weak references

Plugging memory leaks with soft references

Heap dumps are back with a vengeance

Plug memory leaks in enterprise Java applications

