
Oracle TEMP Tablespace Full When Query Returning Large Results

This is a problem I encountered with Oracle 10g.

I have a connection pool defined, and when I repeatedly used it to retrieve a large number of records from a table, the TEMP tablespace eventually filled up. The space was never released until I terminated the Java program.

While reading the Oracle documentation on memory architecture, I noted the following:

Oracle uses memory to store information such as the following:

  • Program code
  • Information about a connected session, even if it is not currently active
  • Information needed during program execution (for example, the current state of a query from which rows are being fetched)
  • Information that is shared and communicated among Oracle processes (for example, locking information)
  • Cached data that is also permanently stored on peripheral memory (for example, data blocks and redo log entries)

And then, for the SQL Work Area:

For complex queries (for example, decision-support queries), a big portion of the runtime area is dedicated to work areas allocated by memory-intensive operators such as the following:

  • Sort-based operators (order by, group-by, rollup, window function)
  • Hash-join
  • Bitmap merge
  • Bitmap create

For example, a sort operator uses a work area (sometimes called the sort area) to perform the in-memory sort of a set of rows. Similarly, a hash-join operator uses a work area (also called the hash area) to build a hash table from its left input. If the amount of data to be processed by these two operators does not fit into a work area, the input data is divided into smaller pieces. This allows some data pieces to be processed in memory while the rest are spilled to temporary disk storage to be processed later. Although bitmap operators do not spill to disk when their associated work area is too small, their complexity is inversely proportional to the size of their work area. Thus, these operators run faster with larger work area.
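To make this concrete, here is a hypothetical query (the orders table and its columns are illustrative, not from my schema) whose sort-based operator needs a large work area; when the sorted rows do not fit in memory, Oracle spills the intermediate results to the TEMP tablespace:

```sql
-- Hypothetical example: sorting a large table forces a big sort work area.
-- If the sorted set does not fit in memory, Oracle spills to TEMP.
SELECT customer_id, order_date, amount
FROM   orders             -- assumed to hold millions of rows
ORDER  BY order_date;     -- sort-based operator that may spill to disk
```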

To resolve the problem, I had to use rownum to limit the number of records returned.
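The workaround can be sketched like this (table and column names are hypothetical); wrapping the ordered query in a subquery and filtering on rownum caps how many rows each fetch has to process:

```sql
-- Fetch at most 1000 rows per query instead of the whole table.
-- rownum must be applied in an outer query, after the ORDER BY,
-- or the rows are numbered before they are sorted.
SELECT *
FROM   (SELECT * FROM orders ORDER BY order_id)  -- hypothetical table/column
WHERE  rownum <= 1000;
```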

Alternatively, I can use the “first_rows” optimization; see here.

Another way I could have tried is to set some session parameters like “sort_area_size”, “hash_area_size”, “sort_area_retained_size”. See here and here and here.
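If I had gone that route, the session settings might look like the sketch below. Note that these parameters only take effect when the session uses manual work-area sizing (workarea_size_policy = MANUAL); the sizes here are illustrative, not tuned values:

```sql
-- Switch the session to manual work-area sizing, then enlarge the sort
-- and hash areas so fewer operations spill to TEMP. Sizes are examples only.
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 10485760;           -- 10 MB sort work area
ALTER SESSION SET sort_area_retained_size = 10485760;  -- memory retained after the sort
ALTER SESSION SET hash_area_size = 20971520;           -- 20 MB hash work area
```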

