In computer science, a mark–compact algorithm is a type of garbage collection algorithm used to reclaim unreachable memory. Mark–compact algorithms can be regarded as a combination of the mark–sweep algorithm and Cheney's copying algorithm. First, reachable objects are marked; then a compacting step relocates the reachable (marked) objects towards the beginning of the heap area. Compacting garbage collection is used by modern JVMs, Microsoft's Common Language Runtime, and by the Glasgow Haskell Compiler.
Algorithms
After marking the live objects in the heap in the same fashion as the
mark–sweep algorithm, the heap will often be
fragmented. The goal of mark–compact algorithms is to shift the live objects in memory together so that the fragmentation is eliminated. The challenge is to correctly update all pointers to the moved objects, most of which will have new memory addresses after the compaction. Different mark–compact algorithms handle these pointer updates in different ways.
Table-based compaction

A table-based algorithm was first described by Haddon and Waite in 1967. It preserves the relative placement of the live objects in the heap, and requires only a constant amount of overhead.
Compaction proceeds from the bottom of the heap (low addresses) to the top (high addresses). As live (that is, marked) objects are encountered, they are moved to the first available low address, and a record is appended to a break table of relocation information. For each live object, a record in the break table consists of the object's original address before the compaction and the difference between the original address and the new address after compaction. The break table is stored in the heap that is being compacted, but in an area that is marked as unused. To ensure that compaction will always succeed, the minimum object size in the heap must be at least as large as a break table record.
As compaction progresses, relocated objects are copied towards the bottom of the heap. Eventually an object will need to be copied into the space occupied by the break table, which must then be relocated elsewhere. These movements of the break table (called ''rolling the table'' by the authors) cause the relocation records to become disordered, requiring the break table to be
sorted after the compaction is complete. The cost of sorting the break table is ''
O''(''n'' log ''n''), where ''n'' is the number of live objects that were found in the mark stage of the algorithm.
Finally, the break table relocation records are used to adjust pointer fields inside the relocated objects. The live objects are examined for pointers, each of which can be looked up in the sorted break table of size ''n'' in ''O''(log ''n'') time, for a total running time of ''O''(''n'' log ''n''). Each pointer is then adjusted by the displacement recorded in its break table entry.
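The break table scheme above can be sketched as follows. This is a minimal illustration, not Haddon and Waite's implementation: objects are modeled as (address, size) pairs, the "rolling" of the table around the heap is omitted, and all names are illustrative.

```python
from bisect import bisect_right

def compact_with_break_table(live_objects, pointers):
    """Slide live objects to the bottom of the heap and record a break table.

    live_objects: list of (address, size) pairs in ascending address order.
    pointers: list of original addresses that must be updated after the move.
    Returns (break_table, adjusted_pointers).
    """
    break_table = []          # (original_address, displacement) records
    free = 0                  # next available low address
    for addr, size in live_objects:
        break_table.append((addr, addr - free))  # how far the object moves down
        free += size
    # In a real collector the table is rolled around the heap during the
    # sliding phase and ends up disordered; sorting it is the
    # O(n log n) step described above.
    break_table.sort()
    starts = [rec[0] for rec in break_table]

    def relocate(old_ptr):
        # Find the record for the object containing old_ptr: the last
        # record whose original address is <= old_ptr, in O(log n) time.
        i = bisect_right(starts, old_ptr) - 1
        return old_ptr - break_table[i][1]

    return break_table, [relocate(p) for p in pointers]
```

For example, live objects at addresses 0, 4 and 10 with sizes 2, 3 and 1 slide down to addresses 0, 2 and 5, and a pointer to address 4 is adjusted to 2.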
LISP 2 algorithm
In order to avoid ''O''(''n'' log ''n'') complexity, the LISP 2 algorithm uses three different passes over the heap. In addition, heap objects must have a separate forwarding pointer slot that is not used outside of garbage collection.
After standard marking, the algorithm proceeds in the following three passes:
# Compute the forwarding location for live objects.
#* Keep track of a ''free'' and ''live'' pointer and initialize both to the start of heap.
#* If the ''live'' pointer points to a live object, update that object's forwarding pointer to the current ''free'' pointer and increment the ''free'' pointer according to the object's size.
#* Move the ''live'' pointer to the next object.
#* End when the ''live'' pointer reaches the end of heap.
# Update all pointers
#* For each live object, update its pointers according to the forwarding pointers of the objects they point to.
# Move objects
#* For each live object, move its data to its forwarding location.
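The three passes above can be sketched as follows. The heap model is an illustrative assumption: a list of objects sorted by address, each carrying a size, a mark bit, pointer fields holding the addresses of other objects, and the extra forwarding slot the algorithm requires.

```python
def lisp2_compact(heap):
    """Compact a heap of dicts with keys addr, size, live, fields, forward."""
    objs = {o['addr']: o for o in heap}   # index objects by original address

    # Pass 1: compute forwarding addresses.
    free = 0
    for o in heap:                        # the 'live' pointer scans every object
        if o['live']:
            o['forward'] = free
            free += o['size']

    # Pass 2: update all pointer fields via the forwarding slots.
    for o in heap:
        if o['live']:
            o['fields'] = [objs[p]['forward'] for p in o['fields']]

    # Pass 3: move each live object down to its forwarding address.
    compacted = []
    for o in heap:
        if o['live']:
            o['addr'] = o['forward']
            compacted.append(o)
    return compacted
```

Note that the pointer-update pass must run before the move pass: once objects are moved, their old addresses (the keys of the forwarding information) would be lost.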
This algorithm is ''O''(''n'') in the size of the heap. It has a better asymptotic complexity than the table-based approach, but the table-based approach's ''n'' counts only the used (live) space, whereas the LISP 2 algorithm's passes touch the entire heap. The LISP 2 algorithm is, however, simpler to implement.
The Compressor
The Compressor compaction algorithm has the lowest complexity among compaction algorithms known today. It extends IBM's garbage collector for Java.
The serial version of the Compressor maintains a relocation map that maps the old address of each object to its new address (i.e., its address before compaction is mapped to its address after compaction). In a first pass, the mapping is computed for all objects in the heap. In a second pass, each object is moved to its new location (compacted to the beginning of the heap), and all pointers within it are modified according to the relocation map.
The computation of the relocation map in the first pass can be made very efficient by working with small tables that do not require a pass over the entire heap. This keeps the Compressor complexity low, involving one pass over small tables and one pass over the full heap. This represents the best-known complexity for compaction algorithms.
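One way to realize such small tables, sketched below under assumptions of our own (a per-word mark bitmap and a fixed block size; both are illustrative, not the Compressor's exact data layout), is to divide the heap into fixed-size blocks and store one offset per block. An object's new address is then recovered on demand from the block offset plus the count of live words preceding it within its block, so no per-object relocation record is needed.

```python
BLOCK = 8  # block size in heap words (illustrative)

def build_offset_table(mark_bits):
    """First pass: one entry per block, from a mark bitmap.

    mark_bits[i] is 1 iff heap word i belongs to a live object.
    offsets[b] is the new address of the first live word in block b.
    """
    offsets = []
    new_addr = 0
    for start in range(0, len(mark_bits), BLOCK):
        offsets.append(new_addr)
        new_addr += sum(mark_bits[start:start + BLOCK])
    return offsets

def new_address(old_addr, mark_bits, offsets):
    """Map an old address to its post-compaction address on demand."""
    block = old_addr // BLOCK
    # block offset plus live words preceding old_addr inside the block
    return offsets[block] + sum(mark_bits[block * BLOCK:old_addr])
```

Because the offset table has only one entry per block, computing it requires a pass over the bitmap and the small table rather than over every object, which is what keeps the first pass cheap.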
The Compressor also has a parallel version in which multiple compacting threads can work together to compact all objects in parallel. The Compressor also has a concurrent version in which compacting threads can work concurrently with the program, carefully allowing the program to access objects as they are being moved towards the beginning of the heap. The parallel and concurrent versions of the Compressor make use of virtual memory primitives.
See also
* Dead-code elimination
* Tracing garbage collection
References
{{DEFAULTSORT:Mark-Compact Algorithm}}
Automatic memory management
Memory management algorithms