In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out, sometimes called swap out, or write to disk, when a page of memory needs to be allocated.
Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.
When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the ''quality'' of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.
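To make the miss-counting idea concrete, the following sketch replays a reference string against a small set of page frames using least-recently-used eviction and reports how many page misses occur. It is not taken from any particular system; the frame count and reference string are invented purely for illustration.

<syntaxhighlight lang="c">
#include <stdio.h>

#define FRAMES 3          /* number of physical page frames (assumed) */
#define EMPTY  -1

/* Count page faults for a reference string using LRU eviction.
 * frames[] holds resident page numbers; stamp[] holds the time of each
 * frame's most recent use, so the smallest stamp marks the LRU victim. */
static int count_faults(const int *refs, int n)
{
    int frames[FRAMES], stamp[FRAMES];
    int faults = 0;

    for (int i = 0; i < FRAMES; i++) {
        frames[i] = EMPTY;
        stamp[i] = 0;
    }

    for (int t = 0; t < n; t++) {
        int hit = -1, victim = 0;

        for (int i = 0; i < FRAMES; i++)
            if (frames[i] == refs[t])
                hit = i;

        if (hit < 0) {                    /* page miss: evict the LRU frame */
            for (int i = 1; i < FRAMES; i++)
                if (stamp[i] < stamp[victim])
                    victim = i;
            frames[victim] = refs[t];
            hit = victim;
            faults++;
        }
        stamp[hit] = t + 1;               /* record most recent use */
    }
    return faults;
}

int main(void)
{
    /* Hypothetical reference string used only for illustration. */
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    int n = sizeof refs / sizeof refs[0];

    printf("page faults with %d frames: %d\n", FRAMES, count_faults(refs, n));
    return 0;
}
</syntaxhighlight>

In these terms, a better replacement policy is simply one that produces a lower fault count on the workloads the system actually sees, at an acceptable bookkeeping cost.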
The page replacement problem is a typical
online problem from the competitive analysis perspective in the sense that the optimal deterministic algorithm is known.
History
Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s.
That mostly ended with the development of sophisticated
LRU (least recently used) approximations and
working set algorithms. Since then, some basic assumptions made by the traditional page replacement algorithms were invalidated, resulting in a revival of research. In particular, the following trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms:
* Size of primary storage has increased by multiple orders of magnitude. With several gigabytes of primary memory, algorithms that require a periodic check of each and every memory frame are becoming less and less practical.
* Memory hierarchies have grown taller. The cost of a
CPU cache miss is far greater. This exacerbates the previous problem.
*
Locality of reference of user software has weakened. This is mostly attributed to the spread of
object-oriented programming techniques that favor large numbers of small functions, use of sophisticated data structures like
trees and
hash tables that tend to result in chaotic memory reference patterns, and the advent of
garbage collection that drastically changed memory access behavior of applications.
Requirements for page replacement algorithms have changed due to differences in operating system
kernel architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files. The latter pages have specific properties. For example, they can be locked, or can have write ordering requirements imposed by
journaling. Moreover, as the goal of page replacement is to minimize total time waiting for memory, it has to take into account memory requirements imposed by other kernel sub-systems that allocate memory. As a result, page replacement in modern kernels (Linux, FreeBSD, and Solaris) tends to work at the level of a general purpose kernel memory allocator, rather than at the higher level of a virtual memory subsystem.
Local vs. global replacement
Replacement algorithms can be ''local'' or ''global.''
When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a
memory partition).
A global replacement algorithm is free to select any page in memory.
Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. The most popular forms of partitioning are ''fixed partitioning'' and ''balanced set'' algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However, global page replacement is more efficient on an overall system basis.
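As a rough sketch of the distinction (the frame table and timestamps below are hypothetical, and real kernels track recency only approximately), local replacement restricts the victim search to the faulting process's own frames, while global replacement searches all of them:

<syntaxhighlight lang="c">
#include <stdio.h>

#define NFRAMES 8

/* One physical frame: which process owns it and when it was last used. */
struct frame {
    int  owner;         /* process id owning the page in this frame */
    long last_used;     /* timestamp of most recent access */
};

/* Global replacement: pick the least recently used frame regardless of owner. */
static int pick_global(const struct frame *f, int n)
{
    int victim = 0;
    for (int i = 1; i < n; i++)
        if (f[i].last_used < f[victim].last_used)
            victim = i;
    return victim;
}

/* Local replacement: pick the least recently used frame belonging to the
 * faulting process only; return -1 if the process owns no frames. */
static int pick_local(const struct frame *f, int n, int pid)
{
    int victim = -1;
    for (int i = 0; i < n; i++)
        if (f[i].owner == pid &&
            (victim < 0 || f[i].last_used < f[victim].last_used))
            victim = i;
    return victim;
}

int main(void)
{
    /* Hypothetical frame table used only for illustration. */
    struct frame frames[NFRAMES] = {
        {1, 40}, {1, 10}, {2, 5}, {2, 90},
        {1, 70}, {3, 2},  {3, 60}, {2, 30},
    };

    printf("global victim: frame %d\n", pick_global(frames, NFRAMES));
    printf("local victim for pid 1: frame %d\n", pick_local(frames, NFRAMES, 1));
    return 0;
}
</syntaxhighlight>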
Detecting which pages are referenced and modified
Modern general purpose computers and some embedded processors have support for
virtual memory. Each process has its own virtual address space. A
page table maps a subset of the process virtual addresses to physical addresses. In addition, in most architectures the page table holds an "access" bit and a "dirty" bit for each page. The CPU sets the access bit when the process reads or writes memory in that page. The CPU sets the dirty bit when the process writes memory in that page. The operating system can modify the access and dirty bits. The operating system can detect accesses to memory and files through the following means:
* By clearing the access bit in pages present in the process' page table. After some time, the OS scans the page table looking for pages that had the access bit set by the CPU. This is fast, because the access bit is set automatically by the CPU, but inaccurate, because the OS does not immediately receive notice of the access, nor does it have information about the order in which the process accessed these pages.
* By removing pages from the process' page table without necessarily removing them from physical memory. The next access to that page is detected immediately because it causes a
page fault. This is slow, because a page fault involves a context switch to the OS, a software lookup for the corresponding physical address, modification of the page table, and a context switch back to the process, but accurate, because the access is detected immediately after it occurs.
* Directly, when the process makes system calls that potentially access the page cache, like read and write in POSIX.
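The access-bit scan from the first bullet above can be sketched as follows. The page-table layout and the aging counter are simplifications invented for the example, not any specific kernel's data structures.

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdbool.h>

#define NPAGES 8

/* Simplified page-table entry: in real hardware the access ("referenced")
 * bit is set by the CPU on any read or write to the page; here a fake
 * workload sets it by hand so the sketch stays self-contained. */
struct pte {
    bool     accessed;
    unsigned age;       /* software counter maintained by the scan below */
};

/* One pass of the access-bit scan: pages whose bit was set by the CPU since
 * the last pass are considered recently used (age reset), the bit is cleared
 * so the next interval can be observed, and untouched pages age. */
static void scan_access_bits(struct pte *pt, int n)
{
    for (int i = 0; i < n; i++) {
        if (pt[i].accessed) {
            pt[i].age = 0;
            pt[i].accessed = false;   /* clear so the CPU can set it again */
        } else {
            pt[i].age++;              /* not referenced during this interval */
        }
    }
}

int main(void)
{
    struct pte pt[NPAGES] = {0};

    /* Fake "CPU activity" during one interval: pages 1, 3 and 6 touched. */
    pt[1].accessed = pt[3].accessed = pt[6].accessed = true;

    scan_access_bits(pt, NPAGES);

    for (int i = 0; i < NPAGES; i++)
        printf("page %d: age %u\n", i, pt[i].age);
    return 0;
}
</syntaxhighlight>

Pages that keep accumulating age across scans are the ones an LRU approximation would treat as candidates for replacement.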
Precleaning
Most replacement algorithms simply return the target page as their result. This means that if the target page is ''dirty'' (that is, contains data that have to be written to the stable storage before the page can be reclaimed), I/O has to be initiated to send that page to the stable storage (to ''clean'' the page). In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with
full duplex channels to the stable storage, and cleaning was customarily overlapped with paging. Contemporary commodity hardware, on the other hand, does not support full duplex transfers, and cleaning of target pages becomes an issue.
To deal with this situation, various ''precleaning'' policies are implemented. Precleaning is the mechanism that starts I/O on dirty pages that are likely to be replaced soon. The idea is that by the time the precleaned page is actually selected for replacement, the I/O will have completed and the page will be clean. Precleaning assumes that it is possible to identify pages that will be replaced ''next''. Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement.
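A minimal sketch of such a policy, assuming a simple LRU list and an invented writeback routine (a real kernel would submit asynchronous I/O and track its completion), might look like this:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdbool.h>

#define NPAGES   16
#define PRECLEAN 4      /* how many pages from the LRU tail to preclean (assumed) */

struct page {
    int  id;
    bool dirty;
};

/* Stand-in for queuing a write of the page to stable storage. */
static void start_writeback(struct page *p)
{
    printf("precleaning page %d\n", p->id);
    p->dirty = false;   /* assume the write eventually completes */
}

/* Preclean sketch: the pages nearest the LRU tail are the likely next
 * replacement victims, so start I/O on the dirty ones now.  By the time
 * one of them is actually selected, it should already be clean. */
static void preclean(struct page *lru, int n)
{
    for (int i = n - PRECLEAN; i < n; i++)
        if (lru[i].dirty)
            start_writeback(&lru[i]);
}

int main(void)
{
    /* Hypothetical LRU list, most recently used first; dirtiness invented. */
    struct page lru[NPAGES];
    for (int i = 0; i < NPAGES; i++) {
        lru[i].id = i;
        lru[i].dirty = (i % 3 == 0);
    }

    preclean(lru, NPAGES);
    return 0;
}
</syntaxhighlight>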
Anticipatory paging
Some systems use
demand paging—waiting until a page is actually requested before loading it into RAM.
Other systems attempt to reduce latency by guessing which pages not in RAM are likely to be needed soon, and pre-loading such pages into RAM before they are requested. (This is often combined with pre-cleaning, which guesses which pages currently in RAM are not likely to be needed soon and pre-writes them out to storage.)
When a page fault occurs, "anticipatory paging" systems will not only bring in the referenced page, but also the next few consecutive pages (analogous to a
prefetch input queue in a CPU).
The
swap prefetch mechanism goes even further in loading pages (even if they are not consecutive) that are likely to be needed soon.
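As an illustration of the consecutive-page case described above (the page counts and the read_page routine are invented for the sketch, not any real kernel's interface), a fault handler doing anticipatory paging might read ahead like this:

<syntaxhighlight lang="c">
#include <stdio.h>
#include <stdbool.h>

#define NPAGES    64
#define READAHEAD 3     /* extra consecutive pages to bring in (assumed) */

static bool resident[NPAGES];   /* which virtual pages are currently in RAM */

/* Stand-in for reading one page from backing store into RAM. */
static void read_page(int page)
{
    resident[page] = true;
    printf("paged in %d\n", page);
}

/* Fault handler sketch: bring in the faulting page and, anticipating
 * sequential access, the next few consecutive pages that are not yet
 * resident (much like a prefetch input queue in a CPU). */
static void handle_fault(int page)
{
    read_page(page);
    for (int i = 1; i <= READAHEAD && page + i < NPAGES; i++)
        if (!resident[page + i])
            read_page(page + i);
}

int main(void)
{
    handle_fault(10);   /* brings in pages 10..13 */
    handle_fault(40);   /* brings in pages 40..43 */
    return 0;
}
</syntaxhighlight>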
The (h,k)-paging problem
The (h,k)-paging problem is a generalization of the model of the paging problem: let h, k be positive integers such that h ≤ k. We measure the performance of an algorithm with cache of size h ≤ k relative to the theoretically optimal page replacement algorithm. If