Randomized algorithm

A randomized algorithm is an algorithm that employs a degree of randomness as part of its logic or procedure. The algorithm typically uses uniformly random bits as an auxiliary input to guide its behavior, in the hope of achieving good performance in the "average case" over all possible choices of randomness determined by the random bits; thus either the running time, or the output (or both) are random variables.

There is a distinction between algorithms that use the random input so that they always terminate with the correct answer, but where the expected running time is finite (Las Vegas algorithms, for example Quicksort), and algorithms which have a chance of producing an incorrect result (Monte Carlo algorithms, for example the Monte Carlo algorithm for the minimum feedback arc set (MFAS) problem) or fail to produce a result either by signaling a failure or failing to terminate. In some cases, probabilistic algorithms are the only practical means of solving a problem.

In common practice, randomized algorithms are approximated using a pseudorandom number generator in place of a true source of random bits; such an implementation may deviate from the expected theoretical behavior and mathematical guarantees, which may depend on the existence of an ideal true random number generator.


Motivation

As a motivating example, consider the problem of finding an ‘''a''’ in an array of ''n'' elements.

Input: An array of ''n'' ≥ 2 elements, in which half are ‘''a''’s and the other half are ‘''b''’s.

Output: Find an ‘''a''’ in the array.

We give two versions of the algorithm, one Las Vegas algorithm and one Monte Carlo algorithm.

Las Vegas algorithm:

    findingA_LV(array A, n)
    begin
        repeat
            Randomly select one element out of n elements.
        until 'a' is found
    end

This algorithm succeeds with probability 1. The number of iterations varies and can be arbitrarily large, but the expected number of iterations is

: \lim_{n \to \infty} \sum_{i=1}^{n} \frac{i}{2^i} = 2

Since it is constant, the expected run time over many calls is \Theta(1). (See Big Theta notation.)

Monte Carlo algorithm:

    findingA_MC(array A, n, k)
    begin
        i := 0
        repeat
            Randomly select one element out of n elements.
            i := i + 1
        until i = k or 'a' is found
    end

If an ‘''a''’ is found, the algorithm succeeds, else the algorithm fails. After ''k'' iterations, the probability of finding an ‘''a''’ is

: \Pr[\mathrm{find~an~}a] = 1 - (1/2)^k

This algorithm does not guarantee success, but the run time is bounded. The number of iterations is always less than or equal to ''k''. Taking ''k'' to be constant, the run time (expected and absolute) is \Theta(1).
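A minimal Python sketch of the two procedures above (the array layout is only an illustration of the stated input assumptions):

    import random

    def findingA_LV(A):
        # Las Vegas version: always returns a correct answer,
        # but the number of iterations is a random variable (expected value 2).
        while True:
            if random.choice(A) == 'a':
                return 'a'

    def findingA_MC(A, k):
        # Monte Carlo version: at most k random probes, so the run time is bounded,
        # but the search may fail (probability (1/2)**k when half the entries are 'a').
        for _ in range(k):
            if random.choice(A) == 'a':
                return 'a'
        return None   # failure

    A = ['a', 'b'] * 8            # n = 16 elements, half 'a' and half 'b'
    print(findingA_LV(A))         # always prints 'a'
    print(findingA_MC(A, 3))      # prints 'a' with probability 1 - (1/2)**3 = 7/8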
Randomized algorithms are particularly useful when faced with a malicious "adversary" or attacker who deliberately tries to feed a bad input to the algorithm (see worst-case complexity and competitive analysis (online algorithm)), such as in the Prisoner's dilemma. It is for this reason that randomness is ubiquitous in cryptography. In cryptographic applications, pseudo-random numbers cannot be used, since the adversary can predict them, making the algorithm effectively deterministic. Therefore, either a source of truly random numbers or a cryptographically secure pseudo-random number generator is required. Another area in which randomness is inherent is quantum computing.
In the example above, the Las Vegas algorithm always outputs the correct answer, but its running time is a random variable. The Monte Carlo algorithm (related to the Monte Carlo method for simulation) is guaranteed to complete in an amount of time that can be bounded by a function of the input size and its parameter ''k'', but allows a ''small probability of error''. Observe that any Las Vegas algorithm can be converted into a Monte Carlo algorithm (via Markov's inequality), by having it output an arbitrary, possibly incorrect answer if it fails to complete within a specified time. Conversely, if an efficient verification procedure exists to check whether an answer is correct, then a Monte Carlo algorithm can be converted into a Las Vegas algorithm by running the Monte Carlo algorithm repeatedly till a correct answer is obtained.
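A rough Python sketch of the second conversion (the helper names monte_carlo_find_a and is_correct are illustrative, not taken from the text above): the Monte Carlo routine is simply rerun until an efficient verifier accepts its answer.

    import random

    def monte_carlo_find_a(A, k=1):
        # Monte Carlo subroutine: bounded run time, may fail (returns None).
        for _ in range(k):
            i = random.randrange(len(A))
            if A[i] == 'a':
                return i
        return None

    def is_correct(A, i):
        # Efficient verifier for this toy problem.
        return i is not None and A[i] == 'a'

    def las_vegas_find_a(A):
        # Rerun the Monte Carlo routine until the verifier accepts:
        # the answer is then always correct, but the running time becomes random.
        while True:
            i = monte_carlo_find_a(A)
            if is_correct(A, i):
                return i

    A = ['a', 'b'] * 8
    print(A[las_vegas_find_a(A)])   # always 'a'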


Computational complexity

Computational complexity theory models randomized algorithms as ''probabilistic Turing machines''. Both Las Vegas and Monte Carlo algorithms are considered, and several complexity classes are studied. The most basic randomized complexity class is RP, which is the class of decision problems for which there is an efficient (polynomial time) randomized algorithm (or probabilistic Turing machine) which recognizes NO-instances with absolute certainty and recognizes YES-instances with a probability of at least 1/2. The complement class for RP is co-RP. Problem classes having (possibly nonterminating) algorithms with polynomial time average case running time whose output is always correct are said to be in ZPP.

The class of problems for which both YES and NO-instances are allowed to be identified with some error is called BPP. This class acts as the randomized equivalent of P, i.e. BPP represents the class of efficient randomized algorithms.


Early history


Sorting

Quicksort was discovered by Tony Hoare in 1959, and subsequently published in 1961. In the same year, Hoare published the quickselect algorithm, which finds the median element of a list in linear expected time. It remained open until 1973 whether a deterministic linear-time algorithm existed.


Number theory

In 1917, Henry Cabourn Pocklington introduced a randomized algorithm known as Pocklington's algorithm for efficiently finding square roots modulo prime numbers. In 1970, Elwyn Berlekamp introduced a randomized algorithm for efficiently computing the roots of a polynomial over a finite field. In 1977, Robert M. Solovay and Volker Strassen discovered a polynomial-time randomized primality test (i.e., determining the primality of a number). Soon afterwards Michael O. Rabin demonstrated that the 1976 Miller's primality test could also be turned into a polynomial-time randomized algorithm. At that time, no provably polynomial-time deterministic algorithms for primality testing were known.


Data structures

One of the earliest randomized data structures is the hash table, which was introduced in 1953 by Hans Peter Luhn at IBM. Luhn's hash table used chaining to resolve collisions and was also one of the first applications of linked lists. Subsequently, in 1954, Gene Amdahl, Elaine M. McGraw, Nathaniel Rochester, and Arthur Samuel of IBM Research introduced linear probing, although Andrey Ershov independently had the same idea in 1957. In 1962, Donald Knuth performed the first correct analysis of linear probing, although the memorandum containing his analysis was not published until much later. The first published analysis was due to Konheim and Weiss in 1966.

Early works on hash tables either assumed access to a fully random hash function or assumed that the keys themselves were random. In 1979, Carter and Wegman introduced universal hash functions, which they showed could be used to implement chained hash tables with constant expected time per operation.

Early work on randomized data structures also extended beyond hash tables. In 1970, Burton Howard Bloom introduced an approximate-membership data structure known as the Bloom filter. In 1989, Raimund Seidel and Cecilia R. Aragon introduced a randomized balanced search tree known as the treap. In the same year, William Pugh introduced another randomized search tree known as the skip list.


Implicit uses in combinatorics

Prior to the popularization of randomized algorithms in computer science, Paul Erdős popularized the use of randomized constructions as a mathematical technique for establishing the existence of mathematical objects. This technique has become known as the probabilistic method. Erdős gave his first application of the probabilistic method in 1947, when he used a simple randomized construction to establish the existence of Ramsey graphs. He famously used a more sophisticated randomized algorithm in 1959 to establish the existence of graphs with high girth and chromatic number.


Examples


Quicksort

Quicksort is a familiar, commonly used algorithm in which randomness can be useful. Many deterministic versions of this algorithm require ''O''(''n''²) time to sort ''n'' numbers for some well-defined class of degenerate inputs (such as an already sorted array), with the specific class of inputs that generate this behavior defined by the protocol for pivot selection. However, if the algorithm selects pivot elements uniformly at random, it has a provably high probability of finishing in ''O''(''n'' log ''n'') time regardless of the characteristics of the input.
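A minimal Python sketch of quicksort with uniformly random pivot selection (illustrative, not an optimized in-place implementation):

    import random

    def randomized_quicksort(a):
        # Expected O(n log n) comparisons for every input ordering,
        # because the pivot is chosen uniformly at random.
        if len(a) <= 1:
            return a
        pivot = random.choice(a)
        less    = [x for x in a if x < pivot]
        equal   = [x for x in a if x == pivot]
        greater = [x for x in a if x > pivot]
        return randomized_quicksort(less) + equal + randomized_quicksort(greater)

    print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]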


Randomized incremental constructions in geometry

In computational geometry, a standard technique to build a structure like a convex hull or Delaunay triangulation is to randomly permute the input points and then insert them one by one into the existing structure. The randomization ensures that the expected number of changes to the structure caused by an insertion is small, and so the expected running time of the algorithm can be bounded from above. This technique is known as randomized incremental construction.
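The core effect can be seen on a much simpler "structure" than a convex hull: if items are inserted in random order and we track how often the current minimum changes, the expected number of structural changes over ''n'' insertions is only about ln ''n''. The following Python sketch is an illustrative analogue of the permute-then-insert pattern, not a full geometric construction:

    import random

    def count_structure_changes(values):
        # Insert values in a uniformly random order and count how many insertions
        # force the maintained "structure" (here simply the running minimum) to change.
        values = list(values)
        random.shuffle(values)            # the random permutation is the key step
        changes = 0
        current_min = None
        for v in values:
            if current_min is None or v < current_min:
                current_min = v           # this insertion changes the structure
                changes += 1
        return changes

    # Averaged over many shuffles, the count approaches H_n = 1 + 1/2 + ... + 1/n.
    trials = [count_structure_changes(range(1000)) for _ in range(200)]
    print(sum(trials) / len(trials))      # close to H_1000 ≈ 7.49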


Min cut

Input: A graph ''G''(''V'',''E'')

Output: A cut partitioning the vertices into ''L'' and ''R'', with the minimum number of edges between ''L'' and ''R''.

Recall that the contraction of two nodes, ''u'' and ''v'', in a (multi-)graph yields a new node ''u''′ with edges that are the union of the edges incident on either ''u'' or ''v'', except from any edge(s) connecting ''u'' and ''v''. Figure 1 gives an example of contraction of vertices ''A'' and ''B''. After contraction, the resulting graph may have parallel edges, but contains no self loops.

Karger's basic algorithm:

    begin
        i = 1
        repeat
            repeat
                Take a random edge (u,v) ∈ E in G
                Replace u and v with the contraction u'
            until only 2 nodes remain
            obtain the corresponding cut result Ci
            i = i + 1
        until i = m
        output the minimum cut among C1, C2, ..., Cm
    end

In each execution of the outer loop, the algorithm repeats the inner loop until only 2 nodes remain, and the corresponding cut is obtained. The run time of one execution is O(''n''), where ''n'' denotes the number of vertices. After ''m'' executions of the outer loop, we output the minimum cut among all the results. Figure 2 gives an example of one execution of the algorithm; after execution, we get a cut of size 3.
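A compact Python sketch of one contraction run (a plain edge-list multigraph with label merging; a production implementation would use union–find with path compression, and the outer loop of ''m'' repetitions from the pseudocode above is shown only in the last line):

    import random

    def karger_min_cut(edges, num_vertices):
        # Assumes a connected multigraph given as a list of (u, v) pairs
        # over vertices 0..num_vertices-1; parallel edges are allowed.
        edges = list(edges)
        labels = list(range(num_vertices))       # labels[v] points toward v's super-node

        def find(v):                             # follow merges to the current super-node
            while labels[v] != v:
                v = labels[v]
            return v

        remaining = num_vertices
        while remaining > 2:
            u, v = random.choice(edges)          # pick a uniformly random remaining edge
            ru, rv = find(u), find(v)
            labels[rv] = ru                      # contract: merge v's super-node into u's
            remaining -= 1
            edges = [(a, b) for (a, b) in edges  # discard edges that became self-loops
                     if find(a) != find(b)]
        return len(edges)                        # edges crossing the final 2-way cut

    graph = [(0, 1), (1, 2), (2, 3), (3, 0)]     # a 4-cycle; its minimum cut has size 2
    print(min(karger_min_cut(graph, 4) for _ in range(20)))   # almost surely prints 2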


Analysis of the algorithm

The probability that the algorithm succeeds is 1 minus the probability that all attempts fail. By independence, the probability that all attempts fail is

: \prod_{i=1}^m \Pr(C_i \neq C) = \prod_{i=1}^m (1 - \Pr(C_i = C)).

By lemma 1, the probability that C_i = C is the probability that no edge of ''C'' is selected during iteration ''i''. Consider the inner loop and let G_j denote the graph after ''j'' edge contractions, where j \in \{0, 1, \ldots, n-3\}. G_j has n - j vertices. We use the chain rule of conditional probabilities. The probability that the edge chosen at iteration ''j'' is not in ''C'', given that no edge of ''C'' has been chosen before, is 1 - \frac{k}{|E(G_j)|}. Note that G_j still has min cut of size ''k'', so by Lemma 2, it still has at least \frac{(n-j)k}{2} edges. Thus,

: 1 - \frac{k}{|E(G_j)|} \geq 1 - \frac{2}{n-j} = \frac{n-j-2}{n-j}.

So by the chain rule, the probability of finding the min cut ''C'' is

: \Pr[C_i = C] \geq \left(\frac{n-2}{n}\right)\left(\frac{n-3}{n-1}\right)\left(\frac{n-4}{n-2}\right)\ldots\left(\frac{3}{5}\right)\left(\frac{2}{4}\right)\left(\frac{1}{3}\right).

Cancellation gives \Pr[C_i = C] \geq \frac{2}{n(n-1)}. Thus the probability that the algorithm succeeds is at least 1 - \left(1 - \frac{2}{n(n-1)}\right)^m. For m = \frac{n(n-1)}{2}\ln n, this is equivalent to 1 - \frac{1}{n}. The algorithm finds the min cut with probability 1 - \frac{1}{n}, in time O(mn) = O(n^3 \log n).
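As a concrete instance of the bound (the value ''n'' = 10 is chosen here purely for illustration):

: \Pr[C_i = C] \geq \frac{2}{10 \cdot 9} = \frac{1}{45}, \qquad m = \frac{10 \cdot 9}{2}\ln 10 \approx 104, \qquad 1 - \left(1 - \frac{1}{45}\right)^{104} \geq 1 - \frac{1}{10},

so roughly a hundred independent contraction runs already push the failure probability below 1/''n'' for a 10-vertex graph.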


Derandomization

Randomness can be viewed as a resource, like space and time. Derandomization is then the process of ''removing'' randomness (or using as little of it as possible). It is not currently known if all algorithms can be derandomized without significantly increasing their running time. For instance, in computational complexity, it is unknown whether P = BPP, i.e., we do not know whether we can take an arbitrary randomized algorithm that runs in polynomial time with a small error probability and derandomize it to run in polynomial time without using randomness.

There are specific methods that can be employed to derandomize particular randomized algorithms:
* the method of conditional probabilities, and its generalization, pessimistic estimators (a small sketch follows this list)
* discrepancy theory (which is used to derandomize geometric algorithms)
* the exploitation of limited independence in the random variables used by the algorithm, such as the pairwise independence used in universal hashing
* the use of expander graphs (or dispersers in general) to ''amplify'' a limited amount of initial randomness (this last approach is also referred to as generating pseudorandom bits from a random source, and leads to the related topic of pseudorandomness)
* changing the randomized algorithm to use a hash function as a source of randomness for the algorithm's tasks, and then derandomizing the algorithm by brute-forcing all possible parameters (seeds) of the hash function. This technique is usually used to exhaustively search a sample space and make the algorithm deterministic (e.g. randomized graph algorithms)
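As a sketch of the first item (MAX-CUT is chosen here as a standard textbook illustration and is not discussed above): a uniformly random two-coloring of the vertices cuts each edge with probability 1/2, and the method of conditional probabilities fixes the vertices one at a time, always choosing the side that keeps the conditional expectation of the cut size at least as large.

    def derandomized_max_cut(num_vertices, edges):
        # Method of conditional probabilities for MAX-CUT: fix the vertices one at a
        # time, always choosing the side that keeps the conditional expectation of the
        # cut size (undecided endpoints count 1/2 per edge) as large as possible.
        # The resulting cut contains at least half of the edges.
        side = {}
        for v in range(num_vertices):
            gain = {0: 0, 1: 0}
            for (a, b) in edges:
                if v == a and b in side:
                    other = side[b]
                elif v == b and a in side:
                    other = side[a]
                else:
                    continue                 # edge not incident to v, or neighbour undecided
                gain[1 - other] += 1         # putting v opposite a decided neighbour cuts the edge
            side[v] = 0 if gain[0] >= gain[1] else 1
        cut_size = sum(1 for (a, b) in edges if side[a] != side[b])
        return side, cut_size

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    print(derandomized_max_cut(4, edges))    # cuts 4 of the 5 edges here (guarantee: at least 3)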


Where randomness helps

When the model of computation is restricted to Turing machines, it is currently an open question whether the ability to make random choices allows some problems to be solved in polynomial time that cannot be solved in polynomial time without this ability; this is the question of whether P = BPP. However, in other contexts, there are specific examples of problems where randomization yields strict improvements.
* Based on the initial motivating example: given an exponentially long string of 2^k characters, half a's and half b's, a random-access machine requires 2^{k-1} lookups in the worst case to find the index of an ''a''; if it is permitted to make random choices, it can solve this problem in an expected polynomial number of lookups.
* The natural way of carrying out a numerical computation in embedded systems or cyber-physical systems is to provide a result that approximates the correct one with high probability (or Probably Approximately Correct Computation (PACC)). The hard problem associated with the evaluation of the discrepancy loss between the approximated and the correct computation can be effectively addressed by resorting to randomization.
* In communication complexity, the equality of two strings can be verified to some reliability using \log n bits of communication with a randomized protocol. Any deterministic protocol requires \Theta(n) bits if defending against a strong opponent. (A sketch of such a fingerprinting protocol follows this list.)
* The volume of a convex body can be estimated by a randomized algorithm to arbitrary precision in polynomial time. Bárány and Füredi showed that no deterministic algorithm can do the same. This is true unconditionally, i.e. without relying on any complexity-theoretic assumptions, assuming the convex body can be queried only as a black box.
* A more complexity-theoretic example of a place where randomness appears to help is the class IP. IP consists of all languages that can be accepted (with high probability) by a polynomially long interaction between an all-powerful prover and a verifier that implements a BPP algorithm. IP = PSPACE. However, if it is required that the verifier be deterministic, then IP = NP.
* In a chemical reaction network (a finite set of reactions like A+B → 2C + D operating on a finite number of molecules), the ability to ever reach a given target state from an initial state is decidable, while even approximating the probability of ever reaching a given target state (using the standard concentration-based probability for which reaction will occur next) is undecidable. More specifically, a limited Turing machine can be simulated with arbitrarily high probability of running correctly for all time, only if a random chemical reaction network is used. With a simple nondeterministic chemical reaction network (any possible reaction can happen next), the computational power is limited to primitive recursive functions.
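A minimal Python sketch of the randomized equality test mentioned above (polynomial fingerprinting modulo a fixed prime; the prime and the byte encoding are illustrative choices, not mandated by the text):

    import random

    P = (1 << 61) - 1            # a fixed Mersenne prime, much larger than the string length

    def fingerprint(s, r):
        # Evaluate the polynomial whose coefficients are the bytes of s at the random point r, mod P.
        h = 0
        for byte in s.encode():
            h = (h * r + byte) % P
        return h

    def probably_equal(a, b):
        # One party sends (r, fingerprint(a, r)) -- only O(log n) bits -- and the other compares.
        # Distinct equal-length strings collide with probability at most len(a)/P.
        if len(a) != len(b):
            return False
        r = random.randrange(1, P)
        return fingerprint(a, r) == fingerprint(b, r)

    x = "randomized" * 1000
    print(probably_equal(x, "randomized" * 1000))    # True
    print(probably_equal(x, "randomised" * 1000))    # almost surely False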


See also

* Approximate counting algorithm
* Atlantic City algorithm
* Bogosort
* Count–min sketch
* HyperLogLog
* Karger's algorithm
* Las Vegas algorithm
* Monte Carlo algorithm
* Principle of deferred decision
* Probabilistic analysis of algorithms
* Probabilistic roadmap
* Randomized algorithms as zero-sum games


Notes


References

* Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. ''Introduction to Algorithms'', Second Edition. MIT Press and McGraw–Hill, 1990. Chapter 5: Probabilistic Analysis and Randomized Algorithms, pp. 91–122.
* Dirk Draheim. ''Semantics of the Probabilistic Typed Lambda Calculus (Markov Chain Semantics, Termination Behavior, and Denotational Semantics)''. Springer, 2017.
* Jon Kleinberg and Éva Tardos. ''Algorithm Design''. Chapter 13: "Randomized algorithms".
* M. Mitzenmacher and E. Upfal. ''Probability and Computing: Randomized Algorithms and Probabilistic Analysis''. Cambridge University Press, New York (NY), 2005.
* Rajeev Motwani and P. Raghavan. ''Randomized Algorithms''. Cambridge University Press, New York (NY), 1995.
* Rajeev Motwani and P. Raghavan. ''Randomized Algorithms''. A survey on randomized algorithms.
* Chapter 11: Randomized computation, pp. 241–278.
* A. A. Tsay, W. S. Lovejoy, David R. Karger. ''Random Sampling in Cut, Flow, and Network Design Problems''. Mathematics of Operations Research, 24(2):383–413, 1999.
* "Randomized Algorithms for Scientific Computing" (RASC), OSTI.GOV (July 10, 2021).