In mathematics, a low-discrepancy sequence is a sequence with the property that for all values of ''N'', its subsequence ''x''1, ..., ''x''''N'' has a low discrepancy.
Roughly speaking, the discrepancy of a sequence is low if the proportion of points in the sequence falling into an arbitrary set ''B'' is close to proportional to the measure of ''B'', as would happen on average (but not for particular samples) in the case of an equidistributed sequence. Specific definitions of discrepancy differ regarding the choice of ''B'' (hyperspheres, hypercubes, etc.) and how the discrepancy for every ''B'' is computed (usually normalized) and combined (usually by taking the worst value).
Low-discrepancy sequences are also called quasirandom sequences, due to their common use as a replacement of uniformly distributed random numbers. The "quasi" modifier is used to denote more clearly that the values of a low-discrepancy sequence are neither random nor pseudorandom, but such sequences share some properties of random variables, and in certain applications, such as the quasi-Monte Carlo method, their lower discrepancy is an important advantage.
Applications
Quasirandom numbers have an advantage over pure random numbers in that they cover the domain of interest quickly and evenly.
Two useful applications are in finding the characteristic function of a probability density function, and in finding the derivative function of a deterministic function with a small amount of noise. Quasirandom numbers allow higher-order moments to be calculated to high accuracy very quickly.
Applications that don't involve sorting would be in finding the mean, standard deviation, skewness and kurtosis of a statistical distribution, and in finding the integral and global maxima and minima of difficult deterministic functions. Quasirandom numbers can also be used for providing starting points for deterministic algorithms that only work locally, such as Newton–Raphson iteration.
Quasirandom numbers can also be combined with search algorithms. A binary tree Quicksort-style algorithm ought to work exceptionally well because quasirandom numbers flatten the tree far better than random numbers, and the flatter the tree, the faster the sorting. With a search algorithm, quasirandom numbers can be used to find the mode, median, confidence intervals and cumulative distribution of a statistical distribution, and all local minima and all solutions of deterministic functions.
Low-discrepancy sequences in numerical integration
At least three methods of numerical integration can be phrased as follows.
Given a set {''x''1, ..., ''x''''N''} in the interval [0, 1], approximate the integral of a function ''f'' as the average of the function evaluated at those points:
:<math>\int_0^1 f(u)\,du \approx \frac{1}{N}\,\sum_{i=1}^N f(x_i).</math>
If the points are chosen as ''x''''i'' = ''i''/''N'', this is the ''rectangle rule''.
If the points are chosen to be randomly (or pseudorandomly) distributed, this is the ''Monte Carlo method''.
If the points are chosen as elements of a low-discrepancy sequence, this is the ''quasi-Monte Carlo method''.
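As an illustration, the following sketch (plain Python standard library; the helper names and the test integrand are chosen only for demonstration) estimates <math>\textstyle\int_0^1 e^u\,du = e-1</math> with the rectangle rule, with pseudorandom points, and with a base-2 van der Corput low-discrepancy sequence:
<syntaxhighlight lang="python">
import math
import random

def van_der_corput(i, base=2):
    # i-th term of the van der Corput sequence: reflect the base-b digits
    # of i about the radix point (a standard 1-D low-discrepancy construction).
    q, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, remainder = divmod(i, base)
        q += remainder / denom
    return q

def average(points, f):
    # The common estimator: the average of f over the sample points.
    return sum(f(x) for x in points) / len(points)

N = 1024
exact = math.e - 1.0                                        # integral of exp on [0, 1]

rectangle   = [i / N for i in range(1, N + 1)]              # rectangle rule: x_i = i/N
monte_carlo = [random.random() for _ in range(N)]           # (pseudo)random points
quasi       = [van_der_corput(i) for i in range(1, N + 1)]  # low-discrepancy points

for name, pts in (("rectangle", rectangle),
                  ("Monte Carlo", monte_carlo),
                  ("quasi-Monte Carlo", quasi)):
    print(name, abs(average(pts, math.exp) - exact))
</syntaxhighlight>
For a smooth one-dimensional integrand such as this, the quasi-Monte Carlo error typically decays roughly like 1/''N'' up to a logarithmic factor, compared with the 1/√''N'' rate of the Monte Carlo estimate.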
A remarkable result, the Koksma–Hlawka inequality (stated below), shows that the error of such a method can be bounded by the product of two terms, one of which depends only on ''f'', and the other is the discrepancy of the set {''x''1, ..., ''x''''N''}.
It is convenient to construct the set {''x''1, ..., ''x''''N''} in such a way that if a set with ''N''+1 elements is constructed, the previous ''N'' elements need not be recomputed.
The rectangle rule uses a point set which has low discrepancy, but in general the elements must be recomputed if ''N'' is increased.
Elements need not be recomputed in the random Monte Carlo method if ''N'' is increased,
but the point sets do not have minimal discrepancy.
By using low-discrepancy sequences we aim for low discrepancy and no need for recomputation, but in fact low-discrepancy sequences can only be incrementally good on discrepancy if no recomputation is allowed.
Definition of discrepancy
The ''discrepancy'' of a set ''P'' = {''x''1, ..., ''x''''N''} is defined, using Niederreiter's notation, as
:<math>D_N(P) = \sup_{B\in J}\left| \frac{A(B;P)}{N} - \lambda_s(B)\right|</math>
where
λ''s'' is the ''s''-dimensional Lebesgue measure,
''A''(''B'';''P'') is the number of points in ''P'' that fall into ''B'',
and ''J'' is the set of ''s''-dimensional intervals or boxes of the form
:<math>\prod_{i=1}^s [a_i, b_i) = \{ \mathbf{x} \in \mathbf{R}^s : a_i \le x_i < b_i \}</math>
where <math>0 \le a_i < b_i \le 1</math>.
The ''star-discrepancy'' ''D''*''N''(''P'') is defined similarly, except that the supremum is taken over the set ''J''* of rectangular boxes of the form
:<math>\prod_{i=1}^s [0, u_i)</math>
where ''u''''i'' is in the half-open interval [0, 1).
The two are related by
:<math>D^*_N \le D_N \le 2^s D^*_N .</math>
Note: With these definitions, discrepancy represents the worst-case or maximum point density deviation of a uniform set. However, other error measures are also meaningful, leading to other definitions and variation measures. For instance, ''L''² discrepancy or modified centered ''L''² discrepancy are also used intensively to compare the quality of uniform point sets. Both are much easier to calculate for large ''N'' and ''s''.
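In one dimension the star-discrepancy can be evaluated exactly from the sorted points; the following sketch (plain Python, with the function name chosen only for illustration) does so and compares a centred grid with random points.
<syntaxhighlight lang="python">
import random

def star_discrepancy_1d(points):
    # Exact star-discrepancy of a one-dimensional point set in [0, 1).
    # For sorted points x_(1) <= ... <= x_(N) the supremum over boxes [0, u)
    # is attained at the sample points, giving
    #   D*_N = max_i max( i/N - x_(i), x_(i) - (i-1)/N ).
    xs = sorted(points)
    n = len(xs)
    return max(max(i / n - x, x - (i - 1) / n) for i, x in enumerate(xs, start=1))

# The centred grid (2i-1)/(2N) attains the optimal value 1/(2N) ...
print(star_discrepancy_1d([(2 * i - 1) / 20 for i in range(1, 11)]))   # 0.05
# ... while N independent random points typically do noticeably worse.
print(star_discrepancy_1d([random.random() for _ in range(10)]))
</syntaxhighlight>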
The Koksma–Hlawka inequality
Let Ī''s'' be the ''s''-dimensional unit cube, Ī''s'' = [0, 1] × ... × [0, 1].
Let ''f'' have bounded variation ''V''(''f'') on Ī''s'' in the sense of Hardy and Krause.
Then for any ''x''1, ..., ''x''''N'' in ''I''''s'' = [0, 1) × ... × [0, 1),
:<math>\left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right| \le V(f)\, D_N^* (x_1,\ldots,x_N).</math>
The Koksma–Hlawka inequality is sharp in the following sense: For any point set {''x''1, ..., ''x''''N''} in ''I''''s'' and any <math>\varepsilon>0</math>, there is a function ''f'' with bounded variation and ''V''(''f'') = 1 such that
:<math>\left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du \right| > D_{N}^{*}(x_1,\ldots,x_N)-\varepsilon.</math>
Therefore, the quality of a numerical integration rule depends only on the discrepancy ''D''*''N''(''x''1, ..., ''x''''N'').
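For example, in the one-dimensional case with a continuously differentiable integrand ''f'', the Hardy–Krause variation reduces to the ordinary total variation <math>V(f)=\textstyle\int_0^1|f'(u)|\,du</math>, and the inequality reads
:<math>\left| \frac{1}{N} \sum_{i=1}^N f(x_i) - \int_0^1 f(u)\,du \right| \le D_N^*(x_1,\ldots,x_N)\int_0^1 |f'(u)|\,du,</math>
so halving the star-discrepancy of the point set halves this worst-case bound on the integration error.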
The formula of Hlawka–Zaremba
Let <math>D=\{1,2,\ldots,d\}</math>. For <math>\emptyset\neq u\subseteq D</math> we write
:<math>dx_u:=\prod_{j\in u} dx_j</math>
and denote by <math>(x_u,1)</math> the point obtained from ''x'' by replacing the coordinates not in ''u'' by <math>1</math>. Then
:<math>\frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du = \sum_{\emptyset\neq u\subseteq D}(-1)^{|u|} \int_{[0,1]^{|u|}} \operatorname{disc}(x_u,1)\,\frac{\partial^{|u|}}{\partial x_u}f(x_u,1) \, dx_u,</math>
where
:<math>\operatorname{disc}(z)=\frac{1}{N}\sum_{i=1}^N \prod_{j=1}^d 1_{[0,z_j)}(x_{i,j}) - \prod_{j=1}^d z_j</math>
is the discrepancy function.
The ''L²'' version of the Koksma–Hlawka inequality
Applying the Cauchy–Schwarz inequality for integrals and sums to the Hlawka–Zaremba identity, we obtain an ''L''² version of the Koksma–Hlawka inequality:
:<math>\left|\frac{1}{N} \sum_{i=1}^N f(x_i) - \int_{\bar I^s} f(u)\,du\right| \le \|f\|_{d}\, \operatorname{disc}_{d} (\{x_i\}),</math>
where
:<math>\operatorname{disc}_{d}(\{x_i\})=\left(\sum_{\emptyset\neq u\subseteq D} \int_{[0,1]^{|u|}}\operatorname{disc}(x_u,1)^2 \, dx_u\right)^{1/2}</math>
and
:<math>\|f\|_{d}= \left(\sum_{u\subseteq D} \int_{[0,1]^{|u|}} \left|\frac{\partial^{|u|}}{\partial x_u}f(x_u,1)\right|^2 dx_u\right)^{1/2}.</math>
''L''² discrepancy has a high practical importance because fast explicit calculations are possible for a given point set. This makes it easy to create point set optimizers that use ''L''² discrepancy as a criterion.
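As an illustration of such an explicit calculation, the following sketch (plain Python; the function name is chosen only for illustration) evaluates the full-dimensional term <math>\textstyle\int_{[0,1]^d}\operatorname{disc}(z)^2\,dz</math> of the sum above, the squared ''L''² star discrepancy, via Warnock's formula, obtained by expanding the square and integrating each product of indicator functions in closed form:
<syntaxhighlight lang="python">
import math

def l2_star_discrepancy_sq(points):
    # Squared L2 star discrepancy of a point set in [0,1)^s:
    #   3^(-s)
    #   - (2/N) * sum_i prod_j (1 - x_ij^2)/2
    #   + (1/N^2) * sum_{i,k} prod_j (1 - max(x_ij, x_kj))
    n = len(points)
    s = len(points[0])
    cross = sum(math.prod((1.0 - x * x) / 2.0 for x in p) for p in points)
    pairs = sum(math.prod(1.0 - max(a, b) for a, b in zip(p, q))
                for p in points for q in points)
    return 3.0 ** (-s) - 2.0 * cross / n + pairs / (n * n)

# Example: a small two-dimensional lattice-type set versus points on the diagonal.
good = [(i / 32, (i * 13 % 32) / 32) for i in range(32)]
bad  = [(i / 32, i / 32) for i in range(32)]
print(l2_star_discrepancy_sq(good), l2_star_discrepancy_sq(bad))
</syntaxhighlight>
The cost is quadratic in the number of points but fully explicit, which is what makes optimization of point sets against this criterion practical.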
The Erdős–Turán–Koksma inequality
It is computationally hard to find the exact value of the discrepancy of large point sets. The Erdős–Turán–Koksma inequality provides an upper bound.
Let ''x''1, ..., ''x''''N'' be points in ''I''''s'' and ''H'' be an arbitrary positive integer. Then
:<math>D_{N}^{*}(x_1,\ldots,x_N)\le \left(\frac{3}{2}\right)^s \left( \frac{2}{H+1}+ \sum_{0<\|h\|_{\infty}\le H}\frac{1}{r(h)} \left| \frac{1}{N} \sum_{n=1}^{N} e^{2\pi i\langle h,x_n\rangle} \right| \right)</math>
where
:<math>r(h)=\prod_{i=1}^s\max\{1,|h_i|\}\qquad\text{for}\qquad h=(h_1,\ldots,h_s)\in\mathbb{Z}^s.</math>
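For small ''s'' and ''H'' the right-hand side can be evaluated directly; the following sketch (plain Python; function and variable names are chosen only for illustration) enumerates every nonzero integer vector ''h'' with <math>\|h\|_\infty\le H</math>:
<syntaxhighlight lang="python">
import cmath
import math
from itertools import product

def erdos_turan_koksma_bound(points, H):
    # Upper bound on the star discrepancy from the Erdos-Turan-Koksma
    # inequality, summing over all nonzero h with max-norm at most H.
    n = len(points)
    s = len(points[0])
    total = 2.0 / (H + 1)
    for h in product(range(-H, H + 1), repeat=s):
        if not any(h):                                   # skip h = 0
            continue
        r = math.prod(max(1, abs(c)) for c in h)         # r(h)
        exp_sum = sum(cmath.exp(2j * cmath.pi * sum(hj * xj for hj, xj in zip(h, x)))
                      for x in points) / n
        total += abs(exp_sum) / r
    return 1.5 ** s * total

# Example: a 64-point two-dimensional set, bounded with H = 4.
pts = [((i + 0.5) / 64, ((i * 21) % 64 + 0.5) / 64) for i in range(64)]
print(erdos_turan_koksma_bound(pts, H=4))                # an upper bound on D*_N
</syntaxhighlight>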
The main conjectures
Conjecture 1. There is a constant ''c''''s'' depending only on the dimension ''s'', such that
:<math>D_{N}^{*}(x_1,\ldots,x_N)\geq c_s\frac{(\ln N)^{s-1}}{N}</math>
for any finite point set {''x''1, ..., ''x''''N''}.
Conjecture 2. There is a constant ''c''′''s'' depending only on ''s'', such that
:<math>D_{N}^{*}(x_1,\ldots,x_N)\geq c'_s\frac{(\ln N)^{s}}{N}</math>
for infinitely many ''N'' for any infinite sequence ''x''1, ''x''2, ''x''3, ....
These conjectures are equivalent. They have been proved for ''s'' ≤ 2 by W. M. Schmidt. In higher dimensions, the corresponding problem is still open. The best-known lower bounds are due to Michael Lacey and collaborators.
Lower bounds
Let ''s'' = 1. Then
:<math>D_N^*(x_1,\ldots,x_N)\geq\frac{1}{2N}</math>
for any finite point set {''x''1, ..., ''x''''N''}.
Let ''s'' = 2. W. M. Schmidt proved that for any finite point set {''x''1, ..., ''x''''N''},
:<math>D_N^*(x_1,\ldots,x_N)\geq C\frac{\ln N}{N},</math>
where
:<math>C=\max_{a\geq3}\frac{1}{16}\,\frac{a-2}{a\ln a}=0.023335\ldots</math>
For arbitrary dimensions ''s'' > 1, K. F. Roth proved that
:<math>D_N^*(x_1,\ldots,x_N)\geq c_s\frac{(\ln N)^{\frac{s-1}{2}}}{N}</math>
for any finite point set {''x''1, ..., ''x''''N''}, where ''c''''s'' is a constant depending only on ''s''.
József Beck established a double log improvement of this result in three dimensions.
This was improved by D. Bilyk and M. T. Lacey to a power of a single logarithm. The best known bound for ''s'' > 2 is due to D. Bilyk, M. T. Lacey and A. Vagharshakyan: for ''s'' > 2 there are constants ''c''''s'' > 0 and ''t'' > 0 so that
:<math>D_N^*(x_1,\ldots,x_N)\geq c_s\frac{(\ln N)^{\frac{s-1}{2}+t}}{N}</math>
for any finite point set {''x''1, ..., ''x''''N''}.
Construction of low-discrepancy sequences
Because any distribution of random numbers can be mapped onto a uniform distribution, and quasirandom numbers are mapped in the same way, this article only concerns generation of quasirandom numbers on a multidimensional uniform distribution.
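For example, a one-dimensional low-discrepancy sequence that is uniform on [0, 1) can be pushed through the inverse cumulative distribution function of the target distribution, exactly as one would do with uniform pseudorandom numbers. The sketch below (plain Python standard library; the additive-recurrence generator is just one simple low-discrepancy choice) produces approximately standard-normal quasirandom numbers:
<syntaxhighlight lang="python">
import math
from statistics import NormalDist

def weyl_sequence(n, alpha=(math.sqrt(5) - 1) / 2):
    # Additive recurrence x_i = frac(i * alpha); with a badly approximable
    # irrational alpha (here 1/golden ratio) this is a low-discrepancy sequence.
    return [(i * alpha) % 1.0 for i in range(1, n + 1)]

# Map the uniform quasirandom numbers to the standard normal distribution
# via the inverse CDF, the same transform used for pseudorandom numbers.
normal_quasi = [NormalDist().inv_cdf(u) for u in weyl_sequence(1000)]
print(normal_quasi[:5])
</syntaxhighlight>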
There are constructions of sequences known such that
:<math>D_{N}^{*}(x_1,\ldots,x_N)\leq C\frac{(\ln N)^{s}}{N}</math>
where ''C'' is a certain constant, depending on the sequence. By Conjecture 2, these sequences are believed to have the best possible order of convergence. Examples below are the van der Corput sequence, the Halton sequences, and the Sobol sequences. One general limitation is that construction methods can usually only guarantee the order of convergence. In practice, low discrepancy can only be achieved if ''N'' is large enough, and for large given ''s'' this minimum ''N'' can be very large. This means running a Monte Carlo analysis with e.g. ''s'' = 20 variables and ''N'' = 1000 points from a low-discrepancy sequence generator may offer only a very minor accuracy improvement.
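A minimal sketch of the first two of these constructions (plain Python; helper names chosen only for illustration): the van der Corput sequence is the radical inverse of the index in a single base, and a Halton sequence simply uses a different prime base for each coordinate.
<syntaxhighlight lang="python">
def radical_inverse(i, base):
    # Reflect the base-b digits of i about the radix point,
    # e.g. i = 6 = 110 in base 2 becomes 0.011 in base 2 = 0.375.
    inv, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        inv += digit / denom
    return inv

# Van der Corput sequence: the radical inverse in one base.
print([radical_inverse(i, 2) for i in range(1, 7)])
# [0.5, 0.25, 0.75, 0.125, 0.625, 0.375]

# Halton sequence: one pairwise-coprime (usually prime) base per coordinate.
print([(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(1, 4)])
# [(0.5, 0.333...), (0.25, 0.666...), (0.75, 0.111...)]
</syntaxhighlight>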
Random numbers
Sequences of quasirandom numbers can be generated from random numbers by imposing a negative correlation on those random numbers. One way to do this is to start with a set of random numbers on