In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all elements are random variables. Many important properties of physical systems can be represented mathematically as matrix problems. For example, the thermal conductivity of a lattice can be computed from the dynamical matrix of the particle-particle interactions within the lattice.


Applications


Physics

In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix, and should depend only on the symmetry class of the underlying evolution. In solid-state physics, random matrices model the behaviour of large disordered Hamiltonians in the mean-field approximation. In quantum chaos, the Bohigas–Giannoni–Schmit (BGS) conjecture asserts that the spectral statistics of quantum systems whose classical counterparts exhibit chaotic behaviour are described by random matrix theory. In quantum optics, transformations described by random unitary matrices are crucial for demonstrating the advantage of quantum over classical computation (see, e.g., the boson sampling model). Moreover, such random unitary transformations can be directly implemented in an optical circuit, by mapping their parameters to optical circuit components (that is, beam splitters and phase shifters). Random matrix theory has also found applications to the chiral Dirac operator in quantum chromodynamics, quantum gravity in two dimensions, mesoscopic physics, spin-transfer torque, the fractional quantum Hall effect, Anderson localization, quantum dots, and superconductors.


Mathematical statistics and numerical analysis

In multivariate statistics, random matrices were introduced by John Wishart, who sought to estimate covariance matrices of large samples. Chernoff-, Bernstein-, and Hoeffding-type inequalities can typically be strengthened when applied to the maximal eigenvalue of a finite sum of random Hermitian matrices. In numerical analysis, random matrices have been used since the work of John von Neumann and Herman Goldstine to describe computation errors in operations such as matrix multiplication. Although random entries are traditional "generic" inputs to an algorithm, the concentration of measure associated with random matrix distributions implies that random matrices will not test large portions of an algorithm's input space.


Number theory

In number theory, the distribution of zeros of the Riemann zeta function (and other L-functions) is modeled by the distribution of eigenvalues of certain random matrices. The connection was first discovered by Hugh Montgomery and Freeman J. Dyson. It is connected to the Hilbert–Pólya conjecture.


Theoretical neuroscience

In the field of theoretical neuroscience, random matrices are increasingly used to model the network of synaptic connections between neurons in the brain. Dynamical models of neuronal networks with a random connectivity matrix were shown to exhibit a phase transition to chaos when the variance of the synaptic weights crosses a critical value, in the limit of infinite system size. Results on random matrices have also shown that the dynamics of random-matrix models are insensitive to the mean connection strength. Instead, the stability of fluctuations depends on the variation of connection strengths, and the time to synchrony depends on the network topology.


Optimal control

In optimal control theory, the evolution of ''n'' state variables through time depends at any time on their own values and on the values of ''k'' control variables. With linear evolution, matrices of coefficients appear in the state equation (equation of evolution). In some problems the values of the parameters in these matrices are not known with certainty, in which case there are random matrices in the state equation and the problem is known as one of stochastic control. A key result in the case of linear-quadratic control with stochastic matrices is that the certainty equivalence principle does not apply: while in the absence of multiplier uncertainty (that is, with only additive uncertainty) the optimal policy with a quadratic loss function coincides with what would be decided if the uncertainty were ignored, the optimal policy may differ if the state equation contains random coefficients.


Gaussian ensembles

The most commonly studied random matrix distributions are the Gaussian ensembles.

The Gaussian unitary ensemble \text{GUE}(n) is described by the Gaussian measure with density
: \frac{1}{Z_{\text{GUE}(n)}} e^{-\frac{n}{2} \mathrm{tr}\, H^2}
on the space of n \times n Hermitian matrices H = (H_{ij})_{i,j=1}^n. Here Z_{\text{GUE}(n)} = 2^{n/2} (\pi/n)^{n^2/2} is a normalization constant, chosen so that the integral of the density is equal to one. The term ''unitary'' refers to the fact that the distribution is invariant under unitary conjugation. The Gaussian unitary ensemble models Hamiltonians lacking time-reversal symmetry.

The Gaussian orthogonal ensemble \text{GOE}(n) is described by the Gaussian measure with density
: \frac{1}{Z_{\text{GOE}(n)}} e^{-\frac{n}{4} \mathrm{tr}\, H^2}
on the space of ''n'' × ''n'' real symmetric matrices H = (H_{ij}). Its distribution is invariant under orthogonal conjugation, and it models Hamiltonians with time-reversal symmetry.

The Gaussian symplectic ensemble \text{GSE}(n) is described by the Gaussian measure with density
: \frac{1}{Z_{\text{GSE}(n)}} e^{-n\, \mathrm{tr}\, H^2}
on the space of ''n'' × ''n'' Hermitian quaternionic matrices, i.e. self-adjoint square matrices with quaternionic entries, H = (H_{ij}). Its distribution is invariant under conjugation by the symplectic group, and it models Hamiltonians with time-reversal symmetry but no rotational symmetry.

The Gaussian ensembles GOE, GUE and GSE are often denoted by their Dyson index, ''β'' = 1 for GOE, ''β'' = 2 for GUE, and ''β'' = 4 for GSE. This index counts the number of real components per matrix element. The ensembles as defined here have Gaussian distributed matrix elements with mean \langle H_{ij} \rangle = 0, and two-point correlations given by
: \langle H_{ij} H_{mn}^* \rangle = \langle H_{ij} H_{nm} \rangle = \frac{1}{n} \delta_{im} \delta_{jn} + \frac{2 - \beta}{\beta n} \delta_{in} \delta_{jm} ,
from which all higher correlations follow by Isserlis' theorem.

The joint probability density for the eigenvalues \lambda_1, \lambda_2, \ldots, \lambda_n of GUE/GOE/GSE is given by
: \frac{1}{Z_{\beta, n}} \prod_{i=1}^n e^{-\frac{\beta n}{4} \lambda_i^2} \prod_{i < j} \left| \lambda_j - \lambda_i \right|^\beta ~, \quad (1)
where Z_{\beta,n} is a normalization constant which can be explicitly computed; see Selberg integral. In the case of GUE (''β'' = 2), the formula (1) describes a determinantal point process. Eigenvalues repel, as the joint probability density has a zero (of \betath order) for coinciding eigenvalues \lambda_j = \lambda_i. The distribution of the largest eigenvalue of GOE, GUE and Wishart matrices of finite dimension is also known explicitly.
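To make the definition concrete, the following is a minimal Python sketch (assuming NumPy; sample_gue is an illustrative name, not a library function) that draws a matrix from GUE(n) with the density above, i.e. with \langle |H_{ij}|^2 \rangle = 1/n:

    import numpy as np

    def sample_gue(n, rng=None):
        # Draw H from GUE(n) with density proportional to exp(-(n/2) tr H^2):
        # i.i.d. complex Gaussian entries, symmetrized to be Hermitian, so that
        # <|H_ij|^2> = 1/n off the diagonal and <H_ii^2> = 1/n on it.
        rng = np.random.default_rng() if rng is None else rng
        scale = 1.0 / np.sqrt(2.0 * n)
        a = (rng.normal(scale=scale, size=(n, n))
             + 1j * rng.normal(scale=scale, size=(n, n)))
        return (a + a.conj().T) / np.sqrt(2.0)

    h = sample_gue(500)
    eig = np.linalg.eigvalsh(h)      # eigenvalues of a Hermitian matrix are real
    print(eig.min(), eig.max())      # concentrated in [-2, 2] for large n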


Distribution of level spacings

From the ordered sequence of eigenvalues \lambda_1 < \ldots < \lambda_n < \lambda_{n+1} < \ldots, one defines the normalized spacings s = (\lambda_{n+1} - \lambda_n)/\langle s \rangle, where \langle s \rangle = \langle \lambda_{n+1} - \lambda_n \rangle is the mean spacing. The probability distribution of spacings is approximately given by
: p_1(s) = \frac{\pi}{2} s\, \mathrm{e}^{-\frac{\pi}{4} s^2}
for the orthogonal ensemble GOE (\beta = 1),
: p_2(s) = \frac{32}{\pi^2} s^2 \mathrm{e}^{-\frac{4}{\pi} s^2}
for the unitary ensemble GUE (\beta = 2), and
: p_4(s) = \frac{2^{18}}{3^6 \pi^3} s^4 \mathrm{e}^{-\frac{64}{9\pi} s^2}
for the symplectic ensemble GSE (\beta = 4). The numerical constants are such that p_\beta(s) is normalized:
: \int_0^\infty ds\, p_\beta(s) = 1
and the mean spacing is
: \int_0^\infty ds\, s\, p_\beta(s) = 1,
for \beta = 1, 2, 4.
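As a quick numerical check of the \beta = 2 formula (a sketch, reusing the sample_gue helper above; only bulk eigenvalues are kept, where the spectral density is roughly constant, so no careful unfolding is needed):

    import numpy as np

    eig = np.sort(np.linalg.eigvalsh(sample_gue(1000)))
    bulk = eig[400:600]        # central eigenvalues, away from the spectral edges
    s = np.diff(bulk)
    s = s / s.mean()           # normalized spacings with mean 1
    # Under p_2(s) = (32/pi^2) s^2 exp(-4 s^2/pi) the second moment is 3*pi/8.
    print(np.mean(s**2), 3 * np.pi / 8)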


Generalizations

''Wigner matrices'' are random Hermitian matrices \textstyle H_n = (H_n(i,j))_{i,j=1}^n such that the entries
: \left\{ H_n(i,j) ~, \, 1 \le i \le j \le n \right\}
above the main diagonal are independent random variables with zero mean and identical second moments. ''Invariant matrix ensembles'' are random Hermitian matrices with density on the space of real symmetric/Hermitian/quaternionic Hermitian matrices of the form
: \textstyle \frac{1}{Z_n} e^{-n\, \mathrm{tr}\, V(H)} ~,
where the function ''V'' is called the potential. The Gaussian ensembles are the only common special cases of these two classes of random matrices.


Spectral theory of random matrices

The spectral theory of random matrices studies the distribution of the eigenvalues as the size of the matrix goes to infinity.


Global regime

In the ''global regime'', one is interested in the distribution of linear statistics of the form N_{f,H} = n^{-1}\, \mathrm{tr}\, f(H).


Empirical spectral measure

The ''empirical spectral measure'' \mu_H of ''H'' is defined by
: \mu_H(A) = \frac{1}{n} \, \# \left\{ \text{eigenvalues of } H \text{ in } A \right\} = N_{1_A, H}, \quad A \subset \mathbb{R}.
Usually, the limit of \mu_H is a deterministic measure; this is a particular case of self-averaging. The cumulative distribution function of the limiting measure is called the integrated density of states and is denoted ''N''(''λ''). If the integrated density of states is differentiable, its derivative is called the density of states and is denoted ''ρ''(''λ''). The limit of the empirical spectral measure for Wigner matrices was described by Eugene Wigner; see Wigner semicircle distribution and Wigner surmise. As far as sample covariance matrices are concerned, a theory was developed by Marčenko and Pastur. The limit of the empirical spectral measure of invariant matrix ensembles is described by a certain integral equation which arises from potential theory.
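A minimal numerical sketch of this self-averaging (assuming NumPy; Rademacher ±1 entries are one admissible choice of Wigner matrix, and the limiting density is the semicircle \rho(\lambda) = \sqrt{4 - \lambda^2}/(2\pi)):

    import numpy as np

    n = 2000
    rng = np.random.default_rng(0)
    x = rng.choice([-1.0, 1.0], size=(n, n))    # Rademacher entries
    h = np.triu(x) + np.triu(x, 1).T            # real symmetric Wigner matrix
    eig = np.linalg.eigvalsh(h) / np.sqrt(n)    # rescale entries to variance 1/n
    hist, edges = np.histogram(eig, bins=50, range=(-2, 2), density=True)
    mid = (edges[:-1] + edges[1:]) / 2
    print(np.max(np.abs(hist - np.sqrt(4 - mid**2) / (2 * np.pi))))  # small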


Fluctuations

For the linear statistics N_{f,H} = n^{-1} \sum_j f(\lambda_j), one is also interested in the fluctuations about \int f(\lambda)\, dN(\lambda). For many classes of random matrices, a central limit theorem of the form
: \frac{N_{f,H} - \mathbf{E}\, N_{f,H}}{\sqrt{\mathrm{Var}\, N_{f,H}}} \;\overset{D}{\longrightarrow}\; N(0, 1)
is known.
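A small Monte Carlo sketch of the phenomenon behind such results (reusing the sample_gue helper above, with f(x) = x^3 so that n N_{f,H} = \mathrm{tr}\, H^3): the variance of \mathrm{tr}\, f(H) stays bounded as n grows, instead of growing linearly in n as it would for a sum of n independent terms.

    import numpy as np

    def var_trace_h3(n, samples, rng):
        # Variance of tr H^3 = sum_j lambda_j^3 over independent GUE draws.
        vals = [np.trace(h @ h @ h).real
                for h in (sample_gue(n, rng) for _ in range(samples))]
        return np.var(vals)

    rng = np.random.default_rng(2)
    for n in (50, 100, 200):
        print(n, var_trace_h3(n, 200, rng))   # roughly constant in n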


Local regime

In the ''local regime'', one is interested in the spacings between eigenvalues, and, more generally, in the joint distribution of eigenvalues in an interval of length of order 1/''n''. One distinguishes between ''bulk statistics'', pertaining to intervals inside the support of the limiting spectral measure, and ''edge statistics'', pertaining to intervals near the boundary of the support.


Bulk statistics

Formally, fix \lambda_0 in the interior of the support of N(\lambda). Then consider the point process
: \Xi(\lambda_0) = \sum_j \delta\Big( \cdot - n \rho(\lambda_0) (\lambda_j - \lambda_0) \Big)~,
where \lambda_j are the eigenvalues of the random matrix. The point process \Xi(\lambda_0) captures the statistical properties of eigenvalues in the vicinity of \lambda_0. For the Gaussian ensembles, the limit of \Xi(\lambda_0) is known; thus, for GUE it is a determinantal point process with the kernel
: K(x, y) = \frac{\sin \pi(x - y)}{\pi(x - y)}
(the ''sine kernel''). The ''universality'' principle postulates that the limit of \Xi(\lambda_0) as n \to \infty should depend only on the symmetry class of the random matrix (and neither on the specific model of random matrices nor on \lambda_0). Rigorous proofs of universality are known for invariant matrix ensembles and Wigner matrices.
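Since the limit is determinantal, its k-point correlation functions are k × k determinants of the kernel; in particular, the two-point function is 1 - \left( \frac{\sin \pi(x-y)}{\pi(x-y)} \right)^2, which vanishes as y \to x. A minimal sketch (assuming NumPy; np.sinc(d) equals \sin(\pi d)/(\pi d)):

    import numpy as np

    def sine_kernel(x, y):
        return np.sinc(x - y)    # sin(pi(x-y)) / (pi(x-y)), equal to 1 at x = y

    def r2(x, y):
        # two-point correlation as a 2x2 determinant of the sine kernel
        k = np.array([[sine_kernel(x, x), sine_kernel(x, y)],
                      [sine_kernel(y, x), sine_kernel(y, y)]])
        return np.linalg.det(k)

    for d in (0.01, 0.1, 0.5, 1.0):
        print(d, r2(0.0, d))     # tends to 0 as d -> 0: eigenvalue repulsion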


Edge statistics

:''See Tracy–Widom distribution.''


Correlation functions

The joint probability density of the eigenvalues of n \times n random Hermitian matrices M \in \mathbf{H}^{n \times n}, with partition functions of the form
: Z_n = \int_{\mathbf{H}^{n \times n}} d\mu_0(M)\, e^{-\mathrm{tr}\, V(M)}
where
: V(x) := \sum_{j=1}^\infty v_j x^j
and d\mu_0(M) is the standard Lebesgue measure on the space \mathbf{H}^{n \times n} of Hermitian n \times n matrices, is given by
: p_{n,V}(x_1, \dots, x_n) = \frac{1}{Z_{n,V}} \prod_{i<j} (x_i - x_j)^2\, e^{-\sum_{i=1}^n V(x_i)}.
The k-point correlation functions (or ''marginal distributions'') are defined as
: R^{(k)}_{n,V}(x_1, \dots, x_k) = \frac{n!}{(n-k)!} \int_{\mathbf{R}} dx_{k+1} \cdots \int_{\mathbf{R}} dx_n \, p_{n,V}(x_1, x_2, \dots, x_n),
which are symmetric functions of their variables. In particular, the one-point correlation function, or ''density of states'', is
: R^{(1)}_{n,V}(x_1) = n \int_{\mathbf{R}} dx_2 \cdots \int_{\mathbf{R}} dx_n \, p_{n,V}(x_1, x_2, \dots, x_n).
Its integral over a Borel set B \subset \mathbf{R} gives the expected number of eigenvalues contained in B:
: \int_B R^{(1)}_{n,V}(x) dx = \mathbf{E}\left( \# \{ \text{eigenvalues in } B \} \right).
The following result expresses these correlation functions as determinants of the matrices formed from evaluating the appropriate integral kernel at the pairs (x_i, x_j) of points appearing within the correlator.

Theorem (Dyson–Mehta): For any k, 1 \le k \le n, the k-point correlation function R^{(k)}_{n,V} can be written as a determinant
: R^{(k)}_{n,V}(x_1, x_2, \dots, x_k) = \det_{1 \le i,j \le k} \left( K_{n,V}(x_i, x_j) \right),
where K_{n,V}(x, y) is the nth Christoffel–Darboux kernel
: K_{n,V}(x, y) := \sum_{k=0}^{n-1} \psi_k(x) \psi_k(y),
associated to V, written in terms of the quasipolynomials
: \psi_k(x) = \frac{1}{\sqrt{h_k}}\, p_k(x)\, e^{-V(x)/2} ,
where \{ p_k(x) \}_{k \in \mathbf{N}} is a complete sequence of monic polynomials, of the degrees indicated, satisfying the orthogonality conditions
: \int_{\mathbf{R}} \psi_j(x) \psi_k(x) dx = \delta_{jk}.
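For the Gaussian potential V(x) = x^2 the monic orthogonal polynomials p_k are rescaled Hermite polynomials, with norms h_k = \sqrt{\pi}\, k! / 2^k, so the kernel can be evaluated explicitly. A minimal sketch (assuming NumPy; the function names are illustrative):

    import numpy as np
    from math import factorial, pi, sqrt
    from numpy.polynomial.hermite import hermval

    def psi(k, x):
        # Orthonormal quasipolynomial for V(x) = x^2:
        # psi_k(x) = H_k(x) exp(-x^2/2) / sqrt(sqrt(pi) 2^k k!)
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0
        return hermval(x, coeffs) * np.exp(-x * x / 2.0) \
            / sqrt(sqrt(pi) * 2.0**k * factorial(k))

    def cd_kernel(n, x, y):
        # nth Christoffel-Darboux kernel: sum_{k < n} psi_k(x) psi_k(y)
        return sum(psi(k, x) * psi(k, y) for k in range(n))

    # The diagonal K_{n,V}(x, x) is the density of states and integrates to n.
    xs = np.linspace(-8.0, 8.0, 1601)
    density = np.array([cd_kernel(20, x, x) for x in xs])
    print(np.trapz(density, xs))   # close to 20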


Other classes of random matrices


Wishart matrices

''Wishart matrices'' are ''n × n'' random matrices of the form ''H'' = ''X'' ''X''*, where ''X'' is an ''n × m'' random matrix (''m'' ≥ ''n'') with independent entries, and ''X''* is its conjugate transpose. In the important special case considered by Wishart, the entries of ''X'' are identically distributed Gaussian random variables (either real or complex). The limit of the empirical spectral measure of Wishart matrices was found by Vladimir Marchenko and Leonid Pastur.
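A short numerical sketch of the Marchenko–Pastur law (assuming NumPy; X has i.i.d. standard Gaussian entries, and H = XX^*/m is normalized by m so that, for aspect ratio \lambda = n/m \le 1, the limiting spectrum fills the interval [(1-\sqrt{\lambda})^2, (1+\sqrt{\lambda})^2]):

    import numpy as np

    n, m = 500, 2000                        # aspect ratio lambda = n/m = 0.25
    rng = np.random.default_rng(1)
    x = rng.normal(size=(n, m))
    h = x @ x.T / m                         # normalized real Wishart matrix
    eig = np.linalg.eigvalsh(h)
    lam = n / m
    lo, hi = (1 - np.sqrt(lam))**2, (1 + np.sqrt(lam))**2
    print(eig.min(), eig.max(), (lo, hi))   # extremes approach the MP edges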


Random unitary matrices

:''See circular ensembles.''


Non-Hermitian random matrices

:''See circular law.''

