Polar Factorization Theorem
In optimal transport, a branch of mathematics, polar factorization of vector fields is a basic result due to Brenier (1987), with antecedents by Knott and Smith (1984) and Rachev (1985), which generalizes many existing results, among them the polar decomposition of real matrices and the rearrangement of real-valued functions. The theorem. ''Notation.'' Denote by \xi_\#\mu the image measure (pushforward) of \mu through the map \xi. ''Definition: Measure-preserving map.'' Let (X,\mu) and (Y,\nu) be probability spaces and \sigma : X \to Y a map. Then \sigma is said to be measure preserving if for every \nu-measurable subset \Omega of Y, the preimage \sigma^{-1}(\Omega) is \mu-measurable and \mu(\sigma^{-1}(\Omega)) = \nu(\Omega); equivalently, \textstyle \int_X f \circ \sigma \, d\mu = \int_Y f \, d\nu for every \nu-integrable f (in which case f \circ \sigma is \mu-integrable). ''Theorem.'' Consider a map \xi : \Omega \to \mathbb{R}^d, where \Omega is a convex subset of \mathbb{R}^d, and \mu a measure on \Omega which is absolutely continuous ...
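The measure-preserving condition above lends itself to a quick numerical check. The following is a minimal Monte Carlo sketch, taking \mu = \nu to be the uniform measure on [0, 1] and \sigma(x) = 1 - x, a map that preserves it; the map and the test integrand are illustrative choices, not taken from the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)

# sigma(x) = 1 - x maps [0, 1] to itself and preserves the uniform
# (Lebesgue) measure, so for any integrable f we expect
#   int f(sigma(x)) dmu(x)  ~=  int f(y) dnu(y),   with nu = sigma_# mu = mu.
sigma = lambda x: 1.0 - x
f = lambda y: np.cos(3.0 * y) + y**2   # an arbitrary test integrand

x = rng.uniform(0.0, 1.0, size=1_000_000)   # samples from mu
lhs = f(sigma(x)).mean()                    # Monte Carlo estimate of int f(sigma) dmu
rhs = f(x).mean()                           # Monte Carlo estimate of int f dnu
print(lhs, rhs)   # the two estimates agree up to ~1e-3 sampling error
```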
Transportation Theory (mathematics)
In mathematics and economics, transportation theory or transport theory is the name given to the study of optimal transportation and allocation of resources. The problem was formalized by the French mathematician Gaspard Monge in 1781 (G. Monge, ''Mémoire sur la théorie des déblais et des remblais'', Histoire de l'Académie Royale des Sciences de Paris, avec les Mémoires de Mathématique et de Physique pour la même année, pages 666–704, 1781). In the 1920s, A.N. Tolstoi was one of the first to study the transportation problem mathematically. In 1930, in the collection ''Transportation Planning Volume I'' for the National Commissariat of Transportation of the Soviet Union, he published a paper, "Methods of Finding the Minimal Kilometrage in Cargo-transportation in space". Major advances were made in the field during World War II by the Soviet mathematician and economist Leonid Kantorovich (L. Kantorovich, ''On the translocation of masses'', C.R. (Doklady) Acad. Sci. URSS (N.S.) ...
Polar Decomposition
In mathematics, the polar decomposition of a square real or complex matrix A is a factorization of the form A = UP, where U is an orthogonal matrix and P is a positive semi-definite symmetric matrix (U is a unitary matrix and P is a positive semi-definite Hermitian matrix in the complex case), both square and of the same size. Intuitively, if a real n \times n matrix A is interpreted as a linear transformation of n-dimensional space \mathbb{R}^n, the polar decomposition separates it into a rotation or reflection U of \mathbb{R}^n and a scaling of the space along a set of n orthogonal axes. The polar decomposition of a square matrix A always exists. If A is invertible, the decomposition is unique, and the factor P will be positive-definite. In that case, A can be written uniquely in the form A = U e^X, where U is unitary and X is the unique self-adjoint logarithm of the matrix P. This decomposition is useful in computing the fundamental group of (matrix) Lie groups. The po ...
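As a numerical illustration, SciPy's scipy.linalg.polar computes this factorization; the sketch below (the random test matrix is an arbitrary choice) verifies the defining properties of the two factors:

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

U, P = polar(A)   # right polar decomposition: A = U @ P

print(np.allclose(A, U @ P))                 # True: the factors reproduce A
print(np.allclose(U.T @ U, np.eye(3)))       # True: U is orthogonal
print(np.allclose(P, P.T))                   # True: P is symmetric
print((np.linalg.eigvalsh(P) >= -1e-12).all())  # True: P is positive semi-definite
```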
Identity Map
(Figure: graph of the identity function on the real numbers.) In mathematics, an identity function, also called an identity relation, identity map or identity transformation, is a function that always returns the value that was used as its argument, unchanged. That is, when f is the identity function, the equality f(x) = x is true for all values of x to which f can be applied. Definition Formally, if X is a set, the identity function on X is defined to be a function with X as its domain and codomain, satisfying f(x) = x for all elements x in X. In other words, the function value f(x) in the codomain X is always the same as the input element x in the domain X. The identity function on X is clearly an injective function as well as a surjective function, so it is bijective. The identity function on X is often denoted by \mathrm{id}_X. In set theory, where a function is defined as a particular kind of binary relation, the identity function is given by the identity relation, or ''diagonal'', of X. Algebraic properties If f : X \to Y is any function, then we have f \circ \mathrm{id}_X = f = \mathrm{id}_Y \circ f ...
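A minimal Python sketch of the definition and of the composition property just stated; the helper names identity and square are illustrative:

```python
from typing import TypeVar

T = TypeVar("T")

def identity(x: T) -> T:
    """The identity function: returns its argument unchanged."""
    return x

square = lambda x: x * x

# Composing with the identity on either side leaves a function unchanged:
#   (f . id)(x) == f(x) == (id . f)(x)
assert square(identity(4)) == square(4) == identity(square(4))
```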
Helmholtz Decomposition
In physics and mathematics, in the area of vector calculus, Helmholtz's theorem, also known as the fundamental theorem of vector calculus, states that any sufficiently smooth, rapidly decaying vector field in three dimensions can be resolved into the sum of an irrotational (curl-free) vector field and a solenoidal (divergence-free) vector field; this is known as the Helmholtz decomposition or Helmholtz representation. It is named after Hermann von Helmholtz. As an irrotational vector field has a scalar potential and a solenoidal vector field has a vector potential, the Helmholtz decomposition states that a vector field (satisfying appropriate smoothness and decay conditions) can be decomposed as a sum of the form -\nabla \phi + \nabla \times \mathbf{A}, where \phi is a scalar field called the "scalar potential" and \mathbf{A} is a vector field called the "vector potential". Statement of the theorem Let \mathbf{F} be a vector field on a bounded domain V \subseteq \mathbb{R}^3 which is twice continuously ...
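For fields sampled on a periodic grid, the decomposition can be computed with the FFT by projecting each Fourier mode onto its wavevector. The sketch below assumes periodic boundary conditions and an arbitrary random test field; it is one standard numerical approach, not the construction used in the theorem's proof:

```python
import numpy as np

n = 32
k1 = np.fft.fftfreq(n) * n                  # integer wavenumbers
kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
k = np.stack([kx, ky, kz])                  # shape (3, n, n, n)
k2 = (k**2).sum(axis=0)
k2[0, 0, 0] = 1.0                           # avoid 0/0; the k = 0 vector is zero anyway

rng = np.random.default_rng(2)
F = rng.standard_normal((3, n, n, n))       # arbitrary periodic test field
F_hat = np.fft.fftn(F, axes=(1, 2, 3))

# Curl-free part: projection of each mode onto k, i.e. k (k . F_hat) / |k|^2.
proj = (k * F_hat).sum(axis=0) / k2
F_irr_hat = k * proj                        # irrotational component
F_sol_hat = F_hat - F_irr_hat               # solenoidal remainder

F_irr = np.real(np.fft.ifftn(F_irr_hat, axes=(1, 2, 3)))
F_sol = np.real(np.fft.ifftn(F_sol_hat, axes=(1, 2, 3)))
print(np.allclose(F, F_irr + F_sol))                     # True: F = F_irr + F_sol
print(np.allclose((k * F_sol_hat).sum(axis=0), 0.0))     # True: k . F_sol_hat = 0 (divergence-free)
```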
Orthogonal Matrix
In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors. One way to express this is Q^\mathrm{T} Q = Q Q^\mathrm{T} = I, where Q^\mathrm{T} is the transpose of Q and I is the identity matrix. This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse: Q^\mathrm{T} = Q^{-1}, where Q^{-1} is the inverse of Q. An orthogonal matrix Q is necessarily invertible (with inverse Q^{-1} = Q^\mathrm{T}), unitary (Q^{-1} = Q^*), where Q^* is the Hermitian adjoint (conjugate transpose) of Q, and therefore normal (Q^* Q = Q Q^*) over the real numbers. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation. The set of n \times n orthogonal matrices, under multiplication, forms the group O(n), known as the orthogonal group ...
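A small numerical check of these characterizations, using a 2 × 2 rotation matrix as the example (the angle is an arbitrary choice):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a rotation matrix

print(np.allclose(Q.T @ Q, np.eye(2)))      # True: Q^T Q = I
print(np.allclose(Q.T, np.linalg.inv(Q)))   # True: transpose equals inverse
print(np.isclose(np.linalg.det(Q), 1.0))    # det is +1 (a rotation, not a reflection)

# Orthogonal matrices preserve inner products (and hence lengths and angles):
u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
print(np.isclose(u @ v, (Q @ u) @ (Q @ v)))  # True
```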
Definite Matrix
In mathematics, a symmetric matrix M with real entries is positive-definite if the real number z^\mathsf{T} M z is positive for every nonzero real column vector z, where z^\mathsf{T} is the transpose of z. More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number z^* M z is positive for every nonzero complex column vector z, where z^* denotes the conjugate transpose of z. Positive semi-definite matrices are defined similarly, except that the scalars z^\mathsf{T} M z and z^* M z are required to be positive ''or zero'' (that is, nonnegative). Negative-definite and negative semi-definite matrices are defined analogously. A matrix that is not positive semi-definite and not negative semi-definite is sometimes called indefinite. A matrix is thus positive-definite if and only if it is the matrix of a positive-definite quadratic form or Hermitian form. In other words, a matrix is positive-definite if and only if it defi ...
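In practice, positive-definiteness of a real symmetric matrix is often tested via the Cholesky factorization, which exists exactly when the matrix is positive-definite. A minimal sketch (the helper name and the test matrices are illustrative):

```python
import numpy as np

def is_positive_definite(M: np.ndarray) -> bool:
    """Test a real symmetric matrix for positive-definiteness.

    np.linalg.cholesky succeeds exactly when M is (numerically)
    positive-definite; it reads only the lower triangle, so M is
    assumed symmetric.
    """
    try:
        np.linalg.cholesky(M)
        return True
    except np.linalg.LinAlgError:
        return False

A = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3: positive-definite
B = np.array([[1.0,  2.0], [ 2.0, 1.0]])   # eigenvalues 3 and -1: indefinite
print(is_positive_definite(A), is_positive_definite(B))   # True False

# Equivalently, z^T A z > 0 for nonzero z:
z = np.random.default_rng(3).standard_normal(2)
print(z @ A @ z > 0)   # True
```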
Symmetric Matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, A = A^\mathsf{T}. Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if a_{ij} denotes the entry in the i-th row and j-th column, then a_{ij} = a_{ji} for all indices i and j. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. In linear algebra, a real symmetric matrix represents a self-adjoint operator represented in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric ...
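A short sketch of these facts in NumPy (the matrices are arbitrary examples); np.linalg.eigh assumes symmetry and returns the real eigenvalues and orthonormal eigenvectors that the self-adjointness guarantees:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 4.0, 5.0]])

print(np.allclose(A, A.T))               # True: A equals its transpose

w, V = np.linalg.eigh(A)                 # eigendecomposition for symmetric matrices
print(np.isrealobj(w))                   # True: eigenvalues are real
print(np.allclose(V.T @ V, np.eye(3)))   # True: eigenvectors are orthonormal

# Any square matrix splits into symmetric and skew-symmetric parts:
M = np.arange(9.0).reshape(3, 3)
S, K = (M + M.T) / 2, (M - M.T) / 2
print(np.allclose(M, S + K))             # True
```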
Normal Distribution
In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is : f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}. The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma is its standard deviation. The variance of the distribution is \sigma^2. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution ...
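The density formula translates directly into code, and the central-limit behaviour can be checked empirically; in the sketch below, the sample sizes and the use of uniform summands are illustrative choices:

```python
import math
import numpy as np

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Normal density, evaluated straight from the formula above."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

print(normal_pdf(0.0))   # 0.3989... = 1 / sqrt(2 * pi)

# Central-limit check: averages of 48 uniform samples are approximately
# normal with mean 0.5 and standard deviation sqrt(1/12) / sqrt(48).
rng = np.random.default_rng(4)
means = rng.uniform(0.0, 1.0, size=(100_000, 48)).mean(axis=1)
print(means.mean(), means.std())   # ~0.5 and ~0.0417
```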
Continuous Uniform Distribution
In probability theory and statistics, the continuous uniform distribution or rectangular distribution is a family of symmetric probability distributions. The distribution describes an experiment where there is an arbitrary outcome that lies between certain bounds. The bounds are defined by the parameters ''a'' and ''b'', which are the minimum and maximum values. The interval can either be closed (e.g. [a, b]) or open (e.g. (a, b)). Therefore, the distribution is often abbreviated ''U''(''a'', ''b''), where U stands for uniform distribution. The difference between the bounds defines the interval length; all intervals of the same length on the distribution's support are equally probable. It is the maximum entropy probability distribution for a random variable ''X'' under no constraint other than that it is contained in the distribution's support. Definitions Probability density function The probability density function of the continuous uniform distribution is : f(x) = \begin{cases} \frac{1}{b-a} & \text{for } a \le x \le b, \\ 0 & \text{otherwise}. \end{cases} ...
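A quick empirical check of the closed-form moments and of the equal-probability property for equal-length subintervals; the bounds a, b and the subintervals are arbitrary choices:

```python
import numpy as np

a, b = 2.0, 5.0
rng = np.random.default_rng(5)
x = rng.uniform(a, b, size=1_000_000)

# Sample moments match mean = (a + b) / 2 and var = (b - a)^2 / 12:
print(x.mean(), (a + b) / 2)        # ~3.5 vs 3.5
print(x.var(), (b - a) ** 2 / 12)   # ~0.75 vs 0.75

# Equal-length subintervals of [a, b] are equally probable:
p1 = np.mean((2.0 <= x) & (x < 2.5))
p2 = np.mean((4.2 <= x) & (x < 4.7))
print(p1, p2)   # both ~ 0.5 / (b - a) = 1/6
```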
Lebesgue Measure
In measure theory, a branch of mathematics, the Lebesgue measure, named after French mathematician Henri Lebesgue, is the standard way of assigning a measure to subsets of ''n''-dimensional Euclidean space. For ''n'' = 1, 2, or 3, it coincides with the standard measure of length, area, or volume. In general, it is also called ''n''-dimensional volume, ''n''-volume, or simply volume. It is used throughout real analysis, in particular to define Lebesgue integration. Sets that can be assigned a Lebesgue measure are called Lebesgue-measurable; the measure of the Lebesgue-measurable set ''A'' is here denoted by ''λ''(''A''). Henri Lebesgue described this measure in the year 1901, followed the next year by his description of the Lebesgue integral. Both were published as part of his dissertation in 1902. Definition For any interval I = [a, b], or I = (a, b), in the set \mathbb{R} of real numbers, let \ell(I) = b - a denote its length. For any subset E \subseteq \mathbb{R}, the Lebesg ...
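The 2-dimensional Lebesgue measure (area) of a simple set can be estimated numerically; the sketch below, an illustrative Monte Carlo computation rather than anything from the definition above, samples uniformly from the square [-1, 1]^2 (whose measure is 4) and counts the fraction of points landing in the unit disk:

```python
import numpy as np

rng = np.random.default_rng(6)
pts = rng.uniform(-1.0, 1.0, size=(1_000_000, 2))
inside = (pts**2).sum(axis=1) <= 1.0   # points inside the unit disk

area = 4.0 * inside.mean()             # fraction of the square's measure
print(area, np.pi)                     # ~3.1416: lambda(disk) = pi * r^2 with r = 1
```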