In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an m \times n matrix

:V=\begin{bmatrix} 1 & x_1 & x_1^2 & \dots & x_1^{n-1}\\ 1 & x_2 & x_2^2 & \dots & x_2^{n-1}\\ 1 & x_3 & x_3^2 & \dots & x_3^{n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_m & x_m^2 & \dots & x_m^{n-1} \end{bmatrix},

or

:V_{i,j} = x_i^{j-1}

for all indices i and j. Some authors define the Vandermonde matrix as the transpose of the above matrix.

The determinant of a square Vandermonde matrix is called a ''Vandermonde polynomial'' or ''Vandermonde determinant''. Its value is the polynomial

:\det(V) = \prod_{1\le i<j\le n} (x_j - x_i),

which is non-zero if and only if all x_i are distinct.

The Vandermonde determinant was sometimes called the ''discriminant'', although, presently, the discriminant of a polynomial is the square of the Vandermonde determinant of the roots of the polynomial. The Vandermonde determinant is an alternating form in the x_i, meaning that exchanging two x_i changes the sign, while permuting the x_i by an even permutation does not change the value of the determinant. It thus depends on the choice of an order for the x_i, while its square, the discriminant, does not depend on any order; by Galois theory, this implies that the discriminant is a polynomial function of the coefficients of the polynomial that has the x_i as roots.
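As a quick numerical check (the nodes below are arbitrary sample values chosen for illustration), NumPy's `np.vander` builds this matrix directly, and its determinant matches the product formula:

```python
import numpy as np

# Arbitrary distinct sample nodes x_1, ..., x_4.
x = np.array([1.0, 2.0, 3.0, 5.0])
n = len(x)

# V[i, j] = x_i**j: each row is a geometric progression.
V = np.vander(x, increasing=True)

# The product of (x_j - x_i) over all pairs i < j.
det_formula = np.prod([x[j] - x[i]
                       for i in range(n) for j in range(i + 1, n)])

# det(V) agrees with the closed-form product (here both equal 48).
assert np.isclose(np.linalg.det(V), det_formula)
```

For these nodes the pairwise differences are 1, 2, 4, 1, 3, 2, so the determinant is 48.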


Proofs

The main property of a square n \times n Vandermonde matrix

:V=\begin{bmatrix} 1 & x_1 & x_1^2 & \dots & x_1^{n-1}\\ 1 & x_2 & x_2^2 & \dots & x_2^{n-1}\\ 1 & x_3 & x_3^2 & \dots & x_3^{n-1}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \dots & x_n^{n-1} \end{bmatrix}

is that its determinant has the simple form

:\det(V)=\prod_{1\le i<j\le n} (x_j-x_i).

Three proofs of this equality are given below. The first one uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof does not require any explicit computation, but involves the concepts of the ''determinant of a linear map'' and ''change of basis''. It also provides the structure of the LU decomposition of the Vandermonde matrix. The third one is more elementary but more complicated, using only elementary row and column operations.


Using polynomial properties

By the Leibniz formula, \det(V) is a polynomial in the x_i, with integer coefficients. All entries of the j-th column have total degree j-1. Thus, again by the Leibniz formula, all terms of the determinant have total degree

:0 + 1 + 2 + \cdots + (n-1) = \frac{n(n-1)}{2};

that is, the determinant is a homogeneous polynomial of this degree.

If, for i \neq j, one substitutes x_i for x_j, one gets a matrix with two equal rows, which thus has a zero determinant. Thus, by the factor theorem, x_j-x_i is a divisor of \det(V). By the unique factorization property of multivariate polynomials, the product of all x_j-x_i divides \det(V), that is

:\det(V)=Q\prod_{1\le i<j\le n} (x_j-x_i),

where Q is a polynomial. As the product of all x_j-x_i and \det(V) have the same degree n(n-1)/2, the polynomial Q is, in fact, a constant. This constant is one, because the product of the diagonal entries of V is x_2x_3^2\cdots x_n^{n-1}, which is also the monomial obtained by taking the first term of all factors in \textstyle \prod_{1\le i<j\le n} (x_j-x_i). This proves that

:\det(V)=\prod_{1\le i<j\le n} (x_j-x_i).
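The key step above, that the constant Q equals one, can be verified exactly on sample integer nodes using only the Python standard library; the `det_leibniz` helper below is a hypothetical illustration of the Leibniz formula the proof relies on:

```python
from fractions import Fraction
from itertools import permutations

def det_leibniz(M):
    """Determinant via the Leibniz formula: sum over all permutations s
    of sign(s) * prod_i M[i][s(i)] (illustrative helper, not efficient)."""
    n = len(M)
    total = Fraction(0)
    for perm in permutations(range(n)):
        # The sign of the permutation from its number of inversions.
        inv = sum(perm[i] > perm[j]
                  for i in range(n) for j in range(i + 1, n))
        term = Fraction(1)
        for i in range(n):
            term *= M[i][perm[i]]
        total += (-1) ** inv * term
    return total

# Sample integer nodes; exact rational arithmetic avoids rounding.
x = [Fraction(v) for v in (2, 3, 5, 11)]
n = len(x)
V = [[xi ** j for j in range(n)] for xi in x]

prod = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        prod *= x[j] - x[i]

# det(V) = Q * prod(x_j - x_i) with Q exactly 1.
assert det_leibniz(V) == prod
```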


Using linear maps

Let F be a field containing all x_i, and P_n the F-vector space of the polynomials of degree less than n with coefficients in F. Let

:\varphi:P_n\to F^n

be the linear map defined by

:p(x) \mapsto (p(x_1), \ldots, p(x_n)).

The Vandermonde matrix is the matrix of \varphi with respect to the canonical bases of P_n and F^n.

Changing the basis of P_n amounts to multiplying the Vandermonde matrix by a change-of-basis matrix M (from the right). This does not change the determinant, if the determinant of M is 1.

The polynomials 1, x-x_1, (x-x_1)(x-x_2), \ldots, (x-x_1)(x-x_2)\cdots(x-x_{n-1}) are monic of respective degrees 0, 1, \ldots, n-1. Their matrix on the monomial basis is an upper-triangular matrix U (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of \varphi on this new basis is

:L= \begin{bmatrix} 1 & 0 & 0 & \ldots & 0 \\ 1 & x_2-x_1 & 0 & \ldots & 0 \\ 1 & x_3-x_1 & (x_3-x_1)(x_3-x_2) & \ldots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n-x_1 & (x_n-x_1)(x_n-x_2) & \ldots & (x_n-x_1)(x_n-x_2)\cdots (x_n-x_{n-1}) \end{bmatrix}.

Thus the Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries. This proves the desired equality. Moreover, one gets the LU decomposition of V as

:V=LU^{-1}.
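A numerical sketch of this construction (with arbitrary sample nodes): the columns of U hold the monomial coefficients of the basis polynomials 1, x - x_1, (x - x_1)(x - x_2), …, so L = VU is lower triangular and det(V) is the product of its diagonal entries:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])   # arbitrary distinct sample nodes
n = len(x)
V = np.vander(x, increasing=True)    # V[i, j] = x_i**j

# Columns of U: monomial coefficients of 1, (t - x_1), (t - x_1)(t - x_2), ...
# U is upper triangular with unit diagonal, hence det(U) = 1.
U = np.zeros((n, n))
coeffs = np.array([1.0])             # the constant polynomial 1
for j in range(n):
    U[: len(coeffs), j] = coeffs
    coeffs = np.convolve(coeffs, [-x[j], 1.0])  # multiply by (t - x_j)

# L = V @ U evaluates the new basis at the nodes: lower triangular,
# with diagonal entries (x_i - x_1)(x_i - x_2)...(x_i - x_{i-1}).
L = V @ U
assert np.allclose(L, np.tril(L))
assert np.isclose(np.linalg.det(V), np.prod(np.diag(L)))

# V = L @ inv(U) is an LU decomposition of V.
assert np.allclose(V, L @ np.linalg.inv(U))
```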


By row and column operations

This third proof is based on the fact that adding to a column of a matrix a scalar multiple of another column leaves the determinant unchanged. So, by subtracting from each column (except the first one) the preceding column multiplied by x_1, the determinant is not changed. (These subtractions must be done starting from the last column, so that one always subtracts a column that has not yet been changed.) This gives the matrix

:\begin{bmatrix} 1&0&0&0&\cdots&0\\ 1&x_2-x_1&x_2(x_2-x_1)&x_2^2(x_2-x_1)&\cdots&x_2^{n-2}(x_2-x_1)\\ 1&x_3-x_1&x_3(x_3-x_1)&x_3^2(x_3-x_1)&\cdots&x_3^{n-2}(x_3-x_1)\\ \vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x_n-x_1&x_n(x_n-x_1)&x_n^2(x_n-x_1)&\cdots&x_n^{n-2}(x_n-x_1) \end{bmatrix}.

Applying the Laplace expansion formula along the first row, we obtain \det(V)=\det(B), with

:B=\begin{bmatrix} x_2-x_1&x_2(x_2-x_1)&x_2^2(x_2-x_1)&\cdots&x_2^{n-2}(x_2-x_1)\\ x_3-x_1&x_3(x_3-x_1)&x_3^2(x_3-x_1)&\cdots&x_3^{n-2}(x_3-x_1)\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ x_n-x_1&x_n(x_n-x_1)&x_n^2(x_n-x_1)&\cdots&x_n^{n-2}(x_n-x_1) \end{bmatrix}.

As all the entries in the i-th row of B have a factor of x_{i+1}-x_1, one can take these factors out and obtain

:\det(V)=(x_2-x_1)(x_3-x_1)\cdots(x_n-x_1)\det\begin{bmatrix} 1&x_2&x_2^2&\cdots&x_2^{n-2}\\ 1&x_3&x_3^2&\cdots&x_3^{n-2}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&x_n&x_n^2&\cdots&x_n^{n-2} \end{bmatrix}=\prod_{2\le j\le n}(x_j-x_1)\det(V'),

where V' is a Vandermonde matrix in x_2,\ldots, x_n. Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of \det(V) as the product of all x_j-x_i with i<j.
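The first reduction step and the resulting recursion can be verified numerically (sample nodes chosen arbitrarily):

```python
import numpy as np

x = np.array([2.0, 3.0, 5.0, 7.0])   # arbitrary distinct sample nodes
n = len(x)
V = np.vander(x, increasing=True)

# One step of the reduction: subtract x_1 times the preceding column
# from each column, working from the last column backwards.
W = V.copy()
for j in range(n - 1, 0, -1):
    W[:, j] -= x[0] * W[:, j - 1]

# The first row becomes (1, 0, ..., 0), and the determinant is unchanged.
assert np.allclose(W[0], [1.0] + [0.0] * (n - 1))
assert np.isclose(np.linalg.det(W), np.linalg.det(V))

# Expanding along the first row reduces to a smaller Vandermonde matrix:
# det(V) = prod_{j>1} (x_j - x_1) * det(V') with V' built on x_2, ..., x_n.
Vp = np.vander(x[1:], increasing=True)
assert np.isclose(np.linalg.det(V),
                  np.prod(x[1:] - x[0]) * np.linalg.det(Vp))
```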


Resulting properties

An m \times n rectangular Vandermonde matrix such that m \le n has maximum rank m if and only if all x_i are distinct.

An m \times n rectangular Vandermonde matrix such that m \ge n has maximum rank n if and only if there are n of the x_i that are distinct.

A square Vandermonde matrix is invertible if and only if the x_i are distinct. An explicit formula for the inverse is known.
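These rank statements can be checked numerically; `vander_rect` is a small hypothetical helper that builds the rectangular matrix:

```python
import numpy as np

def vander_rect(x, n):
    """m x n Vandermonde matrix with V[i, j] = x[i]**j (illustrative helper)."""
    x = np.asarray(x, dtype=float)
    return x[:, None] ** np.arange(n)

# m <= n with distinct nodes: full row rank m.
V1 = vander_rect([0.0, 1.0, 2.0], 5)            # 3 x 5
assert np.linalg.matrix_rank(V1) == 3

# Repeating a node creates two equal rows and drops the rank.
V2 = vander_rect([0.0, 1.0, 1.0], 5)
assert np.linalg.matrix_rank(V2) == 2

# m >= n: rank n as soon as n of the nodes are distinct.
V3 = vander_rect([0.0, 1.0, 2.0, 2.0, 2.0], 3)  # 5 x 3, three distinct nodes
assert np.linalg.matrix_rank(V3) == 3
```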


Applications

The Vandermonde matrix ''evaluates'' a polynomial at a set of points; formally, it is the matrix of the linear map that maps the vector of coefficients of a polynomial to the vector of the values of the polynomial at the values appearing in the Vandermonde matrix. The non-vanishing of the Vandermonde determinant for distinct points \alpha_i shows that, for distinct points, the map from coefficients to values at those points is a one-to-one correspondence, and thus that the polynomial interpolation problem is solvable with a unique solution; this result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials.

This may be useful in polynomial interpolation, since inverting the Vandermonde matrix allows expressing the coefficients of the polynomial in terms of the \alpha_i and the values of the polynomial at the \alpha_i. However, the interpolation polynomial is generally easier to compute with the Lagrange interpolation formula, which may also be used for deriving a formula for the inverse of a Vandermonde matrix.

The Vandermonde determinant is used in the representation theory of the symmetric group.

When the values \alpha_k belong to a finite field, the Vandermonde determinant is also called a Moore determinant and has specific properties that are used, for example, in the theory of BCH codes and Reed–Solomon error correction codes.

The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the numbers \alpha_i are chosen to be roots of unity. Using the fast Fourier transform, it is possible to compute the product of a Vandermonde matrix with a vector in O(n(\log n)^2) time (Gauthier, J. "Fast Multipoint Evaluation On n Arbitrary Points." ''Simon Fraser University, Tech. Rep'' (2017)).
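For instance, the DFT matrix can be sketched as the Vandermonde matrix of the n-th roots of unity, so multiplying it by a vector reproduces NumPy's FFT (assuming the sign convention exp(-2πi kj/n) used by `np.fft`):

```python
import numpy as np

n = 8
# The nodes are the n-th roots of unity alpha_k = exp(-2*pi*i*k/n).
omega = np.exp(-2j * np.pi * np.arange(n) / n)

# The DFT matrix as a Vandermonde matrix: F[k, j] = omega_k**j.
F = np.vander(omega, increasing=True)

# Multiplying by F computes the discrete Fourier transform.
v = np.random.default_rng(0).standard_normal(n)
assert np.allclose(F @ v, np.fft.fft(v))
```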
The Laughlin wavefunction with filling factor one (appearing in the quantum Hall effect), by the formula for the Vandermonde determinant, can be seen to be a Slater determinant. This is no longer true for filling factors different from one, i.e., in the fractional quantum Hall effect.

The Vandermonde matrix is the design matrix of polynomial regression.
The Vandermonde determinant also gives the normalized volume of arbitrary k-faces of cyclic polytopes. Specifically, if F = C_k(t_{i_1}, \dots, t_{i_{k+1}}) is a k-face of the cyclic polytope C_d(T) \subset \mathbb{R}^d (where T = \{t_1 < \cdots < t_N\} \subset \mathbb{R}), then

:\mathrm{nvol}(F) = \frac{1}{k!}\prod_{1\le j<\ell\le k+1}(t_{i_\ell} - t_{i_j}).


Confluent Vandermonde matrices

As described before, a Vandermonde matrix describes the linear-algebra interpolation problem of finding the coefficients of a polynomial p(x) of degree n - 1 based on the values p(\alpha_1),\, \ldots,\, p(\alpha_n), where \alpha_1,\, \ldots,\, \alpha_n are ''distinct'' points. If the \alpha_i are not distinct, then this problem does not have a unique solution (which is reflected by the fact that the corresponding Vandermonde matrix is singular). However, if we give the values of the derivatives at the repeated points, then the problem can have a unique solution. For example, the problem

:\begin{cases} p(0) = a \\ p'(0) = b \\ p(1) = c \end{cases}

where p is a polynomial of degree at most 2, has a unique solution for all a, b, c.

In general, suppose that \alpha_1, \alpha_2, \ldots, \alpha_n are (not necessarily distinct) numbers, and suppose for ease of notation that equal values come in continuous sequences in the list. That is,

:\alpha_1 = \cdots = \alpha_{m_1},\ \alpha_{m_1+1} = \cdots = \alpha_{m_2},\ \ldots,\ \alpha_{m_{k-1}+1} = \cdots = \alpha_{m_k},

where m_k = n, m_1 < m_2 < \cdots < m_k, and \alpha_{m_1}, \ldots, \alpha_{m_k} are distinct. Then the corresponding interpolation problem is

:\begin{matrix} p(\alpha_1) = \beta_1, & p'(\alpha_1) = \beta_2, & \ldots, & p^{(m_1-1)}(\alpha_1) = \beta_{m_1}, \\ p(\alpha_{m_1+1}) = \beta_{m_1+1}, & p'(\alpha_{m_1+1})=\beta_{m_1+2}, & \ldots, & p^{(m_2-m_1-1)}(\alpha_{m_1+1}) = \beta_{m_2}, \\ \vdots \\ p(\alpha_{m_{k-1}+1}) = \beta_{m_{k-1}+1}, & p'(\alpha_{m_{k-1}+1}) = \beta_{m_{k-1}+2}, & \ldots, & p^{(m_k-m_{k-1}-1)}(\alpha_{m_{k-1}+1}) = \beta_{m_k}. \end{matrix}

The corresponding matrix for this problem is called a confluent Vandermonde matrix. In our case (which is the general case, up to permuting the rows of the matrix), the formula for it is given as follows: if 1 \leq i,j \leq n, then m_\ell < i \leq m_{\ell+1} for some (unique) 0 \leq \ell \leq k-1 (we set m_0 = 0). Then

:V_{i,j} = \begin{cases} 0, & \text{if } j < i - m_\ell; \\[4pt] \dfrac{(j-1)!}{(j - (i - m_\ell))!}\, \alpha_i^{\,j-(i-m_\ell)}, & \text{if } j \geq i - m_\ell. \end{cases}

This generalization of the Vandermonde matrix makes it non-singular (so that there exists a unique solution to the system of equations), while retaining most properties of the Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows.

Another way to derive this formula is to let some of the \alpha_i go arbitrarily close to each other. For example, if \alpha_1 = \alpha_2, then letting \alpha_2 \to \alpha_1 in the original Vandermonde matrix, the difference between the first and second rows (divided by \alpha_2 - \alpha_1) yields the corresponding row in the confluent Vandermonde matrix. This links the generalized interpolation problem (given values and derivatives at a point) to the original case where all points are distinct: being given p(\alpha), p'(\alpha) is similar to being given p(\alpha), p(\alpha + \varepsilon) where \varepsilon is very small.
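The example above with p(0)=a, p'(0)=b, p(1)=c can be sketched numerically (sample values a, b, c chosen arbitrarily): the confluent Vandermonde matrix has a derivative row in place of a repeated evaluation row:

```python
import numpy as np

# Confluent Vandermonde matrix for p(0)=a, p'(0)=b, p(1)=c,
# with p(x) = c0 + c1*x + c2*x^2 in the monomial basis.
V = np.array([
    [1.0, 0.0, 0.0],   # row for p(0):  evaluate 1, x, x^2 at 0
    [0.0, 1.0, 0.0],   # row for p'(0): evaluate 0, 1, 2x at 0
    [1.0, 1.0, 1.0],   # row for p(1):  evaluate 1, x, x^2 at 1
])

a, b, c = 2.0, -1.0, 4.0
c0, c1, c2 = np.linalg.solve(V, [a, b, c])

# Here p(x) = 2 - x + 3x^2: p(0)=2, p'(0)=-1, p(1)=4.
assert np.isclose(c0, a) and np.isclose(c1, b)
assert np.isclose(c0 + c1 + c2, c)
```

The matrix is non-singular even though the node 0 is "repeated", because the second row carries derivative information instead of a duplicate evaluation.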


See also

* Schur polynomial – a generalization
* Alternant matrix
* Lagrange polynomial
* Wronskian
* List of matrices
* Moore determinant over a finite field
* Vieta's formulas

