Vandermonde Matrices

In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an (m + 1) \times (n + 1) matrix

:V = V(x_0, x_1, \cdots, x_m) = \begin{pmatrix}
1 & x_0 & x_0^2 & \dots & x_0^n\\
1 & x_1 & x_1^2 & \dots & x_1^n\\
1 & x_2 & x_2^2 & \dots & x_2^n\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_m & x_m^2 & \dots & x_m^n
\end{pmatrix}

with entries V_{ij} = x_i^j, the ''j''th power of the number x_i, for all zero-based indices i and j. Some authors define the Vandermonde matrix as the transpose of the above matrix.

The determinant of a square Vandermonde matrix (when n = m) is called a Vandermonde determinant or Vandermonde polynomial. Its value is

:\det(V) = \prod_{0 \le i < j \le n} (x_j - x_i).

This is non-zero if and only if all x_i are distinct (no two are equal), making the Vandermonde matrix invertible.
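A minimal numerical sketch of this definition and the determinant formula, assuming NumPy is available (the sample points are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from itertools import combinations

x = np.array([1.0, 2.0, 3.0, 5.0])               # arbitrary distinct sample points
V = np.vander(x, increasing=True)                 # rows (1, x_i, x_i^2, x_i^3)

# det(V) should equal the product of (x_j - x_i) over all pairs i < j
det_formula = np.prod([x[j] - x[i] for i, j in combinations(range(len(x)), 2)])
assert np.isclose(np.linalg.det(V), det_formula)
</syntaxhighlight>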


Applications

The polynomial interpolation problem is to find a polynomial p(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n which satisfies p(x_0)=y_0, \ldots, p(x_m)=y_m for given data points (x_0,y_0),\ldots,(x_m,y_m). This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix, as follows. V computes the values of p(x) at the points x = x_0,\ x_1, \dots,\ x_m via a matrix multiplication Va = y, where a = (a_0,\ldots,a_n) is the vector of coefficients and y = (y_0,\ldots,y_m) = (p(x_0),\ldots,p(x_m)) is the vector of values (both written as column vectors):

:\begin{pmatrix}
1 & x_0 & x_0^2 & \dots & x_0^n\\
1 & x_1 & x_1^2 & \dots & x_1^n\\
1 & x_2 & x_2^2 & \dots & x_2^n\\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_m & x_m^2 & \dots & x_m^n
\end{pmatrix}
\cdot
\begin{pmatrix} a_0\\ a_1\\ \vdots\\ a_n \end{pmatrix}
=
\begin{pmatrix} p(x_0)\\ p(x_1)\\ \vdots\\ p(x_m) \end{pmatrix}.

If n = m and x_0, \dots,\ x_n are distinct, then ''V'' is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given ''V'' and ''y'', one can find the required p(x) by solving for its coefficients a in the equation Va = y:

:a = V^{-1}y.

That is, the map from coefficients to values of polynomials is a bijective linear mapping with matrix ''V'', and the interpolation problem has a unique solution. This result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials.
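A minimal sketch of solving the interpolation problem through Va = y, assuming NumPy and made-up data points:

<syntaxhighlight lang="python">
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])               # distinct nodes (made-up)
ys = np.array([1.0, 3.0, 2.0, 5.0])               # prescribed values (made-up)

V = np.vander(xs, increasing=True)                # square Vandermonde matrix (n = m)
a = np.linalg.solve(V, ys)                        # coefficients a_0, ..., a_n from V a = y

# The interpolating polynomial reproduces the data
assert np.allclose(np.polynomial.polynomial.polyval(xs, a), ys)
</syntaxhighlight>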
In statistics, the equation Va = y means that the Vandermonde matrix is the design matrix of polynomial regression.
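For instance, a degree-2 polynomial regression can be sketched by building the Vandermonde design matrix and solving a least-squares problem (NumPy assumed; the data below are synthetic):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x - 3.0 * x**2 + 0.05 * rng.normal(size=x.size)   # noisy quadratic (synthetic)

X = np.vander(x, 3, increasing=True)              # design matrix with columns 1, x, x^2
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares estimate of a_0, a_1, a_2
print(coef)                                       # approximately [1, 2, -3]
</syntaxhighlight>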
In numerical analysis, solving the equation Va = y naïvely by Gaussian elimination results in an algorithm with time complexity O(n^3). Exploiting the structure of the Vandermonde matrix, one can use Newton's divided differences method (or the Lagrange interpolation formula) to solve the equation in O(n^2) time, which also gives the UL factorization of V^{-1}. The resulting algorithm produces extremely accurate solutions, even if V is ill-conditioned. (See polynomial interpolation.)
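The divided-difference approach can be sketched as follows (a generic O(n^2) implementation, not tied to any particular reference; NumPy assumed, data made up):

<syntaxhighlight lang="python">
import numpy as np

def newton_divided_differences(xs, ys):
    """Coefficients c_k of p(x) = c_0 + c_1 (x-x_0) + ... + c_n (x-x_0)...(x-x_{n-1}), in O(n^2)."""
    c = np.array(ys, dtype=float)
    for k in range(1, len(xs)):
        c[k:] = (c[k:] - c[k - 1:-1]) / (xs[k:] - xs[:-k])
    return c

def newton_eval(c, xs, t):
    """Evaluate the Newton form at t by a Horner-like scheme."""
    p = c[-1]
    for k in range(len(c) - 2, -1, -1):
        p = p * (t - xs[k]) + c[k]
    return p

xs = np.array([0.0, 1.0, 2.0, 4.0])               # made-up distinct nodes
ys = np.array([1.0, 2.0, 0.0, 3.0])               # made-up values
c = newton_divided_differences(xs, ys)
assert np.allclose([newton_eval(c, xs, t) for t in xs], ys)
</syntaxhighlight>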
The Vandermonde determinant is used in the representation theory of the symmetric group. When the values x_i belong to a finite field, the Vandermonde determinant is also called the Moore determinant, and has properties which are important in the theory of BCH codes and Reed–Solomon error correction codes.
The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the x_i are chosen to be ''n''th roots of unity. The fast Fourier transform computes the product of this matrix with a vector in O(n\log n) time; for evaluation at arbitrary points, see the article on multipoint polynomial evaluation, which takes O(n\log^2 n) time.
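The correspondence between the DFT matrix and a Vandermonde matrix of roots of unity can be checked directly (NumPy assumed; note the sign convention of numpy.fft):

<syntaxhighlight lang="python">
import numpy as np

n = 8
omega = np.exp(-2j * np.pi / n)                   # primitive n-th root of unity (numpy.fft convention)
F = np.vander(omega ** np.arange(n), increasing=True)   # DFT matrix as a Vandermonde matrix

v = np.random.default_rng(1).normal(size=n)
assert np.allclose(F @ v, np.fft.fft(v))          # the FFT computes this matrix-vector product
</syntaxhighlight>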
In the physical theory of the quantum Hall effect, the Vandermonde determinant shows that the Laughlin wavefunction with filling factor 1 is equal to a Slater determinant. This is no longer true for filling factors different from 1 in the fractional quantum Hall effect.

In the geometry of polyhedra, the Vandermonde matrix gives the normalized volume of arbitrary k-faces of cyclic polytopes. Specifically, if F = C_d(t_{i_1}, \dots, t_{i_{k+1}}) is a k-face of the cyclic polytope C_d(T) \subset \mathbb{R}^d corresponding to T = \{t_1 < t_2 < \cdots < t_N\} \subset \mathbb{R}, then

:\mathrm{nvol}(F) = \frac{1}{k!}\prod_{1 \le m < \ell \le k+1}(t_{i_\ell} - t_{i_m}).


Determinant

The determinant of a square Vandermonde matrix is called a ''Vandermonde polynomial'' or ''Vandermonde determinant''. Its value is the polynomial

:\det(V) = \prod_{0 \le i < j \le n} (x_j - x_i),

which is non-zero if and only if all x_i are distinct.

The Vandermonde determinant was formerly sometimes called the ''discriminant'', but in current terminology the discriminant of a polynomial p(x) = (x - x_0)\cdots(x - x_n) is the ''square'' of the Vandermonde determinant of the roots x_i. The Vandermonde determinant is an alternating form in the x_i, meaning that exchanging two x_i changes the sign, so \det(V) depends on the order of the x_i. By contrast, the discriminant \det(V)^2 does not depend on any order, so that Galois theory implies that the discriminant is a polynomial function of the coefficients of p(x).

The determinant formula is proved below in three ways. The first uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof is based on the linear algebra concepts of change of basis in a vector space and the determinant of a linear map. In the process, it computes the LU decomposition of the Vandermonde matrix. The third proof is more elementary but more complicated, using only elementary row and column operations.
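Before turning to the proofs, both the determinant formula and its relation to the discriminant can be checked symbolically for a small case (a sketch assuming SymPy):

<syntaxhighlight lang="python">
import sympy as sp

n = 3
xs = sp.symbols('x0:%d' % n)
V = sp.Matrix(n, n, lambda i, j: xs[i] ** j)       # square Vandermonde matrix in symbolic x_i

vandermonde = sp.prod(xs[j] - xs[i] for i in range(n) for j in range(i + 1, n))
assert sp.expand(V.det() - vandermonde) == 0

# The discriminant of the monic polynomial with roots x_i is the square of this determinant
t = sp.symbols('t')
p = sp.prod(t - xi for xi in xs)
assert sp.expand(sp.discriminant(p, t) - vandermonde ** 2) == 0
</syntaxhighlight>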


First proof: Polynomial properties

The first proof relies on properties of polynomials. By the Leibniz formula, \det(V) is a polynomial in the x_i, with integer coefficients. All entries of the j-th column (with zero-based indexing) have total degree j. Thus, again by the Leibniz formula, all terms of the determinant have total degree

:0 + 1 + 2 + \cdots + n = \frac{n(n+1)}{2};

that is, the determinant is a homogeneous polynomial of this degree.

If, for i \neq j, one substitutes x_i for x_j, one gets a matrix with two equal rows, which thus has a zero determinant. Thus, considering the determinant as univariate in x_i, the factor theorem implies that x_j - x_i is a divisor of \det(V). It thus follows that for all i and j, x_j - x_i is a divisor of \det(V).

This will now be strengthened to show that the product of all those divisors of \det(V) is a divisor of \det(V). Indeed, let p be a polynomial with x_i - x_j as a factor; then p = (x_i - x_j)\,q, for some polynomial q. If x_k - x_l is another factor of p, then p becomes zero after the substitution of x_k for x_l. If \{x_k, x_l\} \neq \{x_i, x_j\}, the factor q becomes zero after this substitution, since the factor x_i - x_j remains nonzero. So, by the factor theorem, x_k - x_l divides q, and (x_i - x_j)\,(x_k - x_l) divides p.

Iterating this process by starting from \det(V), one gets that \det(V) is divisible by the product of all x_j - x_i with i < j; that is,

:\det(V) = Q \prod_{0 \le i < j \le n} (x_j - x_i),

where Q is a polynomial. As the product of all x_j - x_i and \det(V) have the same degree n(n+1)/2, the polynomial Q is, in fact, a constant. This constant is one, because the product of the diagonal entries of V is x_1 x_2^2 \cdots x_n^n, which is also the monomial that is obtained by taking the first term of all factors in \textstyle \prod_{0 \le i < j \le n} (x_j - x_i). This proves that Q = 1, and finishes the proof:

:\det(V) = \prod_{0 \le i < j \le n} (x_j - x_i).


Second proof: linear maps

Let F be a field containing all x_i, and P_n the vector space of the polynomials of degree at most n with coefficients in F. Let

:\varphi : P_n \to F^{n+1}

be the linear map that maps every polynomial in P_n to the (n+1)-tuple of its values at the x_i, that is,

:\varphi(p) = (p(x_0), p(x_1), \ldots, p(x_n)).

The Vandermonde matrix is the matrix of \varphi with respect to the canonical bases of P_n and F^{n+1}.

Changing the basis of P_n amounts to multiplying the Vandermonde matrix by a change-of-basis matrix (from the right). This does not change the determinant, if that change-of-basis matrix has determinant 1.

The polynomials 1, x - x_0, (x - x_0)(x - x_1), \ldots, (x - x_0)(x - x_1)\cdots(x - x_{n-1}) are monic of respective degrees 0, 1, \ldots, n. Their matrix on the monomial basis is an upper-triangular matrix U (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of \varphi on this new basis is the lower-triangular matrix

:L = \begin{pmatrix}
1 & 0 & 0 & \ldots & 0 \\
1 & x_1-x_0 & 0 & \ldots & 0 \\
1 & x_2-x_0 & (x_2-x_0)(x_2-x_1) & \ldots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n-x_0 & (x_n-x_0)(x_n-x_1) & \ldots & (x_n-x_0)(x_n-x_1)\cdots (x_n-x_{n-1})
\end{pmatrix}.

Thus the Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries. This proves the desired equality. Moreover, one gets the LU decomposition of V as V = LU^{-1}.
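This change-of-basis picture can be verified numerically. The sketch below (NumPy assumed, arbitrary nodes) builds the lower-triangular matrix L and the unitriangular matrix U whose columns hold the monomial coefficients of the Newton-basis polynomials, and checks VU = L, i.e. V = LU^{-1}:

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import polynomial as P

x = np.array([0.5, 1.0, 2.0, 3.5])                # arbitrary distinct nodes
n = len(x)
V = np.vander(x, increasing=True)

# L: matrix of phi in the basis 1, (t - x_0), (t - x_0)(t - x_1), ...
L = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        L[i, j] = np.prod(x[i] - x[:j])           # empty product = 1 when j = 0

# U: columns hold the monomial coefficients of those Newton-basis polynomials (unitriangular)
U = np.zeros((n, n))
for j in range(n):
    U[: j + 1, j] = P.polyfromroots(x[:j])        # coefficients of prod_{k<j} (t - x_k)

assert np.allclose(V @ U, L)                      # change of basis: V U = L
assert np.isclose(np.linalg.det(V), np.prod(np.diag(L)))
</syntaxhighlight>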


Third proof: row and column operations

The third proof is based on the fact that adding to a column of a matrix a scalar multiple of another column leaves the determinant unchanged. So, by subtracting from each column (except the first) the preceding column multiplied by x_0, the determinant is not changed. (These subtractions must be done starting from the last columns, so that one always subtracts a column that has not yet been changed.) This gives the matrix

:V = \begin{pmatrix}
1 & 0 & 0 & 0 & \cdots & 0\\
1 & x_1-x_0 & x_1(x_1-x_0) & x_1^2(x_1-x_0) & \cdots & x_1^{n-1}(x_1-x_0)\\
1 & x_2-x_0 & x_2(x_2-x_0) & x_2^2(x_2-x_0) & \cdots & x_2^{n-1}(x_2-x_0)\\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\
1 & x_n-x_0 & x_n(x_n-x_0) & x_n^2(x_n-x_0) & \cdots & x_n^{n-1}(x_n-x_0)
\end{pmatrix}

Applying the Laplace expansion formula along the first row, we obtain \det(V) = \det(B), with

:B = \begin{pmatrix}
x_1-x_0 & x_1(x_1-x_0) & x_1^2(x_1-x_0) & \cdots & x_1^{n-1}(x_1-x_0)\\
x_2-x_0 & x_2(x_2-x_0) & x_2^2(x_2-x_0) & \cdots & x_2^{n-1}(x_2-x_0)\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
x_n-x_0 & x_n(x_n-x_0) & x_n^2(x_n-x_0) & \cdots & x_n^{n-1}(x_n-x_0)
\end{pmatrix}

As all the entries in the i-th row of B have a factor of x_i - x_0, one can take these factors out and obtain

:\det(V) = (x_1-x_0)(x_2-x_0)\cdots(x_n-x_0)\begin{vmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1}\\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & x_n & x_n^2 & \cdots & x_n^{n-1}
\end{vmatrix} = \prod_{i=1}^{n}(x_i - x_0)\det(V'),

where V' is a Vandermonde matrix in x_1, \ldots, x_n. Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of \det(V) as the product of all x_j - x_i with i < j.
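These column operations and the row-factoring step can be mirrored numerically (a sketch assuming NumPy and arbitrary nodes):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.3, 1.2, 2.0, 4.5])                # arbitrary distinct nodes
n = len(x)
V = np.vander(x, increasing=True)

# Subtract x_0 times the preceding column from each column, starting from the last
W = V.astype(float).copy()
for j in range(n - 1, 0, -1):
    W[:, j] -= x[0] * W[:, j - 1]

B = W[1:, 1:]                                     # minor from the Laplace expansion along the first row
Vprime = np.vander(x[1:], increasing=True)        # smaller Vandermonde matrix in x_1, ..., x_n

assert np.isclose(np.linalg.det(V), np.linalg.det(W))
assert np.isclose(np.linalg.det(B), np.prod(x[1:] - x[0]) * np.linalg.det(Vprime))
</syntaxhighlight>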


Rank of the Vandermonde matrix

* An m \times n rectangular Vandermonde matrix such that m \leq n has rank m if and only if all x_i are distinct (a small numerical illustration follows this list).
* An m \times n rectangular Vandermonde matrix such that m \geq n has rank n if and only if there are n of the x_i that are distinct.
* A square Vandermonde matrix is invertible if and only if the x_i are distinct. An explicit formula for the inverse is known (see below).
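A sketch of the first statement, assuming NumPy (the repeated node is deliberate):

<syntaxhighlight lang="python">
import numpy as np

x = np.array([1.0, 2.0, 2.0, 5.0])                # one repeated node
V = np.vander(x, 6, increasing=True)              # 4 x 6 rectangular Vandermonde matrix
print(np.linalg.matrix_rank(V))                   # 3, the number of distinct nodes
</syntaxhighlight>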


Generalizations

If the columns of the Vandermonde matrix, instead of 1, x, x^2, \ldots, x^n, are general polynomials p_0, p_1, \ldots, p_n such that each p_j has degree j, that is, if V = [p_j(x_i)]_{i,j}, then

:\det V(x_0, \ldots, x_n) = \prod_k c_k \, \Delta(x),

where c_0, \ldots, c_n are the leading coefficients of p_0, p_1, \ldots, p_n, and \Delta(x) = \prod_{0 \le k < j \le n} (x_j - x_k) is the Vandermonde determinant. By multiplying with the Hermitian conjugate, we find that

:\det\left[\sum_l p_j(z_l)\, p_k(z_l^*)\right] = \prod_k |c_k|^2 \, |\Delta(z)|^2.
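The first identity can be spot-checked numerically (a sketch assuming NumPy; the polynomials are randomly generated and the nodes are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(2)
x = np.array([0.2, 1.1, 2.4, 3.9])                # arbitrary distinct nodes
n = len(x)

# Random polynomials p_j of degree j (coefficients lowest-degree first); c_j = leading coefficient
polys = [rng.normal(size=j + 1) for j in range(n)]
c = np.array([p[-1] for p in polys])

M = np.array([[P.polyval(xi, p) for p in polys] for xi in x])     # generalized Vandermonde matrix
delta = np.prod([x[j] - x[i] for i in range(n) for j in range(i + 1, n)])
assert np.isclose(np.linalg.det(M), np.prod(c) * delta)
</syntaxhighlight>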


Inverse Vandermonde matrix

As explained above in Applications, the polynomial interpolation problem for p(x) = a_0 + a_1 x + a_2 x^2 + \dots + a_n x^n satisfying p(x_0)=y_0, \ldots, p(x_n)=y_n is equivalent to the matrix equation Va = y, which has the unique solution a = V^{-1}y. There are other known formulas which solve the interpolation problem, which must be equivalent to the unique a = V^{-1}y, so they must give explicit formulas for the inverse matrix V^{-1}. In particular, Lagrange interpolation shows that the columns of the inverse matrix

:V^{-1} = \begin{pmatrix}
1 & x_0 & \dots & x_0^n\\
\vdots & \vdots & & \vdots \\
1 & x_n & \dots & x_n^n
\end{pmatrix}^{-1}
= L = \begin{pmatrix}
L_{00} & \cdots & L_{0n} \\
\vdots & & \vdots \\
L_{n0} & \cdots & L_{nn}
\end{pmatrix}

are the coefficients of the Lagrange polynomials

:L_j(x) = L_{0j} + L_{1j}x + \cdots + L_{nj}x^n = \prod_{0 \le i \le n,\, i \neq j}\frac{x - x_i}{x_j - x_i} = \frac{f(x)}{(x - x_j)\,f'(x_j)}\,,

where f(x) = (x - x_0)\cdots(x - x_n). This is easily demonstrated: the polynomials clearly satisfy L_j(x_i) = 0 for i \neq j while L_j(x_j) = 1, so we may compute the product VL = [L_j(x_i)]_{i,j=0}^n = I, the identity matrix.
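This construction of the inverse can be sketched directly (NumPy assumed; the nodes are arbitrary):

<syntaxhighlight lang="python">
import numpy as np
from numpy.polynomial import polynomial as P

x = np.array([0.0, 1.0, 2.0, 4.0])                # arbitrary distinct nodes
n = len(x)
V = np.vander(x, increasing=True)

# Column j of V^{-1} holds the monomial coefficients of the Lagrange polynomial L_j
L = np.zeros((n, n))
for j in range(n):
    others = np.delete(x, j)
    L[:, j] = P.polyfromroots(others) / np.prod(x[j] - others)

assert np.allclose(V @ L, np.eye(n))              # V L = I
assert np.allclose(L, np.linalg.inv(V))
</syntaxhighlight>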


Confluent Vandermonde matrices

As described before, a Vandermonde matrix describes the linear algebra interpolation problem of finding the coefficients of a polynomial p(x) of degree n - 1 based on the values p(x_1),\, \ldots,\, p(x_n), where x_1,\, \ldots,\, x_n are ''distinct'' points. If the x_i are not distinct, then this problem does not have a unique solution (and the corresponding Vandermonde matrix is singular). However, if we specify the values of the derivatives at the repeated points, then the problem can have a unique solution. For example, the problem

:\begin{cases} p(0) = y_1 \\ p'(0) = y_2 \\ p(1) = y_3 \end{cases}

where p(x) = ax^2 + bx + c, has a unique solution for all y_1, y_2, y_3.

In general, suppose that x_1, x_2, \ldots, x_n are (not necessarily distinct) numbers, and suppose for simplicity that equal values are adjacent:

:x_1 = \cdots = x_{m_1},\ x_{m_1+1} = \cdots = x_{m_2},\ \ldots,\ x_{m_{k-1}+1} = \cdots = x_{m_k}

where m_1 < m_2 < \cdots < m_k = n, and x_{m_1}, \ldots, x_{m_k} are distinct. Then the corresponding interpolation problem is

:\begin{matrix}
p(x_{m_1}) = y_1, & p'(x_{m_1}) = y_2, & \ldots, & p^{(m_1-1)}(x_{m_1}) = y_{m_1}, \\
p(x_{m_2}) = y_{m_1+1}, & p'(x_{m_2}) = y_{m_1+2}, & \ldots, & p^{(m_2-m_1-1)}(x_{m_2}) = y_{m_2}, \\
\vdots & & & \vdots \\
p(x_{m_k}) = y_{m_{k-1}+1}, & p'(x_{m_k}) = y_{m_{k-1}+2}, & \ldots, & p^{(m_k-m_{k-1}-1)}(x_{m_k}) = y_{m_k}.
\end{matrix}

The corresponding matrix for this problem is called a confluent Vandermonde matrix, given as follows. If 1 \leq i, j \leq n, then m_\ell < i \leq m_{\ell+1} for a unique 0 \leq \ell \leq k-1 (denoting m_0 = 0). We let

:V_{i,j} = \begin{cases} 0 & \text{if } j < i - m_\ell, \\[6pt] \dfrac{(j-1)!}{(j - i + m_\ell)!}\, x_i^{\,j - i + m_\ell} & \text{if } j \geq i - m_\ell. \end{cases}

This generalization of the Vandermonde matrix makes it non-singular, so that there exists a unique solution to the system of equations, and it possesses most of the other properties of the Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows.

Another way to derive the above formula is by taking a limit of the Vandermonde matrix as the x_i approach each other. For example, to get the case of x_1 = x_2, subtract the first row from the second in the original Vandermonde matrix, divide by x_2 - x_1, and let x_2 \to x_1: this yields the corresponding row in the confluent Vandermonde matrix. This derives the generalized interpolation problem with given values and derivatives as a limit of the original case with distinct points: giving p(x_i), p'(x_i) is similar to giving p(x_i), p(x_i + \varepsilon) for small \varepsilon. Geometers have studied the problem of tracking confluent points along their tangent lines, known as compactification of configuration space.
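The construction of a confluent Vandermonde matrix and the resulting Hermite-type interpolation can be sketched as follows (NumPy assumed; the rows are plain derivative rows, matching the p(0), p'(0), p(1) example above):

<syntaxhighlight lang="python">
import numpy as np
from math import factorial

nodes = [(0.0, 2), (1.0, 1)]                      # node 0 with multiplicity 2, node 1 with multiplicity 1
n = sum(m for _, m in nodes)

V = np.zeros((n, n))
row = 0
for x0, mult in nodes:
    for d in range(mult):                         # d-th derivative of x^j is j!/(j-d)! * x^(j-d)
        for j in range(d, n):
            V[row, j] = factorial(j) // factorial(j - d) * x0 ** (j - d)
        row += 1

y = np.array([1.0, 2.0, 5.0])                     # p(0) = 1, p'(0) = 2, p(1) = 5 (sample data)
a = np.linalg.solve(V, y)                         # coefficients of p(x) = a_0 + a_1 x + a_2 x^2
print(a)                                          # [1. 2. 2.], i.e. p(x) = 1 + 2x + 2x^2
</syntaxhighlight>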


See also

* Schur polynomial – a generalization
* Alternant matrix
* Lagrange polynomial
* Wronskian
* List of matrices
* Moore determinant over a finite field
* Vieta's formulas

