In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.


Fundamental theory of matrix eigenvectors and eigenvalues

A (nonzero) vector \mathbf{v} of dimension N is an eigenvector of a square N \times N matrix \mathbf{A} if it satisfies a linear equation of the form \mathbf{A} \mathbf{v} = \lambda \mathbf{v} for some scalar \lambda. Then \lambda is called the eigenvalue corresponding to \mathbf{v}. Geometrically speaking, the eigenvectors of \mathbf{A} are the vectors that \mathbf{A} merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem.

This yields an equation for the eigenvalues p\left(\lambda\right) = \det\left(\mathbf{A} - \lambda\mathbf{I}\right) = 0. We call p\left(\lambda\right) the characteristic polynomial, and the equation, called the characteristic equation, is an Nth-order polynomial equation in the unknown \lambda. This equation will have N_\lambda distinct solutions, where 1 \leq N_\lambda \leq N. The set of solutions, that is, the eigenvalues, is called the spectrum of \mathbf{A}.

If the field of scalars is algebraically closed, then we can factor p as p(\lambda) = \left(\lambda - \lambda_1\right)^{n_1}\left(\lambda - \lambda_2\right)^{n_2} \cdots \left(\lambda-\lambda_{N_\lambda}\right)^{n_{N_\lambda}} = 0. The integer n_i is termed the algebraic multiplicity of eigenvalue \lambda_i. The algebraic multiplicities sum to N: \sum_{i=1}^{N_\lambda} n_i = N.

For each eigenvalue \lambda_i, we have a specific eigenvalue equation \left(\mathbf{A} - \lambda_i \mathbf{I}\right)\mathbf{v} = \mathbf{0}. There will be 1 \leq m_i \leq n_i linearly independent solutions to each eigenvalue equation. The linear combinations of the m_i solutions (except the one which gives the zero vector) are the eigenvectors associated with the eigenvalue \lambda_i. The integer m_i is termed the geometric multiplicity of \lambda_i. It is important to keep in mind that the algebraic multiplicity n_i and geometric multiplicity m_i may or may not be equal, but we always have m_i \leq n_i. The simplest case is of course when m_i = n_i = 1. The total number of linearly independent eigenvectors, N_\mathbf{v}, can be calculated by summing the geometric multiplicities: \sum_{i=1}^{N_\lambda} m_i = N_\mathbf{v}. The eigenvectors can be indexed by eigenvalues, using a double index, with \mathbf{v}_{ij} being the jth eigenvector for the ith eigenvalue. The eigenvectors can also be indexed using the simpler notation of a single index \mathbf{v}_k, with k = 1, 2, \ldots, N_\mathbf{v}.
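A short numerical sketch in Python (using NumPy; the example matrix is an assumed illustration, not taken from the text) shows the eigenvalue equation, the characteristic polynomial, and the gap between algebraic and geometric multiplicity described above:

```python
import numpy as np

# Assumed example: triangular matrix with eigenvalues 2, 2, 3.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# Coefficients of the characteristic polynomial p(lambda) = det(A - lambda I).
print(np.poly(A))                       # [1. -7. 16. -12.], i.e. (lambda - 2)^2 (lambda - 3)

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                      # 2 appears twice (algebraic multiplicity 2), and 3

# Geometric multiplicity of lambda = 2: dimension of the null space of (A - 2I).
geom_mult = A.shape[0] - np.linalg.matrix_rank(A - 2.0 * np.eye(3))
print(geom_mult)                        # 1, so m_i < n_i for this eigenvalue

# Each computed eigenvector satisfies A v ~= lambda v.
for k in range(3):
    v = eigenvectors[:, k]
    print(np.linalg.norm(A @ v - eigenvalues[k] * v))   # all close to zero
```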


Eigendecomposition of a matrix

Let \mathbf{A} be a square N \times N matrix with N linearly independent eigenvectors q_i (where i = 1, \ldots, N). Then \mathbf{A} can be factored as \mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} where \mathbf{Q} is the square N \times N matrix whose ith column is the eigenvector q_i of \mathbf{A}, and \mathbf{\Lambda} is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, \Lambda_{ii} = \lambda_i. Note that only diagonalizable matrices can be factorized in this way. For example, the defective matrix \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} (which is a shear matrix) cannot be diagonalized.

The N eigenvectors q_i are usually normalized, but they don't have to be. A non-normalized set of N eigenvectors, v_i, can also be used as the columns of \mathbf{Q}. That can be understood by noting that the magnitude of the eigenvectors in \mathbf{Q} gets canceled in the decomposition by the presence of \mathbf{Q}^{-1}. If one of the eigenvalues \lambda_i has multiple linearly independent eigenvectors (that is, the geometric multiplicity of \lambda_i is greater than 1), then these eigenvectors for this eigenvalue \lambda_i can be chosen to be mutually orthogonal; however, if two eigenvectors belong to two different eigenvalues, it may be impossible for them to be orthogonal to each other (see Example below). One special case is that if \mathbf{A} is a normal matrix, then by the spectral theorem, it's always possible to diagonalize \mathbf{A} in an orthonormal basis \{q_i\}.

The decomposition can be derived from the fundamental property of eigenvectors: \begin{align} \mathbf{A} \mathbf{v} &= \lambda \mathbf{v} \\ \mathbf{A} \mathbf{Q} &= \mathbf{Q} \mathbf{\Lambda} \\ \mathbf{A} &= \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} . \end{align}

The linearly independent eigenvectors q_i with nonzero eigenvalues form a basis (not necessarily orthonormal) for all possible products \mathbf{A}\mathbf{x}, for \mathbf{x} \in \mathbb{C}^N, which is the same as the image (or range) of the corresponding matrix transformation, and also the column space of the matrix \mathbf{A}. The number of linearly independent eigenvectors q_i with nonzero eigenvalues is equal to the rank of the matrix \mathbf{A}, and also the dimension of the image (or range) of the corresponding matrix transformation, as well as its column space. The linearly independent eigenvectors q_i with an eigenvalue of zero form a basis (which can be chosen to be orthonormal) for the null space (also known as the kernel) of the matrix transformation \mathbf{A}.
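In NumPy, the decomposition can be formed directly from the output of numpy.linalg.eig; the matrix below is an assumed example chosen for illustration:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])              # diagonalizable, eigenvalues 5 and 2

eigvals, Q = np.linalg.eig(A)           # columns of Q are eigenvectors of A
Lam = np.diag(eigvals)

# Reconstruct A = Q Lambda Q^{-1}
print(np.allclose(Q @ Lam @ np.linalg.inv(Q), A))   # True
```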


Example

The 2 × 2 real matrix \mathbf{A} = \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} may be decomposed into a diagonal matrix through multiplication of a non-singular matrix \mathbf{B} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \in \mathbb{R}^{2 \times 2}. Then \begin{bmatrix} a & b \\ c & d \end{bmatrix}^{-1}\begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}, for some real diagonal matrix \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}. Multiplying both sides of the equation on the left by \mathbf{B}: \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}.

The above equation can be decomposed into two simultaneous equations: \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = \begin{bmatrix} ax \\ cx \end{bmatrix} and \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = \begin{bmatrix} by \\ dy \end{bmatrix}. Factoring out the eigenvalues x and y: \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = x\begin{bmatrix} a \\ c \end{bmatrix} and \begin{bmatrix} 1 & 0\\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = y\begin{bmatrix} b \\ d \end{bmatrix}. Letting \mathbf{a} = \begin{bmatrix} a \\ c \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} b \\ d \end{bmatrix}, this gives us two vector equations: \mathbf{A} \mathbf{a} = x \mathbf{a} and \mathbf{A} \mathbf{b} = y \mathbf{b}, which can be represented by a single vector equation involving two solutions as eigenvalues: \mathbf{A} \mathbf{u} = \lambda \mathbf{u}, where \lambda represents the two eigenvalues x and y, and \mathbf{u} represents the vectors \mathbf{a} and \mathbf{b}.

Shifting \lambda \mathbf{u} to the left hand side and factoring \mathbf{u} out: \left(\mathbf{A} - \lambda \mathbf{I}\right) \mathbf{u} = \mathbf{0}. Since \mathbf{B} is non-singular, it is essential that \mathbf{u} is nonzero. Therefore, \det(\mathbf{A} - \lambda \mathbf{I}) = 0. Thus (1- \lambda)(3 - \lambda) = 0, giving us the solutions of the eigenvalues for the matrix \mathbf{A} as \lambda = 1 or \lambda = 3, and the resulting diagonal matrix from the eigendecomposition of \mathbf{A} is thus \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix}.

Putting the solutions back into the above simultaneous equations: \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} a \\ c \end{bmatrix} = 1\begin{bmatrix} a \\ c \end{bmatrix} and \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} b \\ d \end{bmatrix} = 3\begin{bmatrix} b \\ d \end{bmatrix}. Solving the equations, we have a = -2c \quad\text{and}\quad b = 0, \qquad c,d \in \mathbb{R}. Thus the matrix \mathbf{B} required for the eigendecomposition of \mathbf{A} is \mathbf{B} = \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix},\qquad c, d\in \mathbb{R}, that is: \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix}^{-1} \begin{bmatrix} 1 & 0 \\ 1 & 3 \end{bmatrix} \begin{bmatrix} -2c & 0 \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 3 \end{bmatrix},\qquad c, d\in \mathbb{R}
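The result can be checked numerically; the sketch below simply picks c = 1 and d = 1 from the family of valid matrices \mathbf{B} found above:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
B = np.array([[-2.0, 0.0],              # column 1: eigenvector for lambda = 1 (c = 1)
              [ 1.0, 1.0]])             # column 2: eigenvector for lambda = 3 (d = 1)

print(np.linalg.inv(B) @ A @ B)         # diag(1, 3), as derived above
```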


Matrix inverse via eigendecomposition

If a matrix \mathbf{A} can be eigendecomposed and if none of its eigenvalues are zero, then \mathbf{A} is invertible and its inverse is given by \mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}. If \mathbf{A} is a symmetric matrix, since \mathbf{Q} is formed from the eigenvectors of \mathbf{A}, \mathbf{Q} is guaranteed to be an orthogonal matrix, therefore \mathbf{Q}^{-1} = \mathbf{Q}^\mathrm{T}. Furthermore, because \mathbf{\Lambda} is a diagonal matrix, its inverse is easy to calculate: \left[\mathbf{\Lambda}^{-1}\right]_{ii} = \frac{1}{\lambda_i}.
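A brief sketch (the symmetric matrix is an assumed example) of inverting through the eigendecomposition, compared against numpy.linalg.inv:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                    # symmetric, eigenvalues 3 and 1 (all nonzero)

eigvals, Q = np.linalg.eigh(A)                # for a symmetric matrix, Q is orthogonal
A_inv = Q @ np.diag(1.0 / eigvals) @ Q.T      # A^{-1} = Q Lambda^{-1} Q^T

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```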


Practical implications

When eigendecomposition is used on a matrix of measured, real data, the inverse may be less valid when all eigenvalues are used unmodified in the form above. This is because as eigenvalues become relatively small, their contribution to the inversion is large. Those near zero or at the "noise" of the measurement system will have undue influence and could hamper solutions (detection) using the inverse.

Two mitigations have been proposed: truncating small or zero eigenvalues, and extending the lowest reliable eigenvalue to those below it. See also Tikhonov regularization as a statistically motivated but biased method for rolling off eigenvalues as they become dominated by noise.

The first mitigation method is similar to a sparse sample of the original matrix, removing components that are not considered valuable. However, if the solution or detection process is near the noise level, truncating may remove components that influence the desired solution. The second mitigation extends the eigenvalue so that lower values have much less influence over inversion, but do still contribute, such that solutions near the noise will still be found.

The reliable eigenvalue can be found by assuming that eigenvalues of extremely similar and low value are a good representation of measurement noise (which is assumed low for most systems). If the eigenvalues are rank-sorted by value, then the reliable eigenvalue can be found by minimization of the Laplacian of the sorted eigenvalues: \min\left|\nabla^2 \lambda_\mathrm{s}\right|, where the eigenvalues are subscripted with an \mathrm{s} to denote being sorted. The position of the minimization is the lowest reliable eigenvalue. In measurement systems, the square root of this reliable eigenvalue is the average noise over the components of the system.
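The first mitigation (truncation) can be sketched as follows; the synthetic data, effective rank, and noise threshold are all assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 3))
A = M @ M.T + 1e-9 * rng.normal(size=(5, 5))    # "measured" matrix: rank ~3 plus noise
A = (A + A.T) / 2                                # keep it symmetric

eigvals, Q = np.linalg.eigh(A)
tol = 1e-6 * eigvals.max()                       # assumed noise threshold

# Invert only the reliable eigenvalues; zero out the "noise" modes entirely.
inv_eigvals = np.where(eigvals > tol, 1.0 / eigvals, 0.0)
A_trunc_inv = Q @ np.diag(inv_eigvals) @ Q.T     # truncated (pseudo-)inverse
```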


Functional calculus

The eigendecomposition allows for much easier computation of power series of matrices. If f(x) is given by f(x) = a_0 + a_1 x + a_2 x^2 + \cdots, then we know that f\!\left(\mathbf{A}\right) = \mathbf{Q}\,f\!\left(\mathbf{\Lambda}\right)\mathbf{Q}^{-1}. Because \mathbf{\Lambda} is a diagonal matrix, functions of \mathbf{\Lambda} are very easy to calculate: \left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right). The off-diagonal elements of f\left(\mathbf{\Lambda}\right) are zero; that is, f\left(\mathbf{\Lambda}\right) is also a diagonal matrix. Therefore, calculating f\left(\mathbf{A}\right) reduces to just calculating the function on each of the eigenvalues.

A similar technique works more generally with the holomorphic functional calculus, using \mathbf{A}^{-1} = \mathbf{Q} \mathbf{\Lambda}^{-1} \mathbf{Q}^{-1} from above. Once again, we find that \left[f\left(\mathbf{\Lambda}\right)\right]_{ii} = f\left(\lambda_i\right).


Examples

\begin{align} \mathbf{A}^2 &= \left(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\right) \left(\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\right) = \mathbf{Q}\mathbf{\Lambda}\left(\mathbf{Q}^{-1}\mathbf{Q}\right)\mathbf{\Lambda}\mathbf{Q}^{-1} = \mathbf{Q}\mathbf{\Lambda}^2\mathbf{Q}^{-1} \\ \mathbf{A}^n &= \mathbf{Q}\mathbf{\Lambda}^n\mathbf{Q}^{-1} \\ \exp \mathbf{A} &= \mathbf{Q} \exp(\mathbf{\Lambda}) \mathbf{Q}^{-1} \end{align} which are examples for the functions f(x)=x^2, \; f(x)=x^n, \; f(x)=\exp x. Furthermore, \exp \mathbf{A} is the matrix exponential.
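A small sketch (assumed example matrix; SciPy's expm and NumPy's matrix_power are used only as independent checks) of computing a matrix power and the matrix exponential through the eigenvalues:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
eigvals, Q = np.linalg.eig(A)
Q_inv = np.linalg.inv(Q)

A_cubed = Q @ np.diag(eigvals ** 3) @ Q_inv       # f(x) = x^3 applied to the eigenvalues
A_exp   = Q @ np.diag(np.exp(eigvals)) @ Q_inv    # f(x) = exp(x)

print(np.allclose(A_cubed, np.linalg.matrix_power(A, 3)))   # True
print(np.allclose(A_exp, expm(A)))                          # True
```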


Decomposition for spectral matrices

Spectral matrices are matrices that possess distinct eigenvalues and a complete set of eigenvectors. This characteristic allows spectral matrices to be fully diagonalizable, meaning they can be decomposed into simpler forms using eigendecomposition. This decomposition process reveals fundamental insights into the matrix's structure and behavior, particularly in fields such as quantum mechanics, signal processing, and numerical analysis.


Normal matrices

A complex-valued square matrix \mathbf{A} is normal (meaning \mathbf{A}^*\mathbf{A}=\mathbf{A} \mathbf{A}^*, where \mathbf{A}^* is the conjugate transpose) if and only if it can be decomposed as \mathbf{A}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^*, where \mathbf{U} is a unitary matrix (meaning \mathbf{U}^*=\mathbf{U}^{-1}) and \mathbf{\Lambda} = \operatorname{diag}(\lambda_1, \ldots,\lambda_n) is a diagonal matrix. The columns \mathbf{u}_1,\ldots,\mathbf{u}_n of \mathbf{U} form an orthonormal basis and are eigenvectors of \mathbf{A} with corresponding eigenvalues \lambda_1, \ldots,\lambda_n.

For example, consider the 2 × 2 normal matrix \mathbf{A}=\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}. The eigenvalues are \lambda_1=3 and \lambda_2 = -1. The (normalized) eigenvectors corresponding to these eigenvalues are \mathbf{u}_1=\frac{1}{\sqrt{2}}\begin{bmatrix}1\\1\end{bmatrix} and \mathbf{u}_2=\frac{1}{\sqrt{2}}\begin{bmatrix}1\\-1\end{bmatrix}. The diagonalization is \mathbf{A}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^*, where \mathbf{U}=\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}, \mathbf{\Lambda} =\begin{bmatrix} 3 & 0 \\ 0 & -1\end{bmatrix} and \mathbf{U}^*=\mathbf{U}^{-1}=\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}.

The verification is \mathbf{U}\mathbf{\Lambda}\mathbf{U}^*=\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}\begin{bmatrix} 3 & 0 \\ 0 & -1\end{bmatrix}\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix}=\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}=\mathbf{A}. This example illustrates the process of diagonalizing a normal matrix \mathbf{A} by finding its eigenvalues and eigenvectors, forming the unitary matrix \mathbf{U}, the diagonal matrix \mathbf{\Lambda}, and verifying the decomposition.


Real symmetric matrices

As a special case, for every real symmetric matrix \mathbf{A}, the eigenvalues are real and the eigenvectors can be chosen real and orthonormal. Thus a real symmetric matrix \mathbf{A} can be decomposed as \mathbf{A}=\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^\mathsf{T}, where \mathbf{Q} is an orthogonal matrix whose columns are the real, orthonormal eigenvectors of \mathbf{A}, and \mathbf{\Lambda} is a diagonal matrix whose entries are the eigenvalues of \mathbf{A}.


Diagonalizable matrices

Diagonalizable matrices can be decomposed using eigendecomposition, provided they have a full set of linearly independent eigenvectors. They can be expressed as\mathbf=\mathbf \mathbf\mathbf^, where \mathbf is a matrix whose columns are eigenvectors of \mathbf and \mathbf is a diagonal matrix consisting of the corresponding eigenvalues of \mathbf.


Positive definite matrices

Positive definite matrices are matrices for which all eigenvalues are positive. They can be decomposed as \mathbf{A}=\mathbf{L}\mathbf{L}^\mathsf{T} using the Cholesky decomposition, where \mathbf{L} is a lower triangular matrix.
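A minimal sketch (the matrix is an assumed example): check positive definiteness via the eigenvalues and compute the Cholesky factor with numpy.linalg.cholesky:

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

print(np.linalg.eigvalsh(A))            # both eigenvalues positive, so A is positive definite
L = np.linalg.cholesky(A)               # lower triangular factor
print(np.allclose(L @ L.T, A))          # True: A = L L^T
```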


Unitary and Hermitian matrices

Unitary matrices satisfy \mathbf{U}\mathbf{U}^\mathsf{T}=\mathbf{I} (real case, where they are called orthogonal matrices) or \mathbf{U}\mathbf{U}^\dagger=\mathbf{I} (complex case), where \mathbf{U}^\mathsf{T} denotes the transpose and \mathbf{U}^\dagger denotes the conjugate transpose. They can be diagonalized using unitary transformations. Hermitian matrices satisfy \mathbf{A}=\mathbf{A}^\dagger, where \mathbf{A}^\dagger denotes the conjugate transpose of \mathbf{A}. They can be diagonalized using unitary or orthogonal matrices.


Useful facts


Useful facts regarding eigenvalues

* The product of the eigenvalues is equal to the determinant of \mathbf{A}: \det\left(\mathbf{A}\right) = \prod_{i=1}^{N_\lambda} \lambda_i^{n_i}. Note that each eigenvalue is raised to the power n_i, the algebraic multiplicity.
* The sum of the eigenvalues is equal to the trace of \mathbf{A}: \operatorname{tr}\left(\mathbf{A}\right) = \sum_{i=1}^{N_\lambda} n_i \lambda_i. Note that each eigenvalue is multiplied by n_i, the algebraic multiplicity.
* If the eigenvalues of \mathbf{A} are \lambda_i, and \mathbf{A} is invertible, then the eigenvalues of \mathbf{A}^{-1} are simply \lambda_i^{-1}.
* If the eigenvalues of \mathbf{A} are \lambda_i, then the eigenvalues of f\left(\mathbf{A}\right) are simply f\left(\lambda_i\right), for any holomorphic function f.
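The first two facts are easy to verify numerically (the matrix below is an assumed example):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [2.0, 3.0, 0.0],
              [0.0, 0.0, 2.0]])
eigvals = np.linalg.eigvals(A)

print(np.isclose(np.prod(eigvals), np.linalg.det(A)))   # product of eigenvalues = det(A)
print(np.isclose(np.sum(eigvals), np.trace(A)))         # sum of eigenvalues = tr(A)
```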


Useful facts regarding eigenvectors

* If \mathbf{A} is Hermitian and full-rank, the basis of eigenvectors may be chosen to be mutually orthogonal. The eigenvalues are real.
* The eigenvectors of \mathbf{A}^{-1} are the same as the eigenvectors of \mathbf{A}.
* Eigenvectors are only defined up to a multiplicative constant. That is, if \mathbf{A}\mathbf{v} = \lambda \mathbf{v} then c\mathbf{v} is also an eigenvector for any scalar c \neq 0. In particular, -\mathbf{v} and e^{i\theta}\mathbf{v} (for any ''θ'') are also eigenvectors.
* In the case of degenerate eigenvalues (an eigenvalue having more than one eigenvector), the eigenvectors have an additional freedom of linear transformation, that is to say, any linear (orthonormal) combination of eigenvectors sharing an eigenvalue (in the degenerate subspace) is itself an eigenvector (in the subspace).


Useful facts regarding eigendecomposition

* \mathbf{A} can be eigendecomposed if and only if the number of linearly independent eigenvectors, N_\mathbf{v}, equals the dimension of an eigenvector: N_\mathbf{v} = N.
* If the field of scalars is algebraically closed and if p(\lambda) has no repeated roots, that is, if N_\lambda = N, then \mathbf{A} can be eigendecomposed.
* The statement "\mathbf{A} can be eigendecomposed" does ''not'' imply that \mathbf{A} has an inverse, as some eigenvalues may be zero, and a matrix with a zero eigenvalue is not invertible.
* The statement "\mathbf{A} has an inverse" does ''not'' imply that \mathbf{A} can be eigendecomposed. A counterexample is \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, which is an invertible defective matrix.


Useful facts regarding matrix inverse

* \mathbf{A} can be inverted if and only if all eigenvalues are nonzero: \lambda_i \ne 0 \quad \forall \,i.
* If \lambda_i \ne 0 for all i ''and'' N_\mathbf{v} = N, the inverse is given by \mathbf{A}^{-1} = \mathbf{Q}\mathbf{\Lambda}^{-1}\mathbf{Q}^{-1}.


Numerical computations


Numerical computation of eigenvalues

Suppose that we want to compute the eigenvalues of a given matrix. If the matrix is small, we can compute them symbolically using the characteristic polynomial. However, this is often impossible for larger matrices, in which case we must use a numerical method.

In practice, eigenvalues of large matrices are not computed using the characteristic polynomial. Computing the polynomial becomes expensive in itself, and exact (symbolic) roots of a high-degree polynomial can be difficult to compute and express: the Abel–Ruffini theorem implies that the roots of high-degree (5 or above) polynomials cannot in general be expressed simply using nth roots. Therefore, general algorithms to find eigenvectors and eigenvalues are iterative. Iterative numerical algorithms for approximating roots of polynomials exist, such as Newton's method, but in general it is impractical to compute the characteristic polynomial and then apply these methods. One reason is that small round-off errors in the coefficients of the characteristic polynomial can lead to large errors in the eigenvalues and eigenvectors: the roots are an extremely ill-conditioned function of the coefficients.

A simple and accurate iterative method is the power method: a random vector \mathbf{v} is chosen and a sequence of unit vectors is computed as \frac{\mathbf{A}\mathbf{v}}{\left\|\mathbf{A}\mathbf{v}\right\|}, \frac{\mathbf{A}^2\mathbf{v}}{\left\|\mathbf{A}^2\mathbf{v}\right\|}, \frac{\mathbf{A}^3\mathbf{v}}{\left\|\mathbf{A}^3\mathbf{v}\right\|}, \ldots This sequence will almost always converge to an eigenvector corresponding to the eigenvalue of greatest magnitude, provided that \mathbf{v} has a nonzero component of this eigenvector in the eigenvector basis (and also provided that there is only one eigenvalue of greatest magnitude). This simple algorithm is useful in some practical applications; for example, Google uses it to calculate the page rank of documents in their search engine. Also, the power method is the starting point for many more sophisticated algorithms. For instance, by keeping not just the last vector in the sequence, but instead looking at the span of ''all'' the vectors in the sequence, one can get a better (faster converging) approximation for the eigenvector, and this idea is the basis of Arnoldi iteration. Alternatively, the important QR algorithm is also based on a subtle transformation of a power method.
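A minimal power-method sketch in Python; the matrix, iteration count, and use of the Rayleigh quotient for the eigenvalue estimate are assumptions chosen for illustration rather than a production implementation:

```python
import numpy as np

def power_method(A, num_iters=500, seed=0):
    """Approximate the dominant eigenpair of A by repeated multiplication."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(num_iters):
        v = A @ v                 # apply A ...
        v /= np.linalg.norm(v)    # ... and renormalize to a unit vector
    lam = v @ A @ v               # Rayleigh quotient estimate of the eigenvalue
    return lam, v

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
lam, v = power_method(A)
print(lam)                        # approximately 5, the eigenvalue of greatest magnitude
```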


Numerical computation of eigenvectors

Once the eigenvalues are computed, the eigenvectors could be calculated by solving the equation \left(\mathbf{A} - \lambda_i \mathbf{I}\right)\mathbf{v}_{i,j} = \mathbf{0} using Gaussian elimination or any other method for solving matrix equations. However, in practical large-scale eigenvalue methods, the eigenvectors are usually computed in other ways, as a byproduct of the eigenvalue computation. In power iteration, for example, the eigenvector is actually computed before the eigenvalue (which is typically computed by the Rayleigh quotient of the eigenvector). In the QR algorithm for a Hermitian matrix (or any normal matrix), the orthonormal eigenvectors are obtained as a product of the \mathbf{Q} matrices from the steps in the algorithm. (For more general matrices, the QR algorithm yields the Schur decomposition first, from which the eigenvectors can be obtained by a back-substitution procedure.) For Hermitian matrices, the divide-and-conquer eigenvalue algorithm is more efficient than the QR algorithm if both eigenvectors and eigenvalues are desired.
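As a sketch of the direct approach, the matrix and the known eigenvalue below are taken from the worked example earlier, and scipy.linalg.null_space stands in for the linear solve:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 0.0],
              [1.0, 3.0]])
lam = 3.0                                     # a known eigenvalue of A

v = null_space(A - lam * np.eye(2))[:, 0]     # basis vector of the eigenspace
print(np.allclose(A @ v, lam * v))            # True

print((v @ A @ v) / (v @ v))                  # Rayleigh quotient recovers lambda = 3
```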


Additional topics


Generalized eigenspaces

Recall that the ''geometric'' multiplicity of an eigenvalue can be described as the dimension of the associated eigenspace, the nullspace of \lambda\mathbf{I} - \mathbf{A}. The algebraic multiplicity can also be thought of as a dimension: it is the dimension of the associated generalized eigenspace (1st sense), which is the nullspace of the matrix \left(\lambda\mathbf{I} - \mathbf{A}\right)^k for ''any sufficiently large k''. That is, it is the space of ''generalized eigenvectors'' (first sense), where a generalized eigenvector is any vector which ''eventually'' becomes 0 if \lambda\mathbf{I} - \mathbf{A} is applied to it enough times successively. Any eigenvector is a generalized eigenvector, and so each eigenspace is contained in the associated generalized eigenspace. This provides an easy proof that the geometric multiplicity is always less than or equal to the algebraic multiplicity. This usage should not be confused with the ''generalized eigenvalue problem'' described below.


Conjugate eigenvector

A conjugate eigenvector or coneigenvector is a vector sent after transformation to a scalar multiple of its conjugate, where the scalar is called the conjugate eigenvalue or coneigenvalue of the linear transformation. The coneigenvectors and coneigenvalues represent essentially the same information and meaning as the regular eigenvectors and eigenvalues, but arise when an alternative coordinate system is used. The corresponding equation is \mathbf{A}\mathbf{v} = \lambda \mathbf{v}^*. For example, in coherent electromagnetic scattering theory, the linear transformation \mathbf{A} represents the action performed by the scattering object, and the eigenvectors represent polarization states of the electromagnetic wave. In optics, the coordinate system is defined from the wave's viewpoint, known as the Forward Scattering Alignment (FSA), and gives rise to a regular eigenvalue equation, whereas in radar, the coordinate system is defined from the radar's viewpoint, known as the Back Scattering Alignment (BSA), and gives rise to a coneigenvalue equation.


Generalized eigenvalue problem

A generalized eigenvalue problem (second sense) is the problem of finding a (nonzero) vector \mathbf{v} that obeys \mathbf{A}\mathbf{v} = \lambda \mathbf{B} \mathbf{v}, where \mathbf{A} and \mathbf{B} are matrices. If \mathbf{v} obeys this equation, with some \lambda, then we call \mathbf{v} the ''generalized eigenvector'' of \mathbf{A} and \mathbf{B} (in the second sense), and \lambda is called the ''generalized eigenvalue'' of \mathbf{A} and \mathbf{B} (in the second sense) which corresponds to the generalized eigenvector \mathbf{v}. The possible values of \lambda must obey the following equation: \det(\mathbf{A} - \lambda \mathbf{B}) = 0.

If n linearly independent vectors \{\mathbf{v}_1, \ldots, \mathbf{v}_n\} can be found, such that for every i \in \{1, \ldots, n\}, \mathbf{A}\mathbf{v}_i = \lambda_i \mathbf{B}\mathbf{v}_i, then we define the matrices \mathbf{P} and \mathbf{D} such that P = \begin{bmatrix} (\mathbf{v}_1)_1 & \cdots & (\mathbf{v}_n)_1 \\ \vdots & & \vdots \\ (\mathbf{v}_1)_n & \cdots & (\mathbf{v}_n)_n \end{bmatrix} (that is, the ith column of \mathbf{P} is \mathbf{v}_i) and (D)_{ij} = \lambda_i if i = j, and 0 otherwise. Then the following equality holds: \mathbf{A} = \mathbf{B}\mathbf{P}\mathbf{D}\mathbf{P}^{-1}. And the proof is \mathbf{A}\mathbf{P} = \begin{bmatrix} \mathbf{A}\mathbf{v}_1 & \cdots & \mathbf{A}\mathbf{v}_n \end{bmatrix} = \begin{bmatrix} \lambda_1\mathbf{B}\mathbf{v}_1 & \cdots & \lambda_n\mathbf{B}\mathbf{v}_n \end{bmatrix} = \begin{bmatrix} \mathbf{B}\mathbf{v}_1 & \cdots & \mathbf{B}\mathbf{v}_n \end{bmatrix} \mathbf{D} = \mathbf{B}\mathbf{P}\mathbf{D}. And since \mathbf{P} is invertible, we multiply the equation from the right by its inverse, finishing the proof.

The set of matrices of the form \mathbf{A} - \lambda\mathbf{B}, where \lambda is a complex number, is called a ''pencil''; the term ''matrix pencil'' can also refer to the pair (\mathbf{A}, \mathbf{B}) of matrices. If \mathbf{B} is invertible, then the original problem can be written in the form \mathbf{B}^{-1}\mathbf{A}\mathbf{v} = \lambda \mathbf{v}, which is a standard eigenvalue problem. However, in most situations it is preferable not to perform the inversion, but rather to solve the generalized eigenvalue problem as stated originally. This is especially important if \mathbf{A} and \mathbf{B} are Hermitian matrices, since in this case \mathbf{B}^{-1}\mathbf{A} is not generally Hermitian and important properties of the solution are no longer apparent.

If \mathbf{A} and \mathbf{B} are both symmetric or Hermitian, and \mathbf{B} is also a positive-definite matrix, the eigenvalues \lambda_i are real and the eigenvectors \mathbf{v}_1 and \mathbf{v}_2 with distinct eigenvalues are \mathbf{B}-orthogonal (\mathbf{v}_1^* \mathbf{B} \mathbf{v}_2 = 0). In this case, eigenvectors can be chosen so that the matrix \mathbf{P} defined above satisfies \mathbf{P}^* \mathbf{B} \mathbf{P} = \mathbf{I} or \mathbf{P}\mathbf{P}^*\mathbf{B} = \mathbf{I}, and there exists a basis of generalized eigenvectors (it is not a defective problem). This case is sometimes called a ''Hermitian definite pencil'' or ''definite pencil''.
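For the Hermitian definite pencil case, scipy.linalg.eigh accepts the pair (A, B) directly and avoids forming B^{-1}A; the matrices below are assumed examples:

```python
import numpy as np
from scipy.linalg import eigh

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # symmetric
B = np.array([[4.0, 1.0],
              [1.0, 2.0]])                    # symmetric positive definite

eigvals, P = eigh(A, B)                       # solves A v = lambda B v

for i in range(2):
    print(np.allclose(A @ P[:, i], eigvals[i] * (B @ P[:, i])))   # True for each pair

print(np.allclose(P.T @ B @ P, np.eye(2)))    # columns of P are B-orthonormal: P* B P = I
```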


See also

* Eigenvalue perturbation
* Frobenius covariant
* Householder transformation
* Jordan normal form
* List of matrices
* Matrix decomposition
* Singular value decomposition
* Sylvester's formula




External links


Interactive program & tutorial of Spectral Decomposition