
In numerical linear algebra, the Jacobi eigenvalue algorithm is an iterative method for the calculation of the eigenvalues and eigenvectors of a real symmetric matrix (a process known as diagonalization). It is named after Carl Gustav Jacob Jacobi, who first proposed the method in 1846, but it only became widely used in the 1950s with the advent of computers.


Description

Let S be a symmetric matrix, and G = G(i,j,\theta) be a Givens rotation matrix. Then:

: S' = G S G^\top

is symmetric and similar to S. Furthermore, S' has entries:

: \begin{align}
S'_{ii} &= c^2\, S_{ii} - 2\, s c\, S_{ij} + s^2\, S_{jj} \\
S'_{jj} &= s^2\, S_{ii} + 2\, s c\, S_{ij} + c^2\, S_{jj} \\
S'_{ij} &= S'_{ji} = (c^2 - s^2)\, S_{ij} + s c\, (S_{ii} - S_{jj}) \\
S'_{ik} &= S'_{ki} = c\, S_{ik} - s\, S_{jk} && k \ne i,j \\
S'_{jk} &= S'_{kj} = s\, S_{ik} + c\, S_{jk} && k \ne i,j \\
S'_{kl} &= S_{kl} && k,l \ne i,j
\end{align}

where s = \sin(\theta) and c = \cos(\theta).

Since G is orthogonal, S and S' have the same Frobenius norm \| \cdot \|_F (the square-root sum of squares of all components). However, we can choose \theta such that S'_{ij} = 0, in which case S' has a larger sum of squares on the diagonal:

: S'_{ij} = \cos(2\theta)\, S_{ij} + \tfrac{1}{2} \sin(2\theta)\, (S_{ii} - S_{jj})

Set this equal to 0 and rearrange:

: \tan(2\theta) = \frac{2 S_{ij}}{S_{jj} - S_{ii}}

If S_{jj} = S_{ii}:

: \theta = \frac{\pi}{4}

In order to optimize this effect, S_{ij} should be the off-diagonal element with the largest absolute value, called the ''pivot''.

The Jacobi eigenvalue method repeatedly performs rotations until the matrix becomes almost diagonal. Then the elements in the diagonal are approximations of the (real) eigenvalues of S.
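To make the update concrete, here is a minimal Python sketch (an illustration added for this rewrite, not part of the original article) that zeroes a chosen pivot S_{ij} by computing \theta from the formula above and applying the rotation to the full matrix. The function name is a made-up convenience; a practical implementation would not form G explicitly.

 import numpy as np
 
 def jacobi_rotation(S, i, j):
     """Return S' = G S G^T, where G = G(i, j, theta) is the Givens
     rotation chosen so that the pivot entry S'[i, j] becomes zero.
 
     Illustrative sketch; assumes S is real symmetric and i != j.
     A practical implementation updates only the affected rows and
     columns in O(n) time instead of forming G (see Cost below).
     """
     if S[i, i] == S[j, j]:
         theta = np.pi / 4                 # special case from the text
     else:
         # tan(2*theta) = 2*S_ij / (S_jj - S_ii)
         theta = 0.5 * np.arctan2(2.0 * S[i, j], S[j, j] - S[i, i])
     c, s = np.cos(theta), np.sin(theta)
     G = np.eye(S.shape[0])
     G[i, i] = G[j, j] = c
     G[i, j], G[j, i] = -s, s
     return G @ S @ G.T

With this sign convention the rotated pivot is (c^2 - s^2) S_{ij} + s c (S_{ii} - S_{jj}), matching the entry formulas above, so the chosen \theta annihilates it.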


Convergence

If p = S_{kl} is a pivot element, then by definition |S_{ij}| \le |p| for 1 \le i, j \le n, i \ne j. Let \Gamma(S)^2 denote the sum of squares of all off-diagonal entries of S. Since S has exactly 2N := n(n-1) off-diagonal elements, we have p^2 \le \Gamma(S)^2 \le 2Np^2, or 2p^2 \ge \Gamma(S)^2 / N. Now \Gamma(S^J)^2 = \Gamma(S)^2 - 2p^2. This implies \Gamma(S^J)^2 \le (1 - 1/N)\, \Gamma(S)^2, or \Gamma(S^J) \le (1 - 1/N)^{1/2}\, \Gamma(S); i.e. the sequence of Jacobi rotations converges at least linearly by a factor (1 - 1/N)^{1/2} to a diagonal matrix.

A number of N Jacobi rotations is called a sweep; let S^\sigma denote the result. The previous estimate yields

: \Gamma(S^\sigma) \le \left(1 - \frac{1}{N}\right)^{N/2} \Gamma(S),

i.e. the sequence of sweeps converges at least linearly with a factor \approx e^{-1/2}.

However, the following result of Schönhage yields locally quadratic convergence. To this end let S have m distinct eigenvalues \lambda_1, \dots, \lambda_m with multiplicities \nu_1, \dots, \nu_m, and let d > 0 be the smallest distance of two different eigenvalues. Let us call a number of

: N_S := \frac{n(n-1)}{2} - \sum_{\mu = 1}^{m} \frac{1}{2}\, \nu_\mu (\nu_\mu - 1) \le N

Jacobi rotations a Schönhage-sweep. If S^s denotes the result, then

: \Gamma(S^s) \le \sqrt{\frac{n}{2} - 1} \left(\frac{\gamma^2}{d - 2\gamma}\right), \quad \gamma := \Gamma(S).

Thus convergence becomes quadratic as soon as

: \Gamma(S) < \frac{d}{2 + \sqrt{\frac{n}{2} - 1}}.
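As a quick numerical illustration of the identity \Gamma(S^J)^2 = \Gamma(S)^2 - 2p^2, the following Python snippet (an illustrative check, not from the original text; it reuses the hypothetical jacobi_rotation sketch from the Description section) measures the off-diagonal norm before and after one rotation on the largest pivot.

 import numpy as np
 
 def off_norm_sq(S):
     """Gamma(S)^2: sum of squares of all off-diagonal entries."""
     return np.sum(S * S) - np.sum(np.diag(S) ** 2)
 
 rng = np.random.default_rng(0)
 A = rng.standard_normal((5, 5))
 S = (A + A.T) / 2                  # random symmetric test matrix
 
 # Pivot: off-diagonal entry with the largest absolute value.
 T = np.abs(S)
 np.fill_diagonal(T, 0.0)
 i, j = np.unravel_index(np.argmax(T), T.shape)
 p = S[i, j]
 
 S1 = jacobi_rotation(S, i, j)      # sketch defined under Description
 print(off_norm_sq(S) - 2 * p**2)   # these two numbers agree
 print(off_norm_sq(S1))             # up to rounding error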


Cost

Each Jacobi rotation can be done in O(''n'') steps when the pivot element ''p'' is known. However, the search for ''p'' requires inspection of all ''N'' ≈ ½''n''<sup>2</sup> off-diagonal elements. We can reduce this to O(''n'') complexity too if we introduce an additional index array m_1, \dots, m_{n-1} with the property that m_i is the index of the largest element in row ''i'' (''i'' = 1, ..., ''n'' − 1) of the current ''S''. Then the indices of the pivot (''k'', ''l'') must be one of the pairs (i, m_i).

The updating of the index array can also be done in O(''n'') average-case complexity. First, the maximum entry in the updated rows ''k'' and ''l'' can be found in O(''n'') steps. In the other rows ''i'', only the entries in columns ''k'' and ''l'' change. Looping over these rows, if m_i is neither ''k'' nor ''l'', it suffices to compare the old maximum at m_i to the new entries and update m_i if necessary. If m_i equals ''k'' or ''l'' and the corresponding entry decreased during the update, the maximum over row ''i'' has to be found from scratch in O(''n'') complexity. However, this will happen on average only once per rotation. A sketch of this pivot search appears after this paragraph.

Thus, each rotation has O(''n'') and one sweep O(''n''<sup>3</sup>) average-case complexity, which is equivalent to one matrix multiplication. Additionally, the m_i must be initialized before the process starts, which can be done in ''n''<sup>2</sup> steps.

Typically the Jacobi method converges within numerical precision after a small number of sweeps. Note that multiple eigenvalues reduce the number of iterations since N_S < N.
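A minimal Python sketch of this bookkeeping (function names and structure are my own; the article's pseudocode below is the authoritative version): each row remembers the column of its largest off-diagonal entry, so the global pivot is found by comparing only one candidate per row.

 import numpy as np
 
 def max_index(S, k):
     """Column of the largest off-diagonal element in row k, right of the diagonal."""
     return k + 1 + int(np.argmax(np.abs(S[k, k + 1:])))
 
 def find_pivot(S, m):
     """Given per-row maxima m[i] = max_index(S, i) for i = 0..n-2,
     return the pivot indices (k, l) using only O(n) comparisons."""
     k = max(range(len(m)), key=lambda i: abs(S[i, m[i]]))
     return k, m[k]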


Algorithm

The following algorithm is a description of the Jacobi method in math-like notation. It calculates a vector ''e'' which contains the eigenvalues and a matrix ''E'' which contains the corresponding eigenvectors; i.e., e_i is an eigenvalue and the column E_i an orthonormal eigenvector for e_i, ''i'' = 1, ..., ''n''.

 procedure jacobi(S ∈ R^(n×n); out e ∈ R^n; out E ∈ R^(n×n))
   var i, k, l, m, state ∈ N
       s, c, t, p, y, d, r ∈ R
       ind ∈ N^n
       changed ∈ L^n
 
   function maxind(k ∈ N) ∈ N           ! index of largest off-diagonal element in row k
     m := k+1
     for i := k+2 to n do
       if |S_ki| > |S_km| then m := i endif
     endfor
     return m
   endfunc
 
   procedure update(k ∈ N; t ∈ R)       ! update e_k and its status
     y := e_k; e_k := y+t
     if changed_k and (y = e_k) then changed_k := false; state := state−1
     elsif (not changed_k) and (y ≠ e_k) then changed_k := true; state := state+1
     endif
   endproc
 
   procedure rotate(k, l, i, j ∈ N)     ! perform rotation of S_kl, S_ij
     ┌    ┐    ┌      ┐ ┌    ┐
     │S_kl│    │c  −s │ │S_kl│
     │    │ := │      │ │    │
     │S_ij│    │s   c │ │S_ij│
     └    ┘    └      ┘ └    ┘
   endproc
 
   ! init e, E, and arrays ind, changed
   E := I; state := n
   for k := 1 to n do ind_k := maxind(k); e_k := S_kk; changed_k := true endfor
 
   while state ≠ 0 do                   ! next rotation
     m := 1                             ! find index (k,l) of pivot p
     for k := 2 to n−1 do
       if |S_k ind_k| > |S_m ind_m| then m := k endif
     endfor
     k := m; l := ind_m; p := S_kl
 
     ! calculate c = cos φ, s = sin φ
     y := (e_l − e_k)/2; d := |y| + √(p² + y²)
     r := √(p² + d²); c := d/r; s := p/r; t := p²/d
     if y < 0 then s := −s; t := −t endif
 
     S_kl := 0.0; update(k, −t); update(l, t)
 
     ! rotate rows and columns k and l
     for i := 1 to k−1 do rotate(i, k, i, l) endfor
     for i := k+1 to l−1 do rotate(k, i, i, l) endfor
     for i := l+1 to n do rotate(k, i, l, i) endfor
 
     ! rotate eigenvectors
     for i := 1 to n do
       ┌    ┐    ┌      ┐ ┌    ┐
       │E_ik│    │c  −s │ │E_ik│
       │    │ := │      │ │    │
       │E_il│    │s   c │ │E_il│
       └    ┘    └      ┘ └    ┘
     endfor
 
     ! update all potentially changed ind_i
     for i := 1 to n do ind_i := maxind(i) endfor
   loop
 endproc
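For readers who prefer executable code, here is a compact Python transcription of the procedure above. It is a sketch, not the article's reference implementation: the function name and tolerance parameter are my own, and the ''state''-counter termination test is replaced by a tolerance check on the pivot (which also avoids the issue described in note 6 of the Notes section below). With this transcription the columns of E are eigenvectors, so S ≈ E · Diag(e) · E^T.

 import numpy as np
 
 def jacobi_eigen(S, tol=1e-12):
     """Classical Jacobi eigenvalue algorithm for a real symmetric matrix.
 
     Returns (e, E) with S ≈ E @ np.diag(e) @ E.T, the columns of E
     being orthonormal eigenvectors. Requires n >= 2.
     """
     S = S.astype(float).copy()
     n = S.shape[0]
     E = np.eye(n)
     e = np.diag(S).copy()
 
     def maxind(k):
         # index of largest off-diagonal element in row k, right of the diagonal
         return k + 1 + int(np.argmax(np.abs(S[k, k + 1:])))
 
     ind = [maxind(i) for i in range(n - 1)]
     while True:
         # find pivot (k, l): largest |S[k, l]| among the per-row maxima
         k = max(range(n - 1), key=lambda i: abs(S[i, ind[i]]))
         l = ind[k]
         p = S[k, l]
         if abs(p) < tol:
             break
         # rotation parameters, exactly as in the pseudocode
         y = (e[l] - e[k]) / 2.0
         d = abs(y) + np.hypot(p, y)
         r = np.hypot(p, d)
         c, s, t = d / r, p / r, p * p / d
         if y < 0:
             s, t = -s, -t
         S[k, l] = 0.0
         e[k] -= t
         e[l] += t
         # rotate rows and columns k and l (upper triangle only)
         for i in range(k):
             S[i, k], S[i, l] = c * S[i, k] - s * S[i, l], s * S[i, k] + c * S[i, l]
         for i in range(k + 1, l):
             S[k, i], S[i, l] = c * S[k, i] - s * S[i, l], s * S[k, i] + c * S[i, l]
         for i in range(l + 1, n):
             S[k, i], S[l, i] = c * S[k, i] - s * S[l, i], s * S[k, i] + c * S[l, i]
         # rotate eigenvectors
         for i in range(n):
             E[i, k], E[i, l] = c * E[i, k] - s * E[i, l], s * E[i, k] + c * E[i, l]
         # refresh the row maxima, as in the pseudocode
         ind = [maxind(i) for i in range(n - 1)]
     return e, E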


Notes

1. The logical array ''changed'' holds the status of each eigenvalue. If the numerical value of e_k or e_l changes during an iteration, the corresponding component of ''changed'' is set to ''true'', otherwise to ''false''. The integer ''state'' counts the number of components of ''changed'' which have the value ''true''. Iteration stops as soon as ''state'' = 0. This means that none of the approximations e_1, \dots, e_n has recently changed its value, and thus it is not very likely that this will happen if iteration continues. Here it is assumed that floating point operations are optimally rounded to the nearest floating point number.

2. The upper triangle of the matrix ''S'' is destroyed while the lower triangle and the diagonal are unchanged. Thus it is possible to restore ''S'' if necessary according to

 for k := 1 to n−1 do          ! restore matrix S
   for l := k+1 to n do
     S_kl := S_lk
   endfor
 endfor

3. The eigenvalues are not necessarily in descending order. This can be achieved by a simple sorting algorithm:

 for k := 1 to n−1 do
   m := k
   for l := k+1 to n do
     if e_l > e_m then m := l endif
   endfor
   if k ≠ m then
     swap e_m, e_k
     swap E_m, E_k
   endif
 endfor

4. The algorithm is written using matrix notation (1-based arrays instead of 0-based).

5. When implementing the algorithm, the part specified using matrix notation must be performed simultaneously.

6. This implementation does not correctly account for the case in which one dimension is an independent subspace. For example, if given a diagonal matrix, the above implementation will never terminate, as none of the eigenvalues will change. Hence, in real implementations, extra logic must be added to account for this case.


Example

Let

: S = \begin{pmatrix} 4 & -30 & 60 & -35 \\ -30 & 300 & -675 & 420 \\ 60 & -675 & 1620 & -1050 \\ -35 & 420 & -1050 & 700 \end{pmatrix}

Then ''jacobi'' produces the following eigenvalues and eigenvectors after 3 sweeps (19 iterations):

: e_1 = 2585.25381092892231

: E_1 = \begin{pmatrix} 0.0291933231647860588 \\ -0.328712055763188997 \\ 0.791411145833126331 \\ -0.514552749997152907 \end{pmatrix}

: e_2 = 37.1014913651276582

: E_2 = \begin{pmatrix} -0.179186290535454826 \\ 0.741917790628453435 \\ -0.100228136947192199 \\ -0.638282528193614892 \end{pmatrix}

: e_3 = 1.4780548447781369

: E_3 = \begin{pmatrix} -0.582075699497237650 \\ 0.370502185067093058 \\ 0.509578634501799626 \\ 0.514048272222164294 \end{pmatrix}

: e_4 = 0.1666428611718905

: E_4 = \begin{pmatrix} 0.792608291163763585 \\ 0.451923120901599794 \\ 0.322416398581824992 \\ 0.252161169688241933 \end{pmatrix}
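These values can be cross-checked against a library routine. The short NumPy snippet below (an illustrative check, not from the article) compares them with numpy.linalg.eigh, which returns eigenvalues in ascending order.

 import numpy as np
 
 S = np.array([[  4.,  -30.,    60.,   -35.],
               [-30.,  300.,  -675.,   420.],
               [ 60., -675.,  1620., -1050.],
               [-35.,  420., -1050.,   700.]])
 
 w, V = np.linalg.eigh(S)   # ascending eigenvalues, eigenvectors as columns
 print(w[::-1])             # ≈ 2585.2538, 37.1015, 1.4781, 0.1666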


Applications for real symmetric matrices

When the eigenvalues (and eigenvectors) of a symmetric matrix are known, the following values are easily calculated.

;Singular values
:The singular values of a (square) matrix ''A'' are the square roots of the (non-negative) eigenvalues of A^T A. In case of a symmetric matrix ''S'' we have S^T S = S^2, hence the singular values of ''S'' are the absolute values of the eigenvalues of ''S''.

;2-norm and spectral radius
:The 2-norm of a matrix ''A'' is the norm based on the Euclidean vector norm, i.e. the largest value \|Ax\|_2 when x runs through all vectors with \|x\|_2 = 1. It is the largest singular value of ''A''. In case of a symmetric matrix it is the largest absolute value of its eigenvalues and thus equal to its spectral radius.

;Condition number
:The condition number of a nonsingular matrix ''A'' is defined as \mbox{cond}(A) = \|A\|_2 \, \|A^{-1}\|_2. In case of a symmetric matrix it is the absolute value of the quotient of the largest and smallest eigenvalue. Matrices with large condition numbers can cause numerically unstable results: small perturbations can result in large errors. Hilbert matrices are the most famous ill-conditioned matrices. For example, the fourth-order Hilbert matrix has a condition number of 15514, while for order 8 it is 2.7 × 10^8.

;Rank
:A matrix ''A'' has rank ''r'' if it has ''r'' columns that are linearly independent while the remaining columns are linearly dependent on these. Equivalently, ''r'' is the dimension of the range of ''A''. Furthermore, it is the number of nonzero singular values.
:In case of a symmetric matrix, ''r'' is the number of nonzero eigenvalues. Unfortunately, because of rounding errors, numerical approximations of zero eigenvalues may not be zero (it may also happen that a numerical approximation is zero while the true value is not). Thus one can only calculate the ''numerical'' rank by making a decision which of the eigenvalues are close enough to zero.

;Pseudo-inverse
:The pseudo-inverse of a matrix ''A'' is the unique matrix X = A^+ for which ''AX'' and ''XA'' are symmetric and for which ''AXA'' = ''A'', ''XAX'' = ''X'' holds. If ''A'' is nonsingular, then A^+ = A^{-1}.
:When procedure jacobi(S, e, E) is called, then the relation S = E \, \mbox{Diag}(e) \, E^T holds, where Diag(''e'') denotes the diagonal matrix with vector ''e'' on the diagonal. Let e^+ denote the vector where e_i is replaced by 1/e_i if e_i \ne 0 and by 0 if e_i is (numerically close to) zero. Since matrix ''E'' is orthogonal, it follows that the pseudo-inverse of S is given by S^+ = E \, \mbox{Diag}(e^+) \, E^T.

;Least squares solution
:If matrix ''A'' does not have full rank, there may not be a solution of the linear system ''Ax'' = ''b''. However, one can look for a vector ''x'' for which \|Ax - b\|_2 is minimal. The solution is x = A^+ b. In case of a symmetric matrix ''S'' as before, one has x = S^+ b = E \, \mbox{Diag}(e^+) \, E^T b.

;Matrix exponential
:From S = E \, \mbox{Diag}(e) \, E^T one finds \exp S = E \, \mbox{Diag}(\exp e) \, E^T, where exp ''e'' is the vector where e_i is replaced by \exp e_i. In the same way, ''f''(''S'') can be calculated in an obvious way for any (analytic) function ''f''.

;Linear differential equations
:The differential equation ''x''′ = ''Ax'', ''x''(0) = ''a'' has the solution ''x''(''t'') = exp(''t A'') ''a''. For a symmetric matrix ''S'', it follows that x(t) = E \, \mbox{Diag}(\exp(t e)) \, E^T a.
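As an illustration, the following Python sketch (the function name and tolerance are my own choices; it uses numpy.linalg.eigh for brevity, though the jacobi_eigen sketch above would serve equally) derives several of these quantities from a single eigendecomposition.

 import numpy as np
 
 def symmetric_applications(S, tol=1e-10):
     """Singular values, 2-norm, condition number, numerical rank and
     pseudo-inverse of a nonzero symmetric matrix S, all from one
     eigendecomposition. The cutoff for 'numerically zero' is a choice,
     not prescribed by the article."""
     e, E = np.linalg.eigh(S)              # S = E @ np.diag(e) @ E.T
     sing = np.abs(e)                      # singular values of S
     norm2 = sing.max()                    # 2-norm = spectral radius
     nonzero = sing > tol * norm2          # 'numerically nonzero' eigenvalues
     rank = int(np.count_nonzero(nonzero))
     cond = norm2 / sing[nonzero].min()    # over the numerically nonzero part
     e_plus = np.divide(1.0, e, out=np.zeros_like(e), where=nonzero)
     S_plus = E @ np.diag(e_plus) @ E.T    # pseudo-inverse of S
     return sing, norm2, cond, rank, S_plus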