QR Algorithm

In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.


The practical QR algorithm

Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A_0 := A. At the k-th step (starting with k = 0), we compute the QR decomposition A_k = Q_k R_k, where Q_k is an orthogonal matrix (i.e., Q_k^\mathrm{T} Q_k = I) and R_k is an upper triangular matrix. We then form A_{k+1} = R_k Q_k. Note that
A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^\mathrm{T} A_k Q_k,
so all the A_k are similar and hence have the same eigenvalues. The algorithm is numerically stable because it proceeds by ''orthogonal'' similarity transforms.

Under certain conditions, the matrices A_k converge to a triangular matrix, the Schur form of A. The eigenvalues of a triangular matrix are listed on the diagonal, so the eigenvalue problem is then solved. In testing for convergence it is impractical to require exact zeros, but the Gershgorin circle theorem provides a bound on the error. If the matrices converge, the eigenvalues along the diagonal appear according to their geometric multiplicity. To guarantee convergence, A must be a symmetric matrix, and for every nonzero eigenvalue \lambda there must not be a corresponding eigenvalue -\lambda. Because a single QR iteration costs \mathcal{O}(n^3) arithmetic operations and the convergence is only linear, this basic form of the QR algorithm is very expensive, especially since it is not even guaranteed to converge.
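As an illustration of the basic iteration just described, the following is a minimal sketch in NumPy; the function name and the fixed step count are illustrative choices, not part of the algorithm as usually stated:

```python
# A minimal sketch of the basic, unshifted QR iteration: factor A_k = Q_k R_k,
# then form A_{k+1} = R_k Q_k, an orthogonal similarity transform of A_k.
import numpy as np

def qr_iteration(A, num_steps=200):
    Ak = np.array(A, dtype=float)
    for _ in range(num_steps):
        Q, R = np.linalg.qr(Ak)   # QR decomposition of the current iterate
        Ak = R @ Q                # multiply the factors in reverse order
    return Ak                     # (near-)triangular; eigenvalues on the diagonal

# Example: a symmetric matrix, for which the iterates approach a diagonal matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.diag(qr_iteration(A)))   # approximate eigenvalues
print(np.linalg.eigvalsh(A))      # reference values for comparison
```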


Using Hessenberg form

In the above crude form the iterations are relatively expensive. This can be mitigated by first bringing the matrix to upper Hessenberg form (which costs \tfrac{10}{3} n^3 + \mathcal{O}(n^2) arithmetic operations using a technique based on Householder reduction), with a finite sequence of orthogonal similarity transforms, somewhat like a two-sided QR decomposition. (For QR decomposition, the Householder reflectors are multiplied only on the left, but for the Hessenberg case they are multiplied on both left and right.) Determining the QR decomposition of an upper Hessenberg matrix costs 6 n^2 + \mathcal{O}(n) arithmetic operations. Moreover, because the Hessenberg form is already nearly upper-triangular (it has just one nonzero entry below the main diagonal in each column), using it as a starting point reduces the number of steps required for convergence of the QR algorithm. If the original matrix is symmetric, then the upper Hessenberg matrix is also symmetric and thus tridiagonal, and so are all the A_k. In this case reaching Hessenberg form costs \tfrac{4}{3} n^3 + \mathcal{O}(n^2) arithmetic operations using a technique based on Householder reduction. Determining the QR decomposition of a symmetric tridiagonal matrix costs \mathcal{O}(n) operations.
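The Householder-based reduction can be sketched as follows; this is an illustrative implementation with assumed conventions (in practice one would call a library routine), applying each reflector from both the left and the right so the result stays similar to the input:

```python
# A sketch of reduction to upper Hessenberg form by Householder reflectors.
# Each reflector zeroes the entries below the first subdiagonal of one column.
import numpy as np

def to_hessenberg(A):
    H = np.array(A, dtype=float)
    n = H.shape[0]
    for k in range(n - 2):
        x = H[k+1:, k]
        v = x.copy()
        v[0] += (1.0 if x[0] >= 0 else -1.0) * np.linalg.norm(x)
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue                      # column already in the desired form
        v /= norm_v
        # Apply P = I - 2 v v^T from the left (rows k+1..n-1) ...
        H[k+1:, k:] -= 2.0 * np.outer(v, v @ H[k+1:, k:])
        # ... and from the right (columns k+1..n-1): an orthogonal similarity.
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
    return H
```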


Iteration phase

If a Hessenberg matrix A has an element a_{k,k-1} = 0 for some k, i.e., if one of the elements just below the diagonal is in fact zero, then it decomposes into blocks whose eigenproblems may be solved separately; an eigenvalue is either an eigenvalue of the submatrix of the first k-1 rows and columns, or an eigenvalue of the submatrix of the remaining rows and columns. The purpose of a QR iteration step is to shrink one of these a_{k,k-1} elements so that effectively a small block along the diagonal is split off from the bulk of the matrix. In the case of a real eigenvalue that is usually the 1 \times 1 block in the lower right corner (in which case the element a_{n,n} holds that eigenvalue), whereas in the case of a pair of complex conjugate eigenvalues it is the 2 \times 2 block in the lower right corner. The rate of convergence depends on the separation between eigenvalues, so a practical algorithm will use shifts, either explicit or implicit, to increase separation and accelerate convergence. A typical symmetric QR algorithm isolates each eigenvalue (then reduces the size of the matrix) with only one or two iterations, making it efficient as well as robust.
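In floating-point arithmetic such a decoupling is detected with a tolerance test rather than an exact zero; the sketch below uses one common (assumed, implementation-dependent) criterion that treats a subdiagonal entry as zero when it is negligible relative to its diagonal neighbours:

```python
# A hedged sketch of a deflation test on a Hessenberg matrix H: find indices k
# where the subdiagonal entry H[k, k-1] is negligible, so the eigenproblem
# splits into independent diagonal blocks.
import numpy as np

def deflation_points(H, eps=1e-12):
    n = H.shape[0]
    points = []
    for k in range(1, n):
        if abs(H[k, k-1]) <= eps * (abs(H[k-1, k-1]) + abs(H[k, k])):
            points.append(k)   # rows/cols 0..k-1 decouple from rows/cols k..n-1
    return points
```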


A single iteration with explicit shift

The steps of a QR iteration with explicit shift on a real Hessenberg matrix A are:

1. Pick a shift \mu and subtract it from all diagonal elements, producing the matrix A - \mu I. A basic strategy is to use \mu = a_{n,n}, but there are more refined strategies that further accelerate convergence. The idea is that \mu should be close to an eigenvalue, since making this shift will accelerate convergence to that eigenvalue.

2. Perform a sequence of Givens rotations G_1, G_2, \dots, G_{n-1} on A - \mu I, where G_i acts on rows i and i+1, and G_i is chosen to zero out position (i+1,i) of G_{i-1} \dotsb G_1 (A - \mu I). This produces the upper triangular matrix R = G_{n-1} \dotsb G_1 (A - \mu I). The orthogonal factor Q would be G_1^\mathrm{T} G_2^\mathrm{T} \dotsb G_{n-1}^\mathrm{T}, but it is neither necessary nor efficient to produce it explicitly.

3. Now multiply R by the Givens matrices G_1^\mathrm{T}, G_2^\mathrm{T}, \dots, G_{n-1}^\mathrm{T} on the right, where G_i^\mathrm{T} instead acts on columns i and i+1. This produces the matrix R Q = R G_1^\mathrm{T} G_2^\mathrm{T} \dotsb G_{n-1}^\mathrm{T}, which is again in Hessenberg form.

4. Finally undo the shift by adding \mu to all diagonal entries. The result is A' = RQ + \mu I. Since Q commutes with I, we have A' = Q^\mathrm{T} (A - \mu I) Q + \mu I = Q^\mathrm{T} A Q.

The purpose of the shift is to change which Givens rotations are chosen. In more detail, the structure of one of these G_i matrices is
G_i = \begin{pmatrix} I & 0 & 0 & 0 \\ 0 & c & -s & 0 \\ 0 & s & c & 0 \\ 0 & 0 & 0 & I \end{pmatrix}
where the I in the upper left corner is an (i-1) \times (i-1) identity matrix, and the two scalars c = \cos\theta and s = \sin\theta are determined by whatever rotation angle \theta is appropriate for zeroing out position (i+1,i). It is not necessary to exhibit \theta; the factors c and s can be determined directly from the elements of the matrix that G_i should act on. Nor is it necessary to produce the whole matrix; multiplication (from the left) by G_i only affects rows i and i+1, so it is easier to just update those two rows in place. Likewise, for the Step 3 multiplication by G_i^\mathrm{T} from the right, it is sufficient to remember i, c, and s.

If using the simple \mu = a_{n,n} strategy, then at the beginning of Step 2 we have the matrix
A - a_{n,n} I = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & 0 \end{pmatrix}
where \times denotes “could be whatever”. The first Givens rotation G_1 zeroes out the (2,1) position of this, producing
G_1 (A - a_{n,n} I) = \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & 0 \end{pmatrix}.
Each new rotation zeroes out another subdiagonal element, thus increasing the number of known zeroes until we are at
H = G_{n-2} \dotsb G_1 (A - a_{n,n} I) = \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & h_{n-1,n-1} & h_{n-1,n} \\ 0 & 0 & 0 & h_{n,n-1} & 0 \end{pmatrix}.
The final rotation G_{n-1} has (c,s) chosen so that s h_{n-1,n-1} + c h_{n,n-1} = 0. If |h_{n-1,n-1}| \gg |h_{n,n-1}|, as is typically the case when we approach convergence, then c \approx 1 and |s| \ll 1. Making this rotation produces
R = G_{n-1} G_{n-2} \dotsb G_1 (A - a_{n,n} I) = \begin{pmatrix} \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & c h_{n-1,n} \\ 0 & 0 & 0 & 0 & s h_{n-1,n} \end{pmatrix},
which is our upper triangular matrix. But now we reach Step 3, and need to start rotating data between columns.
The first rotation acts on columns 1 and 2, producing
R G_1^\mathrm{T} = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & \times & c h_{n-1,n} \\ 0 & 0 & 0 & 0 & s h_{n-1,n} \end{pmatrix}.
The expected pattern is that each rotation moves some nonzero value from the diagonal out to the subdiagonal, returning the matrix to Hessenberg form. This ends at
R G_1^\mathrm{T} \dotsb G_{n-1}^\mathrm{T} = \begin{pmatrix} \times & \times & \times & \times & \times \\ \times & \times & \times & \times & \times \\ 0 & \times & \times & \times & \times \\ 0 & 0 & \times & \times & \times \\ 0 & 0 & 0 & -s^2 h_{n-1,n} & cs h_{n-1,n} \end{pmatrix}.
Algebraically the form is unchanged, but numerically the element in position (n,n-1) has gotten a lot closer to zero: there used to be a factor s gap between it and the diagonal element above, but now the gap is more like a factor s^2, and another iteration would make it a factor s^4; we have quadratic convergence. Practically that means O(1) iterations per eigenvalue suffice for convergence, and thus overall we can complete in O(n) QR steps, each of which requires a mere O(n^2) arithmetic operations (or as little as O(n) operations, in the case that A is symmetric).
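The following sketch carries out one such explicitly shifted step on an upper Hessenberg matrix with the simple shift \mu = a_{n,n}; it stores each rotation as (i, c, s) rather than forming Q, as suggested above. Function and variable names are illustrative, not from a particular library.

```python
# One explicitly shifted QR step on an upper Hessenberg matrix H, following
# the four steps above: shift, reduce to triangular form with Givens rotations
# on pairs of rows, apply the transposed rotations to pairs of columns, unshift.
import numpy as np

def givens(a, b):
    """Return (c, s) with  s*a + c*b = 0  and  c^2 + s^2 = 1."""
    r = np.hypot(a, b)
    if r == 0.0:
        return 1.0, 0.0
    return a / r, -b / r

def shifted_qr_step(H):
    H = np.array(H, dtype=float)
    n = H.shape[0]
    mu = H[-1, -1]                       # Step 1: the simple shift mu = a_{n,n}
    H -= mu * np.eye(n)
    rotations = []
    for i in range(n - 1):               # Step 2: rows -> upper triangular R
        c, s = givens(H[i, i], H[i + 1, i])
        G = np.array([[c, -s], [s, c]])
        H[i:i + 2, i:] = G @ H[i:i + 2, i:]
        rotations.append((i, c, s))
    for i, c, s in rotations:            # Step 3: columns -> Hessenberg again
        G = np.array([[c, -s], [s, c]])
        H[:i + 2, i:i + 2] = H[:i + 2, i:i + 2] @ G.T
    H += mu * np.eye(n)                  # Step 4: undo the shift
    return H
```

Repeating shifted_qr_step drives the bottom subdiagonal entry towards zero, after which the last row and column can be deflated and the process continued on the smaller matrix.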


Visualization

The basic QR algorithm can be visualized in the case where ''A'' is a positive-definite symmetric matrix. In that case, ''A'' can be depicted as an ellipse in 2 dimensions or an ellipsoid in higher dimensions. The relationship between the input to the algorithm and a single iteration can then be depicted as in Figure 1. Note that the LR algorithm is depicted alongside the QR algorithm.

A single iteration causes the ellipse to tilt or "fall" towards the x-axis. In the event that the large semi-axis of the ellipse is parallel to the x-axis, one iteration of QR does nothing. Another situation where the algorithm "does nothing" is when the large semi-axis is parallel to the y-axis instead of the x-axis. In that event, the ellipse can be thought of as balancing precariously, unable to fall in either direction. In both situations, the matrix is diagonal. A situation where an iteration of the algorithm "does nothing" is called a fixed point. The strategy employed by the algorithm is iteration towards a fixed point. Observe that one fixed point is stable while the other is unstable. If the ellipse were tilted away from the unstable fixed point by a very small amount, one iteration of QR would cause the ellipse to tilt away from the fixed point rather than towards it. Eventually, though, the algorithm would converge to a different fixed point, but it would take a long time.


Finding eigenvalues versus finding eigenvectors

It is worth pointing out that finding even a single eigenvector of a symmetric matrix is not computable (in exact real arithmetic, according to the definitions in computable analysis). This difficulty exists whenever the multiplicities of a matrix's eigenvalues are not knowable. On the other hand, the same problem does not exist for finding eigenvalues: the eigenvalues of a matrix are always computable. We will now discuss how these difficulties manifest in the basic QR algorithm, as illustrated in Figure 2.

Recall that the ellipses represent positive-definite symmetric matrices. As the two eigenvalues of the input matrix approach each other, the input ellipse changes into a circle. A circle corresponds to a multiple of the identity matrix, and a near-circle corresponds to a near-multiple of the identity matrix, whose eigenvalues are nearly equal to the diagonal entries of the matrix. Therefore, the problem of approximately finding the eigenvalues is easy in that case. But notice what happens to the semi-axes of the ellipses: an iteration of QR (or LR) tilts the semi-axes less and less as the input ellipse gets closer to being a circle. The eigenvectors can only be known when the semi-axes are parallel to the x-axis and y-axis, and the number of iterations needed to achieve near-parallelism increases without bound as the input ellipse becomes more circular.

While it may be impossible to compute the eigendecomposition of an arbitrary symmetric matrix, it is always possible to perturb the matrix by an arbitrarily small amount and compute the eigendecomposition of the resulting matrix. In the case when the matrix is depicted as a near-circle, it can be replaced with one whose depiction is a perfect circle. In that case, the matrix is a multiple of the identity matrix, and its eigendecomposition is immediate. Be aware, though, that the resulting eigenbasis can be quite far from the original eigenbasis.


Speeding up: Shifting and deflation

The slowdown when the ellipse gets more circular has a converse: it turns out that when the ellipse gets more stretched, and less circular, the rotation of the ellipse becomes faster. Such a stretch can be induced when the matrix M which the ellipse represents is replaced with M - \lambda I, where \lambda is approximately the smallest eigenvalue of M. In this case, the ratio of the two semi-axes of the ellipse approaches \infty. In higher dimensions, shifting like this makes the length of the smallest semi-axis of an ellipsoid small relative to the other semi-axes, which speeds up convergence to the smallest eigenvalue, but does not speed up convergence to the other eigenvalues. The shift becomes useless once the smallest eigenvalue is fully determined, so the matrix must then be ''deflated'', which simply means removing its last row and column. The issue with the unstable fixed point also needs to be addressed. The shifting heuristic is often designed to deal with this problem as well: practical shifts are often discontinuous and randomised. Wilkinson's shift, which is well suited for symmetric matrices like the ones we are visualising, is in particular discontinuous.
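As a concrete illustration of a practical shift, the sketch below computes Wilkinson's shift from the trailing 2 \times 2 block of a symmetric tridiagonal matrix, following the standard textbook formula; the details should be read as an assumption for illustration rather than as this article's prescription.

```python
# A hedged sketch of Wilkinson's shift for a symmetric tridiagonal matrix T:
# the shift is the eigenvalue of the trailing 2x2 block that is closer to the
# last diagonal entry.
import numpy as np

def wilkinson_shift(T):
    a = T[-2, -2]          # next-to-last diagonal entry
    c = T[-1, -1]          # last diagonal entry
    b = T[-1, -2]          # last subdiagonal entry
    d = 0.5 * (a - c)
    if d == 0.0 and b == 0.0:
        return c
    sign = 1.0 if d >= 0.0 else -1.0
    return c - sign * b * b / (abs(d) + np.hypot(d, b))
```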


The implicit QR algorithm

In modern computational practice, the QR algorithm is performed in an implicit version which makes it easier to introduce multiple shifts. The matrix is first brought to upper Hessenberg form A_0 = Q A Q^\mathrm{T} as in the explicit version; then, at each step, the first column of A_k is transformed via a small-size Householder similarity transformation to the first column of p(A_k) (more precisely, p(A_k)e_1), where p(A_k), of degree r, is the polynomial that defines the shifting strategy (often p(x) = (x - \lambda)(x - \bar\lambda), where \lambda and \bar\lambda are the two eigenvalues of the trailing 2 \times 2 principal submatrix of A_k; this is the so-called ''implicit double shift''). Then successive Householder transformations of size r+1 are performed in order to return the working matrix A_k to upper Hessenberg form. This operation is known as ''bulge chasing'', due to the peculiar shape of the non-zero entries of the matrix along the steps of the algorithm. As in the first version, deflation is performed as soon as one of the sub-diagonal entries of A_k is sufficiently small.
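To make the double-shift idea concrete, the sketch below computes the first column of p(A_k) = A_k^2 - s A_k + t I, where s and t are the trace and determinant of the trailing 2 \times 2 block; for a Hessenberg matrix only its first three entries are nonzero, which is what makes the small Householder transformation and the subsequent bulge chasing cheap. This is an illustrative sketch of the standard construction, not a full implicit QR implementation.

```python
import numpy as np

def double_shift_first_column(H):
    """First column of p(H) = H^2 - s*H + t*I for an upper Hessenberg H."""
    n = H.shape[0]
    # s = trace, t = determinant of the trailing 2x2 principal submatrix
    s = H[n-2, n-2] + H[n-1, n-1]
    t = H[n-2, n-2] * H[n-1, n-1] - H[n-2, n-1] * H[n-1, n-2]
    e1 = np.zeros(n); e1[0] = 1.0
    x = H @ (H @ e1) - s * (H @ e1) + t * e1   # p(H) e_1 (O(1) nonzeros involved)
    return x[:3]                                # only the first three entries are nonzero
```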


Renaming proposal

Since in the modern implicit version of the procedure no QR decompositions are explicitly performed, some authors, for instance Watkins, suggested changing its name to the ''Francis algorithm''. Golub and Van Loan use the term ''Francis QR step''.


Interpretation and convergence

The QR algorithm can be seen as a more sophisticated variation of the basic "power" eigenvalue algorithm. Recall that the power algorithm repeatedly multiplies ''A'' times a single vector, normalizing after each iteration. The vector converges to an eigenvector of the largest eigenvalue. Instead, the QR algorithm works with a complete basis of vectors, using QR decomposition to renormalize (and orthogonalize). For a symmetric matrix ''A'', upon convergence, ''AQ'' = ''QΛ'', where ''Λ'' is the diagonal matrix of eigenvalues to which ''A'' converged, and where ''Q'' is a composite of all the orthogonal similarity transforms required to get there. Thus the columns of ''Q'' are the eigenvectors.
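This connection can be sketched as "orthogonal iteration": power iteration applied to a whole orthonormal basis, re-orthonormalized by a QR decomposition at each step. The routine below is an illustrative sketch of that viewpoint, not part of the algorithm as stated above.

```python
import numpy as np

def orthogonal_iteration(A, num_steps=500):
    """Power iteration on a full orthonormal basis, renormalized by QR."""
    n = A.shape[0]
    Q = np.eye(n)
    for _ in range(num_steps):
        Z = A @ Q
        Q, _ = np.linalg.qr(Z)    # re-orthonormalize the basis
    return Q                      # for symmetric A, columns approximate eigenvectors
```

For a symmetric matrix A, the product Q.T @ A @ Q then approaches the diagonal matrix of eigenvalues, mirroring ''AQ'' = ''QΛ'' above.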


History

The QR algorithm was preceded by the ''LR algorithm'', which uses the LU decomposition instead of the QR decomposition. The QR algorithm is more stable, so the LR algorithm is rarely used nowadays; however, it represents an important step in the development of the QR algorithm. The LR algorithm was developed in the early 1950s by Heinz Rutishauser, who worked at that time as a research assistant of Eduard Stiefel at ETH Zurich. Stiefel suggested that Rutishauser use the sequence of moments y_0^\mathrm{T} A^k x_0, k = 0, 1, \dots (where x_0 and y_0 are arbitrary vectors) to find the eigenvalues of A. Rutishauser took an algorithm of Alexander Aitken for this task and developed it into the ''quotient–difference algorithm'' or ''qd algorithm''. After arranging the computation in a suitable shape, he discovered that the qd algorithm is in fact the iteration A_k = L_k U_k (LU decomposition), A_{k+1} = U_k L_k, applied on a tridiagonal matrix, from which the LR algorithm follows.
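For comparison with the QR iteration above, the LR iteration can be sketched as follows; it relies on an LU factorization without pivoting, which only exists under additional assumptions (for example, nonzero leading principal minors), and is shown purely to illustrate why the QR variant is preferred for stability.

```python
import numpy as np

def lu_nopivot(A):
    """Doolittle LU factorization without pivoting (assumed to exist)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.array(A, dtype=float)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lr_iteration(A, num_steps=100):
    """Rutishauser's LR iteration: A_k = L_k U_k, then A_{k+1} = U_k L_k."""
    Ak = np.array(A, dtype=float)
    for _ in range(num_steps):
        L, U = lu_nopivot(Ak)
        Ak = U @ L
    return Ak
```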


Other variants

One variant of the QR algorithm, the ''Golub–Kahan–Reinsch'' algorithm, starts with reducing a general matrix into a bidiagonal one. This variant of the QR algorithm for the computation of singular values was first described by Golub & Kahan (1965). The LAPACK subroutine DBDSQR implements this iterative method, with some modifications to cover the case where the singular values are very small. Together with a first step using Householder reflections and, if appropriate, QR decomposition, this forms the DGESVD routine for the computation of the singular value decomposition. The QR algorithm can also be implemented in infinite dimensions with corresponding convergence results.
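For illustration, the QR-iteration-based LAPACK path for the SVD can be selected through SciPy's wrapper, assuming a reasonably recent SciPy where the lapack_driver keyword is available; this usage example is an assumption for illustration, not something prescribed by the sources above.

```python
import numpy as np
from scipy.linalg import svd

A = np.random.default_rng(0).standard_normal((5, 3))
# 'gesvd' selects the LAPACK routine built on bidiagonalization followed by
# the QR-type iteration (DBDSQR); the default 'gesdd' uses divide and conquer.
U, s, Vh = svd(A, lapack_driver='gesvd')
print(s)   # singular values
```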




External links

* Notes on orthogonal bases and the workings of the QR algorithm, by Peter J. Olver
* Module for the QR Method
* C++ Library