In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form that is more suitable for numerical solving methods. Preconditioning is typically related to reducing a condition number of the problem. The preconditioned problem is then usually solved by an iterative method.
Preconditioning for linear systems
In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P^{-1}A has a smaller condition number than A. It is also common to call T = P^{-1} the preconditioner, rather than P, since P itself is rarely explicitly available. In modern preconditioning, the application of T = P^{-1}, i.e., multiplication of a column vector, or a block of column vectors, by T = P^{-1}, is commonly performed in a matrix-free fashion, i.e., where neither P, nor T = P^{-1} (and often not even A) are explicitly available in matrix form.
Preconditioners are useful in iterative methods to solve a linear system Ax = b for x, since the rate of convergence for most iterative linear solvers increases as the condition number of the matrix decreases as a result of preconditioning. Preconditioned iterative solvers typically outperform direct solvers, e.g., Gaussian elimination, for large, especially for sparse, matrices. Iterative solvers can be used as matrix-free methods, i.e., they become the only choice if the coefficient matrix A is not stored explicitly, but is accessed by evaluating matrix-vector products.
Description
Instead of solving the original linear system Ax = b for x, one may consider the right preconditioned system
A P^{-1} P x = b
and solve
A P^{-1} y = b
for y and
P x = y
for x.
Alternatively, one may solve the left preconditioned system
P^{-1}(A x - b) = 0.
Both systems give the same solution as the original system as long as the preconditioner matrix P is nonsingular. The left preconditioning is more traditional.
The two-sided preconditioned system
Q^{-1} A P^{-1} (P x) = Q^{-1} b
may be beneficial, e.g., to preserve the matrix symmetry: if the original matrix A is real symmetric and real preconditioners Q and P satisfy Q = P^\mathsf{T}, then the preconditioned matrix Q^{-1} A P^{-1} is also symmetric. The two-sided preconditioning is common for diagonal scaling, where the preconditioners Q and P are diagonal and scaling is applied both to columns and rows of the original matrix A, e.g., in order to decrease the dynamic range of entries of the matrix.
The goal of preconditioning is reducing the condition number, e.g., of the left or right preconditioned system matrix P^{-1}A or AP^{-1}. Small condition numbers benefit fast convergence of iterative solvers and improve stability of the solution with respect to perturbations in the system matrix and the right-hand side, e.g., allowing for more aggressive quantization of the matrix entries using lower computer precision.
The preconditioned matrix P^{-1}A or AP^{-1} is rarely explicitly formed. Only the action of applying the preconditioner solve operation P^{-1} to a given vector may need to be computed.
Typically there is a trade-off in the choice of P. Since the operator P^{-1} must be applied at each step of the iterative linear solver, it should have a small cost (computing time) of applying the P^{-1} operation. The cheapest preconditioner would therefore be P = I, since then P^{-1} = I. Clearly, this results in the original linear system and the preconditioner does nothing. At the other extreme, the choice P = A gives P^{-1}A = AP^{-1} = I, which has optimal condition number of 1, requiring a single iteration for convergence; however, in this case P^{-1} = A^{-1}, and applying the preconditioner is as difficult as solving the original system. One therefore chooses P as somewhere between these two extremes, in an attempt to achieve a minimal number of linear iterations while keeping the operator P^{-1} as simple as possible. Some examples of typical preconditioning approaches are detailed below.
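A minimal NumPy sketch (not part of the original article) illustrating this trade-off; the test matrix and the diagonal choice of an "in-between" preconditioner are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
Q = rng.standard_normal((n, n))
A = Q @ Q.T + np.diag(np.linspace(1.0, 1e4, n))   # SPD test matrix, deliberately ill conditioned

def preconditioned_cond(P):
    """Condition number of P^{-1} A, with P assumed nonsingular."""
    return np.linalg.cond(np.linalg.solve(P, A))

print("P = I       :", preconditioned_cond(np.eye(n)))            # identical to cond(A); P does nothing
print("P = diag(A) :", preconditioned_cond(np.diag(np.diag(A))))  # cheap to apply (Jacobi preconditioner)
print("P = A       :", preconditioned_cond(A))                    # cond = 1, but applying P^{-1} is as hard as solving
```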
Preconditioned iterative methods
Preconditioned iterative methods for Ax - b = 0 are, in most cases, mathematically equivalent to standard iterative methods applied to the preconditioned system P^{-1}(Ax - b) = 0. For example, the standard Richardson iteration for solving Ax - b = 0 is

x_{n+1} = x_n - \gamma_n (A x_n - b), \quad n \ge 0.

Applied to the preconditioned system P^{-1}(Ax - b) = 0, it turns into a preconditioned method

x_{n+1} = x_n - \gamma_n P^{-1}(A x_n - b), \quad n \ge 0.

Examples of popular preconditioned iterative methods for linear systems include the preconditioned conjugate gradient method, the biconjugate gradient method, and the generalized minimal residual method. Iterative methods that use scalar products to compute the iterative parameters require corresponding changes in the scalar product together with substituting P^{-1}(Ax - b) = 0 for Ax - b = 0.
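The preconditioned Richardson iteration above can be sketched in a few lines of Python; the test matrix, the Jacobi choice of P, and the fixed step \gamma_n = 1 are assumptions made for illustration, not part of the method itself:

```python
import numpy as np

def preconditioned_richardson(A, b, apply_Pinv, gamma, x0=None, tol=1e-10, maxiter=10000):
    """Run x_{n+1} = x_n - gamma * P^{-1} (A x_n - b); apply_Pinv applies P^{-1} to a vector."""
    x = np.zeros_like(b) if x0 is None else np.array(x0, dtype=float)
    for _ in range(maxiter):
        r = A @ x - b
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x - gamma * apply_Pinv(r)
    return x

# Example usage with a Jacobi preconditioner P = diag(A) on a diagonally dominant SPD matrix.
rng = np.random.default_rng(2)
n = 200
B = rng.standard_normal((n, n))
A = np.diag(np.linspace(10.0, 100.0, n)) + 0.01 * (B + B.T)   # symmetric, strongly diagonally dominant
b = rng.standard_normal(n)
d = np.diag(A).copy()
x = preconditioned_richardson(A, b, apply_Pinv=lambda r: r / d, gamma=1.0)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```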
Matrix splitting
A stationary iterative method is determined by the matrix splitting A = M - N and the iteration matrix C = I - M^{-1}A. Assuming that
* the system matrix A is symmetric positive-definite,
* the splitting matrix M is symmetric positive-definite,
* the stationary iterative method is convergent, as determined by \rho(C) < 1,
the condition number \kappa(M^{-1}A) is bounded above by

\kappa(M^{-1}A) \le \frac{1 + \rho(C)}{1 - \rho(C)}.
Geometric interpretation
For a symmetric positive definite matrix A, the preconditioner P is typically chosen to be symmetric positive definite as well. The preconditioned operator P^{-1}A is then also symmetric positive definite, but with respect to the P-based scalar product. In this case, the desired effect in applying a preconditioner is to make the quadratic form of the preconditioned operator P^{-1}A with respect to the P-based scalar product nearly spherical.
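As a hedged illustration of this point: for symmetric positive definite A and P, the spectrum of P^{-1}A coincides with the generalized eigenvalues of the pencil (A, P), so the ratio of the extreme generalized eigenvalues gives the relevant condition number. The test matrices below are assumptions made for demonstration only:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)        # SPD test matrix (an illustrative assumption)
P = np.diag(np.diag(A))            # SPD preconditioner; here simply diag(A)

lam = eigh(A, P, eigvals_only=True)            # generalized eigenvalues: A x = lambda P x
print("condition number of P^{-1}A:", lam.max() / lam.min())
print("condition number of A      :", np.linalg.cond(A))
```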
Variable and non-linear preconditioning
Denoting T = P^{-1}, we highlight that preconditioning is practically implemented as multiplying some vector r by T, i.e., computing the product Tr. In many applications, T is not given as a matrix, but rather as an operator T(r) acting on the vector r. Some popular preconditioners, however, change with r, and the dependence on r may not be linear. Typical examples involve using non-linear iterative methods, e.g., the conjugate gradient method, as a part of the preconditioner construction. Such preconditioners may be practically very efficient; however, their behavior is hard to predict theoretically.
Random preconditioning
One interesting particular case of variable preconditioning is random preconditioning, e.g., multigrid preconditioning on random coarse grids. If used in gradient descent methods, random preconditioning can be viewed as an implementation of stochastic gradient descent and can lead to faster convergence, compared to fixed preconditioning, since it breaks the asymptotic "zig-zag" pattern of the gradient descent.
Spectrally equivalent preconditioning
The most common use of preconditioning is for the iterative solution of linear systems resulting from approximations of partial differential equations. The better the approximation quality, the larger the matrix size. In such a case, the goal of optimal preconditioning is, on the one side, to make the spectral condition number of P^{-1}A bounded from above by a constant independent of the matrix size, which is called ''spectrally equivalent'' preconditioning by D'yakonov. On the other hand, the cost of applying P^{-1} should ideally be proportional (also independently of the matrix size) to the cost of multiplying A by a vector.
Examples
Jacobi (or diagonal) preconditioner
The Jacobi preconditioner is one of the simplest forms of preconditioning, in which the preconditioner is chosen to be the diagonal of the matrix, P = \mathrm{diag}(A). Assuming A_{ii} \neq 0 for all i, we get (P^{-1})_{ij} = \delta_{ij}/A_{ii}. It is efficient for diagonally dominant matrices. It is used in analysis software for beam problems or 1-D problems (e.g., STAAD.Pro).
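A small sketch of how such a diagonal preconditioner might be passed to an iterative solver, here SciPy's conjugate gradient routine, which expects the argument M to act as P^{-1}; the sparse test matrix is an assumption made for illustration:

```python
import numpy as np
from scipy.sparse import diags, random as sparse_random
from scipy.sparse.linalg import cg, LinearOperator

n = 1000
R = sparse_random(n, n, density=0.001, random_state=3)
A = (R @ R.T + diags(np.linspace(1.0, 1e4, n))).tocsr()   # sparse SPD matrix with a poorly scaled diagonal
b = np.ones(n)

d = A.diagonal()                                    # P = diag(A)
M = LinearOperator((n, n), matvec=lambda r: r / d)  # the action of P^{-1}

x, info = cg(A, b, M=M)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```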
SPAI
The sparse approximate inverse (SPAI) preconditioner minimises \|AT - I\|_F, where \|\cdot\|_F is the Frobenius norm and T = P^{-1} is from some suitably constrained set of sparse matrices. Under the Frobenius norm, this reduces to solving numerous independent least-squares problems (one for every column). The entries in T must be restricted to some sparsity pattern or the problem remains as difficult and time-consuming as finding the exact inverse of A. The method was introduced by M.J. Grote and T. Huckle together with an approach to selecting sparsity patterns.
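A simplified dense sketch of the column-wise least-squares idea (an illustration only, not the Grote–Huckle algorithm); the fixed sparsity pattern, taken here from the columns of A, and the tridiagonal test matrix are assumptions:

```python
import numpy as np

def spai_fixed_pattern(A):
    """Minimize ||A t_j - e_j||_2 for each column j of T, restricted to the pattern of A's column j."""
    n = A.shape[0]
    T = np.zeros_like(A)
    for j in range(n):
        J = np.nonzero(A[:, j])[0]                                # allowed nonzero rows of column j of T
        m, *_ = np.linalg.lstsq(A[:, J], np.eye(n)[:, j], rcond=None)
        T[J, j] = m
    return T   # T approximates A^{-1} in the Frobenius-norm sense

# Example usage on a small tridiagonal matrix (assumed for illustration).
n = 20
A = np.diag(4.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
T = spai_fixed_pattern(A)
T_jacobi = np.diag(1.0 / np.diag(A))
print("SPAI   : ||A T - I||_F =", np.linalg.norm(A @ T - np.eye(n), "fro"))
print("Jacobi : ||A T - I||_F =", np.linalg.norm(A @ T_jacobi - np.eye(n), "fro"))
```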
Other preconditioners
* Incomplete Cholesky factorization
* Incomplete LU factorization
* Successive over-relaxation
** Symmetric successive over-relaxation
* Multigrid preconditioning
External links
Preconditioned Conjugate Gradient – math-linux.com
Preconditioning for eigenvalue problems
Eigenvalue problems can be framed in several alternative ways, each leading to its own preconditioning. The traditional preconditioning is based on the so-called ''spectral transformations''. Knowing (approximately) the targeted eigenvalue, one can compute the corresponding eigenvector by solving the related homogeneous linear system, thus allowing one to use preconditioning for linear systems. Finally, formulating the eigenvalue problem as optimization of the Rayleigh quotient brings preconditioned optimization techniques to the scene.
Spectral transformations
By analogy with linear systems, for an eigenvalue problem Ax = \lambda x, one may be tempted to replace the matrix A with the matrix P^{-1}A using a preconditioner P. However, this makes sense only if the sought eigenvectors of A and P^{-1}A are the same. This is the case for spectral transformations.
The most popular spectral transformation is the so-called ''shift-and-invert'' transformation, where for a given scalar \alpha, called the ''shift'', the original eigenvalue problem Ax = \lambda x is replaced with the shift-and-invert problem (A - \alpha I)^{-1} x = \mu x. The eigenvectors are preserved, and one can solve the shift-and-invert problem by an iterative solver, e.g., the power iteration. This gives the inverse iteration, which normally converges to the eigenvector corresponding to the eigenvalue closest to the shift \alpha. The Rayleigh quotient iteration is a shift-and-invert method with a variable shift.
Spectral transformations are specific to eigenvalue problems and have no analogs for linear systems. They require accurate numerical calculation of the transformation involved, which becomes the main bottleneck for large problems.
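A short sketch of shift-and-invert power iteration (inverse iteration); the shift \alpha and the small test matrix are assumptions chosen for illustration:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_iteration(A, alpha, iters=50):
    """Power iteration applied to (A - alpha I)^{-1}; converges to the eigenvalue closest to alpha."""
    n = A.shape[0]
    lu, piv = lu_factor(A - alpha * np.eye(n))   # factor (A - alpha I) once, reuse every step
    x = np.random.default_rng(4).standard_normal(n)
    for _ in range(iters):
        x = lu_solve((lu, piv), x)               # apply (A - alpha I)^{-1}
        x /= np.linalg.norm(x)
    return x @ A @ x, x                          # Rayleigh quotient of the unit vector x

A = np.diag([1.0, 2.0, 3.0, 10.0]) + 0.01 * np.ones((4, 4))   # symmetric test matrix
lam, x = inverse_iteration(A, alpha=2.9)
print("eigenvalue closest to the shift 2.9:", lam)
```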
General preconditioning
To make a close connection to linear systems, let us suppose that the targeted eigenvalue \lambda_\star is known (approximately). Then one can compute the corresponding eigenvector from the homogeneous linear system (A - \lambda_\star I)x = 0. Using the concept of left preconditioning for linear systems, we obtain T(A - \lambda_\star I)x = 0, where T is the preconditioner, which we can try to solve using the Richardson iteration

x_{n+1} = x_n - \gamma_n T (A - \lambda_\star I) x_n, \quad n \ge 0.
The ''ideal'' preconditioning
The Moore–Penrose pseudoinverse T = (A - \lambda_\star I)^+ is the preconditioner which makes the Richardson iteration above converge in one step with \gamma_n = 1, since I - (A - \lambda_\star I)^+ (A - \lambda_\star I) is the orthogonal projector onto the eigenspace corresponding to \lambda_\star. The choice T = (A - \lambda_\star I)^+ is impractical for three independent reasons. First, \lambda_\star is actually not known, although it can be replaced with its approximation \tilde\lambda_\star. Second, the exact Moore–Penrose pseudoinverse requires the knowledge of the eigenvector, which we are trying to find. This can be somewhat circumvented by the use of the Jacobi–Davidson preconditioner, where T approximates (A - \tilde\lambda_\star I)^+. Last, but not least, this approach requires accurate numerical solution of a linear system with the system matrix A - \tilde\lambda_\star I, which becomes as expensive for large problems as the shift-and-invert method above. If the solution is not accurate enough, step two may be redundant.
Practical preconditioning
Let us first replace the theoretical value \lambda_\star in the Richardson iteration above with its current approximation \lambda_n to obtain a practical algorithm

x_{n+1} = x_n - \gamma_n T (A x_n - \lambda_n x_n), \quad n \ge 0.

A popular choice is \lambda_n = \rho(x_n), using the Rayleigh quotient function \rho(\cdot). Practical preconditioning may be as trivial as just using T = (\mathrm{diag}(A))^{-1} or T = (\mathrm{diag}(A - \lambda_n I))^{-1}. For some classes of eigenvalue problems the efficiency of T \approx A^{-1} has been demonstrated, both numerically and theoretically. The choice T \approx A^{-1} allows one to easily utilize for eigenvalue problems the vast variety of preconditioners developed for linear systems.
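A sketch of this practical iteration with \gamma_n = 1 and the trivial choice T = (\mathrm{diag}(A))^{-1}; the test matrix and the fixed number of iterations are assumptions made for illustration:

```python
import numpy as np

def preconditioned_eig_descent(A, apply_T, iters=500):
    """x_{n+1} = x_n - T (A x_n - rho(x_n) x_n), followed by normalization."""
    rng = np.random.default_rng(5)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(iters):
        rho = x @ A @ x                 # Rayleigh quotient of the unit vector x
        r = A @ x - rho * x             # eigenvalue residual
        x = x - apply_T(r)
        x /= np.linalg.norm(x)
    return x @ A @ x, x

n = 100
A = np.diag(np.linspace(1.0, 100.0, n))
A[0, 1] = A[1, 0] = 0.1                 # small off-diagonal coupling keeps A symmetric
d = np.diag(A).copy()
lam, x = preconditioned_eig_descent(A, apply_T=lambda r: r / d)   # T = (diag(A))^{-1}
print("smallest eigenvalue estimate:", lam)   # exact value is approximately 0.99
```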
Due to the changing value \lambda_n, a comprehensive theoretical convergence analysis is much more difficult than in the linear systems case, even for the simplest methods, such as the Richardson iteration.
External links
Templates for the Solution of Algebraic Eigenvalue Problems: a Practical Guide
Preconditioning in optimization
In optimization, preconditioning is typically used to accelerate first-order optimization algorithms.
Description
For example, to find a local minimum of a real-valued function F(x) using gradient descent, one takes steps proportional to the ''negative'' of the gradient (or of the approximate gradient) of the function at the current point:

x_{n+1} = x_n - \gamma_n \nabla F(x_n), \quad n \ge 0.

The preconditioner is applied to the gradient:

x_{n+1} = x_n - \gamma_n P^{-1} \nabla F(x_n), \quad n \ge 0.

Preconditioning here can be viewed as changing the geometry of the vector space with the goal of making the level sets look like circles. In this case the preconditioned gradient aims closer to the point of the extremum, which speeds up the convergence.
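A minimal sketch contrasting plain and preconditioned gradient descent on an ill-conditioned quadratic; the quadratic, the diagonal preconditioner, and the step sizes are assumptions chosen so that the level sets of the preconditioned problem are circles:

```python
import numpy as np

A = np.diag([1.0, 100.0])             # f(x) = 1/2 x^T A x has elongated elliptic level sets
grad = lambda x: A @ x
P_inv = np.diag([1.0, 1.0 / 100.0])   # diagonal preconditioner making the level sets circular

def gd(x0, precond, gamma, steps=100):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - gamma * (precond @ grad(x))
    return x

x_plain  = gd([1.0, 1.0], np.eye(2), gamma=0.01)   # step size limited by the stiff direction
x_precon = gd([1.0, 1.0], P_inv,     gamma=0.5)    # with P^{-1}A = I, large steps converge quickly
print("plain GD       :", x_plain)    # still far from the minimizer (the origin) in the flat direction
print("preconditioned :", x_precon)   # essentially at the origin
```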
Connection to linear systems
The minimum of a quadratic function

F(x) = \tfrac{1}{2} x^\mathsf{T} A x - x^\mathsf{T} b,

where x and b are real column-vectors and A is a real symmetric positive-definite matrix, is exactly the solution of the linear equation Ax = b. Since \nabla F(x) = Ax - b, the preconditioned gradient descent method of minimizing F(x) is

x_{n+1} = x_n - \gamma_n P^{-1}(A x_n - b), \quad n \ge 0.

This is the preconditioned Richardson iteration for solving a system of linear equations.
Connection to eigenvalue problems
The minimum of the Rayleigh quotient

\rho(x) = \frac{x^\mathsf{T} A x}{x^\mathsf{T} x},

where x is a real non-zero column-vector and A is a real symmetric positive-definite matrix, is the smallest eigenvalue of A, while the minimizer is the corresponding eigenvector. Since \nabla \rho(x) is proportional to A x - \rho(x) x, the preconditioned gradient descent method of minimizing \rho(x) is

x_{n+1} = x_n - \gamma_n P^{-1}(A x_n - \rho(x_n) x_n), \quad n \ge 0.

This is an analog of preconditioned Richardson iteration for solving eigenvalue problems.
Variable preconditioning
In many cases, it may be beneficial to change the preconditioner at some or even every step of an iterative algorithm in order to accommodate a changing shape of the level sets, as in

x_{n+1} = x_n - \gamma_n P_n^{-1} \nabla F(x_n), \quad n \ge 0.

One should keep in mind, however, that constructing an efficient preconditioner is very often computationally expensive. The increased cost of updating the preconditioner can easily override the positive effect of faster convergence. If P_n^{-1} is a BFGS approximation of the inverse Hessian matrix, this method is referred to as a quasi-Newton method.