In computational mathematics, an iterative method is a mathematical procedure that uses an initial value to generate a sequence of improving approximate solutions for a class of problems, in which the ''n''-th approximation is derived from the previous ones. A specific implementation of an iterative method, including the termination criteria, is an algorithm of the iterative method. An iterative method is called convergent if the corresponding sequence converges for given initial approximations. A mathematically rigorous convergence analysis of an iterative method is usually performed; however, heuristic-based iterative methods are also common.
In contrast, direct methods attempt to solve the problem by a finite sequence of operations. In the absence of rounding errors, direct methods would deliver an exact solution (for example, solving a linear system of equations by Gaussian elimination). Iterative methods are often the only choice for nonlinear equations. However, iterative methods are often useful even for linear problems involving many variables (sometimes on the order of millions), where direct methods would be prohibitively expensive (and in some cases impossible) even with the best available computing power.
Attractive fixed points
If an equation can be put into the form ''f''(''x'') = ''x'', and a solution ''x'' is an attractive fixed point of the function ''f'', then one may begin with a point ''x''<sub>1</sub> in the basin of attraction of ''x'', and let ''x''<sub>''n''+1</sub> = ''f''(''x''<sub>''n''</sub>) for ''n'' ≥ 1, and the sequence (''x''<sub>''n''</sub>)<sub>''n'' ≥ 1</sub> will converge to the solution ''x''. Here ''x''<sub>''n''</sub> is the ''n''-th approximation or iteration of ''x'', and ''x''<sub>''n''+1</sub> is the next or (''n'' + 1)-th iteration of ''x''. Alternatively, superscripts in parentheses are often used in numerical methods, so as not to interfere with subscripts with other meanings (for example, ''x''<sup>(''n''+1)</sup> = ''f''(''x''<sup>(''n'')</sup>)). If the function ''f'' is continuously differentiable, a sufficient condition for convergence is that the spectral radius of the derivative is strictly bounded by one in a neighborhood of the fixed point. If this condition holds at the fixed point, then a sufficiently small neighborhood (basin of attraction) must exist.
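To make the iteration concrete, here is a minimal sketch in Python. The function choice ''f''(''x'') = cos ''x'', the starting point, and the tolerance are illustrative assumptions, not part of the article; the cosine map has an attractive fixed point near 0.739, where |''f''′(''x'')| = |sin ''x''| < 1, so the convergence condition above holds.

```python
import math

def fixed_point_iteration(f, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to within tol."""
    x = x0
    for n in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next, n + 1  # converged: approximation and iteration count
        x = x_next
    raise RuntimeError("fixed-point iteration did not converge within max_iter")

# Solve x = cos(x); the fixed point is approximately 0.7390851332.
x_star, iters = fixed_point_iteration(math.cos, x0=1.0)
print(x_star, iters)
```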
Linear systems
In the case of a system of linear equations, the two main classes of iterative methods are the stationary iterative methods, and the more general Krylov subspace methods.
Stationary iterative methods
Introduction
Stationary iterative methods solve a linear system with an operator approximating the original one, and, based on a measurement of the error in the result (the residual), form a "correction equation" for which this process is repeated. While these methods are simple to derive, implement, and analyze, convergence is only guaranteed for a limited class of matrices.
Definition
An ''iterative method'' is defined by
:<math>\mathbf{x}^{k+1} := \Psi ( \mathbf{x}^k ) \,, \quad k \geq 0</math>
and for a given linear system <math>A\mathbf{x} = \mathbf{b}</math> with exact solution <math>\mathbf{x}^*</math> the ''error'' by
:<math>\mathbf{e}^k := \mathbf{x}^k - \mathbf{x}^* \,, \quad k \geq 0.</math>
An iterative method is called ''linear'' if there exists a matrix <math>C \in \R^{n \times n}</math> such that
:<math>\mathbf{e}^{k+1} = C \mathbf{e}^k \quad \forall k \geq 0</math>
and this matrix is called the ''iteration matrix''.
An iterative method with a given iteration matrix <math>C</math> is called ''convergent'' if the following holds:
:<math>\lim_{k \to \infty} C^k = 0.</math>
An important theorem states that a given iterative method with iteration matrix <math>C</math> is convergent if and only if its spectral radius <math>\rho(C)</math> is smaller than unity, that is,
:<math>\rho(C) < 1.</math>
The basic iterative methods work by splitting the matrix <math>A</math> into
:<math>A = M - N</math>
where the matrix <math>M</math> should be easily invertible.
The iterative methods are now defined as
:<math>M \mathbf{x}^{k+1} = N \mathbf{x}^k + \mathbf{b} \,, \quad k \geq 0.</math>
From this it follows that the iteration matrix is given by
:<math>C = I - M^{-1} A = M^{-1} N.</math>
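As a sketch of these definitions in code (NumPy assumed; the dense linear algebra and the function name are illustrative, not a standard API), one can form the iteration matrix <math>C = I - M^{-1}A</math> for a given splitting, verify the convergence criterion <math>\rho(C) < 1</math>, and then run the iteration <math>M \mathbf{x}^{k+1} = N \mathbf{x}^k + \mathbf{b}</math>:

```python
import numpy as np

def stationary_solve(A, b, M, x0=None, tol=1e-10, max_iter=1000):
    """Generic stationary iteration for a splitting A = M - N.

    Iterates M x^{k+1} = N x^k + b, equivalently x^{k+1} = x^k + M^{-1}(b - A x^k).
    """
    n = len(b)
    C = np.eye(n) - np.linalg.solve(M, A)         # iteration matrix C = I - M^{-1} A
    rho = np.max(np.abs(np.linalg.eigvals(C)))    # spectral radius rho(C)
    if rho >= 1:
        raise ValueError(f"splitting does not converge: rho(C) = {rho:.3f}")
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = x + np.linalg.solve(M, b - A @ x)  # one correction step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```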
Examples
Basic examples of stationary iterative methods use a splitting of the matrix <math>A</math> such as
:<math>A = D + L + U \,, \quad D := \operatorname{diag}((a_{ii})_i)</math>
where <math>D</math> is only the diagonal part of <math>A</math>, and <math>L</math> is the strict lower triangular part of <math>A</math>. Respectively, <math>U</math> is the strict upper triangular part of <math>A</math>.
* Richardson method: <math>M := \frac{1}{\omega} I \quad (\omega \neq 0)</math>
* Jacobi method: <math>M := D</math>
* Damped Jacobi method: <math>M := \frac{1}{\omega} D \quad (\omega \neq 0)</math>
* Gauss–Seidel method: <math>M := D + L</math>
* Successive over-relaxation method (SOR): <math>M := \frac{1}{\omega} D + L \quad (\omega \neq 0)</math>
* Symmetric successive over-relaxation (SSOR): <math>M := \frac{1}{\omega(2-\omega)} (D + \omega L) D^{-1} (D + \omega U) \quad (\omega \notin \{0, 2\})</math>
Linear stationary iterative methods are also called relaxation methods.
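For instance, the Jacobi splitting <math>M = D</math> gives the sketch below (a minimal illustration assuming NumPy; the test matrix is my own choice). Each sweep solves the trivially invertible diagonal system <math>D \mathbf{x}^{k+1} = \mathbf{b} - (L + U)\mathbf{x}^k</math>:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: splitting A = M - N with M = D, the diagonal of A."""
    D = np.diag(A)                   # diagonal entries of A
    R = A - np.diagflat(D)           # remainder L + U (strict lower and upper parts)
    x = np.zeros_like(b, dtype=float) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = (b - R @ x) / D      # solve D x^{k+1} = b - (L + U) x^k
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Strictly diagonally dominant matrix, for which Jacobi is known to converge:
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 5.0, 2.0],
              [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b))                  # matches np.linalg.solve(A, b)
```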
Krylov subspace methods
Krylov subspace methods work by forming a basis of the sequence of successive matrix powers times the initial residual (the Krylov sequence), that is, the subspace <math>\mathcal{K}_n(A, \mathbf{r}_0) = \operatorname{span}\{\mathbf{r}_0, A\mathbf{r}_0, \ldots, A^{n-1}\mathbf{r}_0\}</math>.
The approximations to the solution are then formed by minimizing the residual over the subspace formed.
The prototypical method in this class is the conjugate gradient method (CG), which assumes that the system matrix <math>A</math> is symmetric positive-definite.
For symmetric (and possibly indefinite) <math>A</math> one works with the minimal residual method (MINRES). In the case of non-symmetric matrices, methods such as the generalized minimal residual method (GMRES) and the biconjugate gradient method (BiCG) have been derived.
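As an illustration, here is a minimal unpreconditioned conjugate gradient sketch (NumPy assumed; names and the tolerance are illustrative). It follows the standard CG recurrences and, as stated above, presumes <math>A</math> is symmetric positive-definite:

```python
import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Standard CG recurrences for a symmetric positive-definite matrix A."""
    n = len(b)
    max_iter = max_iter or n             # in exact arithmetic, at most n steps
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    r = b - A @ x                        # initial residual spans the Krylov space
    p = r.copy()                         # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)        # step length minimizing along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p    # keep search directions A-conjugate
        rs_old = rs_new
    return x
```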
Convergence of Krylov subspace methods
Since these methods form a basis, it is evident that the method converges in ''N'' iterations, where ''N'' is the system size. However, in the presence of rounding errors this statement does not hold; moreover, in practice ''N'' can be very large, and the iterative process often reaches sufficient accuracy much earlier. The analysis of these methods is hard, depending on a complicated function of the spectrum of the operator.
Preconditioners
The approximating operator that appears in stationary iterative methods can also be incorporated in Krylov subspace methods such as GMRES (alternatively, preconditioned Krylov methods can be considered as accelerations of stationary iterative methods), where they become transformations of the original operator to a presumably better conditioned one. The construction of preconditioners is a large research area.
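To make the idea concrete, the sketch below passes a simple Jacobi (diagonal) preconditioner to SciPy's GMRES, so the Krylov method effectively works with the better-conditioned operator <math>M^{-1}A</math>. This assumes SciPy is available; the test matrix and the diagonal choice of preconditioner are illustrative, not a recommendation:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# An ill-conditioned test matrix: widely spread diagonal plus weak coupling.
n = 100
A = np.diag(np.linspace(1.0, 1e4, n)) + 0.5 * np.eye(n, k=1) + 0.5 * np.eye(n, k=-1)
b = np.ones(n)

# Jacobi preconditioner: M approximates A but is trivially invertible.
# SciPy expects an operator that applies the *inverse* of the preconditioner.
d = np.diag(A)
M = LinearOperator((n, n), matvec=lambda v: v / d)   # applies D^{-1}

x_plain, info_plain = gmres(A, b)       # unpreconditioned
x_prec, info_prec = gmres(A, b, M=M)    # preconditioned, typically far fewer iterations
print(info_plain, info_prec)            # 0 indicates convergence
```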
History
Jamshīd al-Kāshī used iterative methods to calculate the sine of 1° and π in ''The Treatise of Chord and Sine'' to high precision.
An early iterative method for solving a linear system appeared in a letter of Gauss to a student of his. He proposed solving a 4-by-4 system of equations by repeatedly solving the component in which the residual was the largest.
The theory of stationary iterative methods was solidly established with the work of D.M. Young starting in the 1950s. The conjugate gradient method was also invented in the 1950s, with independent developments by Cornelius Lanczos, Magnus Hestenes and Eduard Stiefel, but its nature and applicability were misunderstood at the time. Only in the 1970s was it realized that conjugacy-based methods work very well for partial differential equations, especially the elliptic type.
See also
* Closed-form expression
* Iterative refinement
* Kaczmarz method
* Non-linear least squares
* Numerical analysis
* Root-finding algorithm
External links
Templates for the Solution of Linear Systems