In numerical mathematics, relaxation methods are iterative methods for solving systems of equations, including nonlinear systems.

Relaxation methods were developed for solving large sparse linear systems, which arose as finite-difference discretizations of differential equations. They are also used for the solution of linear equations for linear least-squares problems and for systems of linear inequalities, such as those arising in linear programming. They have also been developed for solving nonlinear systems of equations.

Relaxation methods are especially important in the solution of linear systems used to model elliptic partial differential equations, such as Laplace's equation and its generalization, Poisson's equation. These equations describe boundary-value problems, in which the solution-function's values are specified on the boundary of a domain; the problem is to compute a solution also on its interior. Relaxation methods are used to solve the linear equations resulting from a discretization of the differential equation, for example by finite differences (Berman & Plemmons 1994; Young 1971; Varga 2002).

Iterative relaxation of solutions is commonly dubbed smoothing because with certain equations, such as Laplace's equation, it resembles repeated application of a local smoothing filter to the solution vector.

These methods are not to be confused with relaxation methods in mathematical optimization, which approximate a difficult problem by a simpler problem whose "relaxed" solution provides information about the solution of the original problem.


Model problem of potential theory

When φ is a smooth real-valued function on the real numbers, its second derivative can be approximated by:

:\frac{d^2\varphi}{dx^2}(x) = \frac{\varphi(x-h) - 2\varphi(x) + \varphi(x+h)}{h^2}\,+\,\mathcal{O}(h^2)\,.

Using this in both dimensions for a function φ of two arguments at the point (''x'', ''y''), and solving for φ(''x'', ''y''), results in:

:\varphi(x, y) = \tfrac{1}{4}\left(\varphi(x-h,y)+\varphi(x,y-h)+\varphi(x+h,y)+\varphi(x,y+h)\,-\,h^2\,\nabla^2\varphi(x,y)\right)\,+\,\mathcal{O}(h^4)\,.

To approximate the solution of the Poisson equation:

:\nabla^2 \varphi = f\,

numerically on a two-dimensional grid with grid spacing ''h'', the relaxation method assigns the given values of function φ to the grid points near the boundary and arbitrary values to the interior grid points, and then repeatedly performs the assignment φ := φ* on the interior points, where φ* is defined by:

:\varphi^*(x, y) = \tfrac{1}{4}\left(\varphi(x-h,y)+\varphi(x,y-h)+\varphi(x+h,y)+\varphi(x,y+h)\,-\,h^2 f(x,y)\right)\,,

until convergence. The method is easily generalized to other numbers of dimensions.
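The assignment φ := φ* can be sketched in a few lines of Python. The snippet below is a minimal illustration only, not drawn from the cited references: the Jacobi-style simultaneous update, the grid size, the boundary data, the tolerance and the iteration limit are arbitrary choices made for the example.

```python
import numpy as np

def relax_poisson(f, phi0, h, tol=1e-6, max_iter=20_000):
    """Jacobi-style relaxation for the Poisson equation  ∇²φ = f  on a square
    grid with spacing h.  The edge values of phi0 are the prescribed boundary
    data and stay fixed; its interior values are the (arbitrary) initial guess."""
    phi = phi0.astype(float).copy()
    for _ in range(max_iter):
        # φ*(x, y) = ¼(φ(x−h, y) + φ(x, y−h) + φ(x+h, y) + φ(x, y+h) − h² f(x, y))
        phi_star = 0.25 * (phi[:-2, 1:-1] + phi[1:-1, :-2] +
                           phi[2:, 1:-1] + phi[1:-1, 2:] -
                           h ** 2 * f[1:-1, 1:-1])
        change = np.max(np.abs(phi_star - phi[1:-1, 1:-1]))
        phi[1:-1, 1:-1] = phi_star        # assign φ := φ* on the interior only
        if change < tol:
            break
    return phi

# Example: Laplace's equation (f = 0) on the unit square, φ = 1 on one edge.
n = 65                                    # grid points per side, spacing h = 1/(n−1)
h = 1.0 / (n - 1)
f = np.zeros((n, n))
phi0 = np.zeros((n, n))
phi0[0, :] = 1.0                          # prescribed boundary values
phi = relax_poisson(f, phi0, h)
```

The large iteration budget is deliberate: the plain relaxation sweep reduces the error only slowly, which is the motivation for the acceleration techniques discussed below.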


Convergence and acceleration

While the method converges under general conditions, it typically makes slower progress than competing methods. Nonetheless, the study of relaxation methods remains a core part of linear algebra, because the transformations of relaxation theory provide excellent preconditioners for new methods. Indeed, the choice of preconditioner is often more important than the choice of iterative method (Saad 1996).
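As an illustration of this preconditioning role, the sketch below uses one symmetric Gauss–Seidel sweep, a relaxation-derived preconditioner, inside a textbook preconditioned conjugate-gradient iteration. It is an assumption-laden example rather than an implementation from Saad's book; the dense triangular solves and the small 1-D Poisson test matrix are chosen only for brevity.

```python
import numpy as np

def sgs_preconditioner(A):
    """Symmetric Gauss–Seidel preconditioner M⁻¹ = (D+U)⁻¹ D (D+L)⁻¹,
    i.e. one forward and one backward relaxation sweep applied to a residual."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    def apply(r):
        w = np.linalg.solve(D + L, r)         # forward sweep
        return np.linalg.solve(D + U, D @ w)  # backward sweep
    return apply

def pcg(A, b, precond, tol=1e-10, max_iter=500):
    """Minimal preconditioned conjugate-gradient iteration (illustrative sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small symmetric positive-definite test case: the 1-D Poisson matrix.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, sgs_preconditioner(A))
```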
Multigrid methods may be used to accelerate the convergence of relaxation. One can first compute an approximation on a coarser grid – usually with double the spacing, 2''h'' – and use that solution with interpolated values for the other grid points as the initial assignment. This can then also be done recursively for the coarser computation (Briggs, Henson & McCormick 2000).
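The coarse-grid initialization just described can be sketched as a simple two-grid scheme. The example below reuses the hypothetical relax_poisson routine from the model-problem sketch above together with plain bilinear interpolation; it illustrates the idea only and is not a full multigrid cycle.

```python
import numpy as np

def interpolate(coarse):
    """Bilinear interpolation from an (m, m) grid to a (2m−1, 2m−1) grid."""
    m = coarse.shape[0]
    fine = np.zeros((2 * m - 1, 2 * m - 1))
    fine[::2, ::2] = coarse                                    # coincident points
    fine[1::2, ::2] = 0.5 * (coarse[:-1, :] + coarse[1:, :])   # vertical midpoints
    fine[::2, 1::2] = 0.5 * (coarse[:, :-1] + coarse[:, 1:])   # horizontal midpoints
    fine[1::2, 1::2] = 0.25 * (coarse[:-1, :-1] + coarse[1:, :-1] +
                               coarse[:-1, 1:] + coarse[1:, 1:])  # cell centres
    return fine

# Fine grid with spacing h; the coarse grid keeps every other point (spacing 2h).
n_fine = 65
h = 1.0 / (n_fine - 1)
f_fine = np.zeros((n_fine, n_fine))
bc_fine = np.zeros((n_fine, n_fine))
bc_fine[0, :] = 1.0
f_coarse = f_fine[::2, ::2]
bc_coarse = bc_fine[::2, ::2]

# 1. Relax on the coarse grid (cheap), 2. interpolate, 3. use as the fine-grid start.
phi_coarse = relax_poisson(f_coarse, bc_coarse, 2 * h)
phi_start = interpolate(phi_coarse)
phi_start[0, :] = 1.0              # re-impose the exact boundary values, to be safe
phi_fine = relax_poisson(f_fine, f := f_fine, h) if False else relax_poisson(f_fine, phi_start, h)
```

(The final line simply calls relax_poisson(f_fine, phi_start, h); starting from the interpolated coarse solution typically requires far fewer fine-grid sweeps than starting from arbitrary interior values.)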


See also

* In linear systems, the two main classes of relaxation methods are stationary iterative methods and the more general Krylov subspace methods.
* The Jacobi method is a simple relaxation method.
* The Gauss–Seidel method is an improvement upon the Jacobi method.
* Successive over-relaxation can be applied to either of the Jacobi and Gauss–Seidel methods to speed convergence; a brief sketch contrasting these updates follows this list.
* Multigrid methods
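The following sketch, again for the five-point Poisson stencil of the model problem, illustrates how these methods relate: updating the grid in place (so that already-updated neighbours are used) turns the Jacobi sweep into Gauss–Seidel, and blending the old and new values with a relaxation factor ω gives successive over-relaxation; ω = 1 recovers Gauss–Seidel. The routine and its parameters are illustrative assumptions, not taken from the references.

```python
import numpy as np

def sor_sweep(phi, f, h, omega=1.5):
    """One successive over-relaxation (SOR) sweep for the five-point Poisson
    stencil.  In-place updates give Gauss–Seidel ordering; omega = 1 is plain
    Gauss–Seidel, while 1 < omega < 2 over-relaxes each correction."""
    n, m = phi.shape
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            phi_star = 0.25 * (phi[i - 1, j] + phi[i, j - 1] +
                               phi[i + 1, j] + phi[i, j + 1] -
                               h ** 2 * f[i, j])
            phi[i, j] += omega * (phi_star - phi[i, j])
    return phi

# Reusing f, h and the boundary/initial array phi0 from the first sketch:
phi_sor = phi0.copy()
for _ in range(200):
    sor_sweep(phi_sor, f, h, omega=1.9)
```

Repeated calls to sor_sweep play the same role as the φ := φ* assignment in the model problem, but with a well-chosen ω they typically need far fewer sweeps than the Jacobi version.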


References

* Abraham Berman and Robert J. Plemmons, ''Nonnegative Matrices in the Mathematical Sciences'', SIAM, 1994.
* William L. Briggs, Van Emden Henson, and Steve F. McCormick, ''A Multigrid Tutorial'', 2nd ed., Philadelphia: Society for Industrial and Applied Mathematics, 2000.
* Yousef Saad, ''Iterative Methods for Sparse Linear Systems'', 1st edition, PWS, 1996.
* Richard S. Varga, ''Matrix Iterative Analysis'', 2nd ed. (of 1962 Prentice Hall edition), Springer-Verlag, 2002.
* David M. Young, Jr., ''Iterative Solution of Large Linear Systems'', Academic Press, 1971 (reprinted by Dover, 2003).


Further reading

* Southwell, R.V. (1940) ''Relaxation Methods in Engineering Science''. Oxford University Press, Oxford.
* Southwell, R.V. (1946) ''Relaxation Methods in Theoretical Physics''. Oxford University Press, Oxford.
* P.-B. Zhou (1993) ''Numerical Analysis of Electromagnetic Fields''. Springer, New York.