In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value A^\ast = \lim_{h \to 0} A(h). In essence, given the value of A(h) for several values of h, we can estimate A^\ast by extrapolating the estimates to h = 0. It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century, though the idea was already known to Christiaan Huygens in his calculation of \pi. In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated" (page 126). Practical applications of Richardson extrapolation include Romberg integration, which applies Richardson extrapolation to the trapezoid rule, and the Bulirsch–Stoer algorithm for solving ordinary differential equations.


General formula


Notation

Let A_0(h) be an approximation of A^\ast (the exact value) that depends on a step size h (where 0 < h < 1) with an error formula of the form

A^\ast = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots

where the a_i are unknown constants and the k_i are known constants such that h^{k_i} > h^{k_{i+1}}. Furthermore, O(h^{k_i}) represents the truncation error of the A_i(h) approximation, in the sense that A^\ast = A_i(h) + O(h^{k_i}). Similarly, in A^\ast = A_i(h) + O(h^{k_i}), the approximation A_i(h) is said to be an O(h^{k_i}) approximation. Note that by simplifying with big O notation, the following formulae are equivalent:

\begin{align}
A^\ast &= A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots \\
A^\ast &= A_0(h) + a_0 h^{k_0} + O(h^{k_1}) \\
A^\ast &= A_0(h) + O(h^{k_0})
\end{align}
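As a concrete instance of this notation (an illustration added here, with the test function sin as our own choice): the central-difference approximation of a derivative is known to have an error expansion in even powers of h, so k_0 = 2, k_1 = 4, and halving h should cut the error by roughly t^{k_0} = 4.

```python
import math

def central_diff(f, x, h):
    """Central-difference approximation A_0(h) of f'(x).
    Its error expansion is A^* = A_0(h) + a_0 h^2 + a_1 h^4 + ...,
    so here k_0 = 2, k_1 = 4, and so on."""
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)  # A^* for f = sin at x = 1
err_h = abs(central_diff(math.sin, 1.0, 0.1) - exact)
err_h2 = abs(central_diff(math.sin, 1.0, 0.05) - exact)
# Halving h should cut the error by roughly 2^{k_0} = 4
print(err_h / err_h2)
```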


Purpose

Richardson extrapolation is a process that finds a better approximation of A^\ast by changing the error formula from A^\ast = A_0(h) + O(h^{k_0}) to A^\ast = A_1(h) + O(h^{k_1}). Therefore, by replacing A_0(h) with A_1(h), the truncation error has been reduced from O(h^{k_0}) to O(h^{k_1}) for the same step size h. The general pattern is that A_i(h) is a more accurate estimate than A_j(h) when i > j. By this process, we have achieved a better approximation of A^\ast by subtracting the largest term in the error, which was O(h^{k_0}). This process can be repeated to remove more error terms and obtain even better approximations.


Process

Using the step sizes h and h / t for some constant t, the two formulas for A^\ast are:

\begin{align}
A^\ast &= A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots \\
A^\ast &= A_0\left(\frac{h}{t}\right) + a_0 \left(\frac{h}{t}\right)^{k_0} + a_1 \left(\frac{h}{t}\right)^{k_1} + a_2 \left(\frac{h}{t}\right)^{k_2} + \cdots
\end{align}

To improve our approximation from O(h^{k_0}) to O(h^{k_1}) by removing the first error term, we multiply the second equation by t^{k_0} and subtract the first, giving

(t^{k_0} - 1) A^\ast = \left[ t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h) \right] + \left( t^{k_0} a_1 \left(\frac{h}{t}\right)^{k_1} - a_1 h^{k_1} \right) + \left( t^{k_0} a_2 \left(\frac{h}{t}\right)^{k_2} - a_2 h^{k_2} \right) + O(h^{k_3}).

This multiplication and subtraction was performed because \left[ t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h) \right] is an O(h^{k_1}) approximation of (t^{k_0} - 1) A^\ast. We can solve our current formula for A^\ast to give

A^\ast = \frac{t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1} + \frac{t^{k_0 - k_1} - 1}{t^{k_0} - 1} a_1 h^{k_1} + \frac{t^{k_0 - k_2} - 1}{t^{k_0} - 1} a_2 h^{k_2} + O(h^{k_3})

which can be written as A^\ast = A_1(h) + O(h^{k_1}) by setting

A_1(h) = \frac{t^{k_0} A_0\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1}.


Recurrence relation

A general recurrence relation can be defined for the approximations by

A_{i+1}(h) = \frac{t^{k_i} A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i} - 1}

where k_{i+1} satisfies A^\ast = A_{i+1}(h) + O(h^{k_{i+1}}).
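The recurrence naturally produces a lower triangular table of approximations. The following sketch builds such a table for a generic base method (the helper name richardson_table and the central-difference test case are our own illustration, not part of the article):

```python
import math

def richardson_table(A0, h, t, k, rows):
    """Triangular table of the recurrence
    A_{i+1}(h) = (t^{k_i} A_i(h/t) - A_i(h)) / (t^{k_i} - 1).
    A0: the base approximation as a function of the step size;
    k: the list of known error exponents k_0, k_1, ...;
    returns the most extrapolated (bottom-right) entry."""
    T = [[A0(h / t ** i)] for i in range(rows)]  # first column: A_0 at h, h/t, h/t^2, ...
    for i in range(1, rows):
        for j in range(1, i + 1):
            factor = t ** k[j - 1]
            T[i].append((factor * T[i][j - 1] - T[i - 1][j - 1]) / (factor - 1))
    return T[rows - 1][rows - 1]

# Base method: central difference for the derivative of sin at x = 1,
# whose error expansion has exponents k = 2, 4, 6, ... (our test choice)
A0 = lambda h: (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2.0 * h)
exact = math.cos(1.0)
best = richardson_table(A0, 0.5, 2.0, [2, 4, 6, 8], 5)
print(abs(best - exact))
```

Only two rows of the table are needed at any moment, which is the observation the pseudocode later in the article also makes.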


Properties

The Richardson extrapolation can be considered as a linear sequence transformation. Additionally, the general formula can be used to estimate k_0 (the leading-order step size behavior of the truncation error) when neither its value nor A^\ast is known a priori. Such a technique can be useful for quantifying an unknown rate of convergence. Given approximations of A^\ast from three distinct step sizes h, h / t, and h / s, the exact relationship

A^\ast = \frac{t^{k_0} A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} + O(h^{k_1}) = \frac{s^{k_0} A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1} + O(h^{k_1})

yields an approximate relationship (note that the two O terms above indicate only the leading-order step size behavior; their explicit forms differ, so cancelling them against each other is only approximately valid)

A_i\left(\frac{h}{t}\right) + \frac{A_i\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} \approx A_i\left(\frac{h}{s}\right) + \frac{A_i\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1}

which can be solved numerically to estimate k_0 for some arbitrary valid choices of h, s, and t. As t \neq 1, if t > 0 and s is chosen so that s = t^2, this approximate relation reduces to a quadratic equation in t^{k_0}, which is readily solved for k_0 in terms of h and t.
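In the s = t^2 case, the same leading-order cancellation gives a simple closed-form estimator: successive differences A(h) - A(h/t) shrink by a factor of roughly t^{k_0}, so k_0 can be read off from their ratio. A sketch (the forward-difference test method is our own assumption):

```python
import math

def estimate_k0(A, h, t):
    """Estimate the leading error exponent k_0 from the three step sizes
    h, h/t, h/t^2 (the s = t^2 special case), using only computed values
    of A. To leading order, A(h) - A(h/t) ~ a_0 h^{k_0} (1 - t^{-k_0}),
    so successive differences shrink by a factor of t^{k_0}."""
    ratio = (A(h) - A(h / t)) / (A(h / t) - A(h / t ** 2))
    return math.log(abs(ratio)) / math.log(t)

# Test method: forward difference of exp at x = 0, whose true
# leading exponent is k_0 = 1 (neither k_0 nor A^* is used above)
A = lambda h: (math.exp(h) - 1.0) / h
k0_est = estimate_k0(A, 0.1, 2.0)
print(k0_est)
```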


Example of Richardson extrapolation

Suppose that we wish to approximate A^\ast, and we have a method A(h) that depends on a small parameter h in such a way that A(h) = A^\ast + C h^n + O(h^{n+1}). Let us define a new function

R(h, t) := \frac{t^n A\left(\frac{h}{t}\right) - A(h)}{t^n - 1}

where h and \frac{h}{t} are two distinct step sizes. Then

R(h, t) = \frac{t^n \left( A^\ast + C \left(\frac{h}{t}\right)^n + O(h^{n+1}) \right) - \left( A^\ast + C h^n + O(h^{n+1}) \right)}{t^n - 1} = A^\ast + O(h^{n+1}).

R(h, t) is called the Richardson extrapolation of A(h), and has a higher-order error estimate O(h^{n+1}) compared to A(h). Very often, it is much easier to obtain a given precision by using R(h) rather than A(h') with a much smaller h', since A(h') can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see examples below).
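The definition of R(h, t) translates directly into code. In this sketch (the forward difference of sin is our own test case) the base method has n = 1, so one extrapolation raises the error from O(h) to O(h^2):

```python
import math

def R(A, h, t, n):
    # Richardson extrapolate R(h, t) = (t^n A(h/t) - A(h)) / (t^n - 1)
    return (t ** n * A(h / t) - A(h)) / (t ** n - 1)

# Forward difference satisfies A(h) = f'(x) + C h + O(h^2), i.e. n = 1
A = lambda h: (math.sin(1.0 + h) - math.sin(1.0)) / h
exact = math.cos(1.0)
h = 0.1
err_A = abs(A(h) - exact)           # O(h) error of the base method
err_R = abs(R(A, h, 2, 1) - exact)  # O(h^2) error after one extrapolation
print(err_A, err_R)
```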


Example pseudocode for Richardson extrapolation

The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE y'(t) = -y^2, y(0) = 1, with the trapezoidal method. In this example we halve the step size h each iteration, so in the discussion above we would have t = 2. The local error of the trapezoidal method can be expressed in terms of odd powers of h, so that the error over multiple steps can be expressed in even powers; this leads us to raise t to the second power and to take powers of 4 = 2^2 = t^2 in the pseudocode. We want to find the value of y(5), which has the exact value \frac{1}{1 + 5} = \frac{1}{6} = 0.1666\ldots, since the exact solution of the ODE is y(t) = \frac{1}{1 + t}. This pseudocode assumes that a function called Trapezoidal(f, tStart, tEnd, h, y0) exists which attempts to compute y(tEnd) by performing the trapezoidal method on the function f, with starting point y0, starting time tStart, and step size h. Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.

tStart = 0           % Starting time
tEnd = 5             % Ending time
f = -y^2             % The derivative of y, so y' = f(t, y(t)) = -y^2
                     % The solution to this ODE is y = 1/(1 + t)
y0 = 1               % The initial position (i.e. y0 = y(tStart) = y(0) = 1)
tolerance = 10^-11   % 10 digit accuracy is desired
maxRows = 20         % Don't allow the iteration to continue indefinitely
initialH = tEnd - tStart   % Pick an initial step size

% Were we able to find the solution to within the desired tolerance? Not yet.
haveWeFoundSolution = false

h = initialH

% Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates.
% Note that this will be a lower triangular matrix and that at most two rows are actually
% needed at any time in the computation.
A = zeroMatrix(maxRows, maxRows)

% Compute the top left element of the matrix.
% The first row of this (lower triangular) matrix has now been filled.
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

% Each row of the matrix requires one call to Trapezoidal.
% This loop starts by filling the second row of the matrix,
% since the first row was computed above.
for i = 1 : maxRows - 1   % Starting at i = 1, iterate at most maxRows - 1 times
    % Halve the previous value of h since this is the start of a new row.
    h = h/2

    % Start filling row i+1 from the left by calling
    % the Trapezoidal function with this new smaller step size.
    A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

    % Go across this current (i+1)-th row until the diagonal is reached.
    for j = 1 : i
        % To compute A(i + 1, j + 1), which is the next Richardson extrapolate,
        % use the most recently computed value (i.e. A(i + 1, j))
        % and the value from the row above it (i.e. A(i, j)).
        A(i + 1, j + 1) = ((4^j).*A(i + 1, j) - A(i, j))/(4^j - 1);
    end

    % After leaving the above inner loop, the diagonal element of row i + 1 has been
    % computed. This diagonal element is the latest Richardson extrapolate.
    % The difference between this extrapolate and the last extrapolate of row i is a good
    % indication of the error.
    if (absoluteValue(A(i + 1, i + 1) - A(i, i)) < tolerance)   % If within tolerance
        % Display the result of the Richardson extrapolation.
        print("y = ", A(i + 1, i + 1))
        haveWeFoundSolution = true
        break   % Done, so leave the loop
    end
end

% If we were not able to find a solution to within the desired tolerance
if (not haveWeFoundSolution)
    print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
    print("The last computed extrapolate was ", A(maxRows, maxRows))
end
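For something directly executable, the following is a rough Python transcription of the pseudocode above. It is a sketch under two assumptions the article does not spell out: the implicit trapezoidal step is solved in closed form (possible here because f(t, y) = -y^2 makes the update a quadratic in the new value), and the initial step size is taken as (tEnd - tStart)/8, since at very large steps this implicit solve has no real solution.

```python
import math

def trapezoidal_step(y, h):
    # One implicit trapezoidal step for y' = -y^2:
    #   y1 = y + (h/2)(-y^2 - y1^2)
    # which is a quadratic in y1, solved here in closed form
    # (this specialization to f = -y^2 is our assumption).
    c = y - 0.5 * h * y * y
    return (-1.0 + math.sqrt(1.0 + 2.0 * h * c)) / h

def trapezoidal(t_start, t_end, h, y0):
    # Integrate from t_start to t_end with fixed step h
    y = y0
    for _ in range(round((t_end - t_start) / h)):
        y = trapezoidal_step(y, h)
    return y

def richardson_ode(t_start, t_end, y0, tol=1e-11, max_rows=20):
    # Lower triangular Richardson table with t = 2 and powers of 4,
    # mirroring the pseudocode (only whole rows are kept for clarity)
    h = (t_end - t_start) / 8.0  # modest initial step so the implicit solve is real
    A = [[trapezoidal(t_start, t_end, h, y0)]]
    for i in range(1, max_rows):
        h /= 2.0
        row = [trapezoidal(t_start, t_end, h, y0)]
        for j in range(1, i + 1):
            row.append((4 ** j * row[j - 1] - A[i - 1][j - 1]) / (4 ** j - 1))
        A.append(row)
        # Successive diagonal extrapolates gauge the remaining error
        if abs(A[i][i] - A[i - 1][i - 1]) < tol:
            return A[i][i]
    return A[-1][-1]

y5 = richardson_ode(0.0, 5.0, 1.0)  # exact value is 1/6
print(y5)
```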


See also

* Aitken's delta-squared process
* Takebe Kenko
* Richardson iteration


References

* C. Brezinski and M. Redivo Zaglia, ''Extrapolation Methods. Theory and Practice'', North-Holland, 1991.
* Ivan Dimov, Zahari Zlatev, Istvan Farago, Agnes Havasi, ''Richardson Extrapolation: Practical Aspects and Applications'', Walter de Gruyter GmbH & Co KG, 2017.


External links

* Richardson extrapolation, by Robert Israel (http://www.math.ubc.ca/~israel/m215/rich/rich.html)
* Matlab code for generic Richardson extrapolation.
* Julia code for generic Richardson extrapolation.