In numerical analysis, Richardson extrapolation is a sequence acceleration method used to improve the rate of convergence of a sequence of estimates of some value A^\ast = \lim_{h \to 0} A(h). In essence, given the value of A(h) for several values of h, we can estimate A^\ast by extrapolating the estimates to h = 0. It is named after Lewis Fry Richardson, who introduced the technique in the early 20th century, though the idea was already known to Christiaan Huygens in his calculation of π. In the words of Birkhoff and Rota, "its usefulness for practical computations can hardly be overestimated." Practical applications of Richardson extrapolation include Romberg integration, which applies Richardson extrapolation to the trapezoid rule, and the Bulirsch–Stoer algorithm for solving ordinary differential equations.


Example of Richardson extrapolation

Suppose that we wish to approximate A^*, and we have a method A(h) that depends on a small parameter h in such a way that A(h) = A^\ast + C h^n + O(h^{n+1}). Let us define a new method

: R(h, t) := \frac{t^n A\!\left(\frac{h}{t}\right) - A(h)}{t^n - 1},

where h and \frac{h}{t} are two distinct step sizes. Then

: R(h, t) = \frac{t^n \left( A^* + C \left(\frac{h}{t}\right)^n + O(h^{n+1}) \right) - \left( A^* + C h^n + O(h^{n+1}) \right)}{t^n - 1} = A^* + O(h^{n+1}).

R(h, t) is called the Richardson extrapolation of A(h), and has a higher-order error estimate O(h^{n+1}) compared to A(h). Very often, it is much easier to obtain a given precision by using R(h) rather than A(h′) with a much smaller h′, where A(h′) can cause problems due to limited precision (rounding errors) and/or due to the increasing number of calculations needed (see the examples below).
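As a short numerical illustration (our own, not from the original article), take A(h) to be the forward-difference approximation of f'(x), whose leading error term is proportional to h (so n = 1), and extrapolate with t = 2. In MATLAB/Octave:

f = @(x) exp(x);                  % test function with known derivative f'(x) = exp(x)
x = 1.0;
A = @(h) (f(x + h) - f(x)) / h;   % first-order accurate: A(h) = f'(x) + O(h)

h = 0.1;  t = 2;  n = 1;
R = (t^n * A(h/t) - A(h)) / (t^n - 1);   % Richardson extrapolate, error O(h^2)

fprintf('A(h)   error: %.2e\n', abs(A(h) - exp(x)))
fprintf('R(h,t) error: %.2e\n', abs(R - exp(x)))

Here the one-sided difference is accurate only to O(h), while the single extrapolation step cancels the leading error term and leaves an O(h^2) error.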


General formula

Let A_0(h) be an approximation of A^* (the exact value) that depends on a positive step size h, with an error formula of the form

: A^* = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots

where the a_i are unknown constants and the k_i are known constants such that h^{k_i} > h^{k_{i+1}}. Furthermore, k_0 is the leading-order step size behavior of the truncation error, in the sense that A^* = A_0(h) + O(h^{k_0}). The exact value sought can be given by

: A^* = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + \cdots

which can be simplified with Big O notation to be

: A^* = A_0(h) + a_0 h^{k_0} + O(h^{k_1}).

Using the step sizes h and h/t for some constant t, the two formulas for A^* are

: A^* = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + O(h^{k_3})

and

: A^* = A_0\!\left(\frac{h}{t}\right) + a_0 \left(\frac{h}{t}\right)^{k_0} + a_1 \left(\frac{h}{t}\right)^{k_1} + a_2 \left(\frac{h}{t}\right)^{k_2} + O(h^{k_3}).

Multiplying the second equation by t^{k_0} and subtracting the first equation gives

: (t^{k_0} - 1) A^* = \left[ t^{k_0} A_0\!\left(\frac{h}{t}\right) - A_0(h) \right] + \left( t^{k_0} a_1 \left(\frac{h}{t}\right)^{k_1} - a_1 h^{k_1} \right) + \left( t^{k_0} a_2 \left(\frac{h}{t}\right)^{k_2} - a_2 h^{k_2} \right) + O(h^{k_3}).

This can be solved for A^* to give

: A^* = \frac{t^{k_0} A_0\!\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1} + \frac{t^{k_0 - k_1} - 1}{t^{k_0} - 1} a_1 h^{k_1} + \frac{t^{k_0 - k_2} - 1}{t^{k_0} - 1} a_2 h^{k_2} + O(h^{k_3}),

which can be written as A^* = A_1(h) + O(h^{k_1}) by setting

: A_1(h) = \frac{t^{k_0} A_0\!\left(\frac{h}{t}\right) - A_0(h)}{t^{k_0} - 1}.

Therefore, by replacing A_0(h) with A_1(h), the truncation error has been reduced from O(h^{k_0}) to O(h^{k_1}) for the same step size h. The general pattern is that A_i(h) is a more accurate estimate than A_j(h) when i > j. By this process, we have achieved a better approximation of A^* by subtracting the largest term in the error, which was O(h^{k_0}). This process can be repeated to remove more error terms and obtain even better approximations. A general recurrence relation for the approximations is

: A_{i+1}(h) = \frac{t^{k_i} A_i\!\left(\frac{h}{t}\right) - A_i(h)}{t^{k_i} - 1},

where k_{i+1} satisfies

: A^* = A_{i+1}(h) + O(h^{k_{i+1}}).

Richardson extrapolation can be considered as a linear sequence transformation.

Additionally, the general formula can be used to estimate k_0 (the leading-order step size behavior of the truncation error) when neither its value nor A^* is known a priori. Such a technique can be useful for quantifying an unknown rate of convergence. Given approximations of A^* from three distinct step sizes h, h/t, and h/s, the exact relationship

: A^* = \frac{t^{k_0} A_i\!\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} + O(h^{k_1}) = \frac{s^{k_0} A_i\!\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1} + O(h^{k_1})

yields the approximate relationship (note that the two O(h^{k_1}) terms above only indicate the leading-order step size behavior; their explicit forms differ, so cancelling them against each other is only approximately valid)

: A_i\!\left(\frac{h}{t}\right) + \frac{A_i\!\left(\frac{h}{t}\right) - A_i(h)}{t^{k_0} - 1} \approx A_i\!\left(\frac{h}{s}\right) + \frac{A_i\!\left(\frac{h}{s}\right) - A_i(h)}{s^{k_0} - 1},

which can be solved numerically to estimate k_0 for some arbitrary choices of h, s, and t.
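The recurrence above translates directly into a small extrapolation table. The following MATLAB/Octave function is a sketch of our own (the name richardsonTable is hypothetical, not from the source): given a column of estimates A_0(h), A_0(h/t), A_0(h/t^2), ... and the known exponents k_0, k_1, ..., it applies the recurrence column by column.

function best = richardsonTable(A0, t, k)
    % A0: column of estimates, A0(r) = A_0(h / t^(r-1)) for r = 1..m
    % t:  ratio between successive step sizes
    % k:  known error exponents [k_0, k_1, ..., k_{m-2}]
    m = length(A0);
    T = zeros(m, m);
    T(:, 1) = A0(:);                 % first column: the raw estimates
    for j = 1 : m - 1                % build column j+1 from column j
        for r = 1 : m - j
            % T(r, j+1) = A_j(h / t^(r-1)), by the recurrence A_{i+1}(h)
            T(r, j + 1) = (t^k(j) * T(r + 1, j) - T(r, j)) / (t^k(j) - 1);
        end
    end
    best = T(1, m);                  % highest-order extrapolate
end

For Romberg integration, for example, A_0 would come from the trapezoid rule with t = 2 and k = [2, 4, 6, ...]. The approximate relationship for an unknown k_0 can likewise be handed to a root finder; a sketch, again with a hypothetical name:

% Estimate k_0 from A_i(h), A_i(h/t), A_i(h/s) by solving the approximate
% relationship above for k_0 with fzero (initial guess 2):
estimateK0 = @(Ah, Aht, Ahs, t, s) fzero(@(kk) ...
    (Aht + (Aht - Ah) / (t^kk - 1)) - (Ahs + (Ahs - Ah) / (s^kk - 1)), 2);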


Example pseudocode for Richardson extrapolation

The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODE y'(t) = -y^2, y(0) = 1, with the trapezoidal method. In this example we halve the step size h each iteration, so in the discussion above we would have t = 2. The error of the trapezoidal method can be expressed in terms of odd powers of h, so that the error over multiple steps can be expressed in even powers; this leads us to raise t to the second power and to take powers of 4 = 2^2 = t^2 in the pseudocode. We want to find the value of y(5), which has the exact value y(5) = \frac{1}{1 + 5} = \frac{1}{6} \approx 0.1666\ldots, since the exact solution of the ODE is y(t) = \frac{1}{1 + t}.

This pseudocode assumes that a function called Trapezoidal(f, tStart, tEnd, h, y0) exists, which attempts to compute y(tEnd) by performing the trapezoidal method on the function f, with initial value y0 at tStart and step size h.

Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.

tStart = 0          % Starting time
tEnd = 5            % Ending time
f = -y^2            % The derivative of y, so y' = f(t, y(t)) = -y^2
                    % The solution to this ODE is y = 1/(1 + t)
y0 = 1              % The initial position (i.e. y0 = y(tStart) = y(0) = 1)
tolerance = 10^-11  % 10 digit accuracy is desired
maxRows = 20        % Don't allow the iteration to continue indefinitely
initialH = tEnd - tStart    % Pick an initial step size (here, the full interval)
haveWeFoundSolution = false % Were we able to find the solution to within the desired tolerance? not yet.

h = initialH

% Create a 2D matrix of size maxRows by maxRows to hold the Richardson extrapolates.
% Note that this will be a lower triangular matrix and that at most two rows are actually
% needed at any time in the computation.
A = zeroMatrix(maxRows, maxRows)

% Compute the top left element of the matrix.
% The first row of this (lower triangular) matrix has now been filled.
A(1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

% Each row of the matrix requires one call to Trapezoidal.
% This loop starts by filling the second row of the matrix, since the first row was computed above.
for i = 1 : maxRows - 1 % Starting at i = 1, iterate at most maxRows - 1 times
    h = h/2 % Halve the previous value of h since this is the start of a new row.

    % Start filling row i + 1 from the left by calling the Trapezoidal function with this new smaller step size.
    A(i + 1, 1) = Trapezoidal(f, tStart, tEnd, h, y0)

    for j = 1 : i % Go across this current (i+1)-th row until the diagonal is reached.
        % To compute A(i + 1, j + 1), which is the next Richardson extrapolate,
        % use the most recently computed value (i.e. A(i + 1, j)) and the value from the
        % row above it (i.e. A(i, j)).
        A(i + 1, j + 1) = ((4^j).*A(i + 1, j) - A(i, j))/(4^j - 1);
    end

    % After leaving the above inner loop, the diagonal element of row i + 1 has been computed.
    % This diagonal element is the latest Richardson extrapolate to be computed.
    % The difference between this extrapolate and the last extrapolate of row i is a good
    % indication of the error.
    if (absoluteValue(A(i + 1, i + 1) - A(i, i)) < tolerance) % If the result is within tolerance
        print("y = ", A(i + 1, i + 1)) % Display the result of the Richardson extrapolation
        haveWeFoundSolution = true
        break % Done, so leave the loop
    end
end

if (haveWeFoundSolution == false) % If we were not able to find a solution to within the desired tolerance
    print("Warning: Not able to find solution to within the desired tolerance of ", tolerance)
    print("The last computed extrapolate was ", A(maxRows, maxRows))
end
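The pseudocode leaves Trapezoidal unspecified. One way it could look is the following minimal MATLAB/Octave sketch of the implicit trapezoidal rule (our own illustration; the signature and the fixed-point inner iteration are assumptions, not from the source). It would be called with a function handle such as f = @(t, y) -y^2 rather than the pseudocode's symbolic f = -y^2.

function y = Trapezoidal(f, tStart, tEnd, h, y0)
    % Implicit trapezoidal rule for y' = f(t, y):
    %   y_{n+1} = y_n + (h/2) * (f(t_n, y_n) + f(t_{n+1}, y_{n+1}))
    % The implicit equation is solved here by simple fixed-point iteration,
    % which converges for sufficiently small h.
    n = round((tEnd - tStart) / h);     % number of steps
    y = y0;
    t = tStart;
    for step = 1 : n
        yNew = y;                       % initial guess for y_{n+1}
        for it = 1 : 10                 % a few fixed-point iterations
            yNew = y + (h/2) * (f(t, y) + f(t + h, yNew));
        end
        y = yNew;
        t = t + h;
    end
end

A symmetric one-step method such as the implicit trapezoidal rule has a global error expansion in even powers of h, which is what justifies the powers of 4 used to build the extrapolation table above.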


See also

* Aitken's delta-squared process
* Takebe Kenko
* Richardson iteration




External links


* Fundamental Methods of Numerical Extrapolation With Applications (mit.edu)
* Richardson-Extrapolation
* Matlab code for generic Richardson extrapolation: https://www.mathworks.com/matlabcentral/fileexchange/24388-richardson-extrapolation
* Julia code for generic Richardson extrapolation