In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit method for numerical integration of ordinary differential equations and is the simplest Runge–Kutta method. The Euler method is named after Leonhard Euler, who first proposed it in his book ''Institutionum calculi integralis'' (published 1768–1770). The Euler method is a first-order method, which means that the local error (error per step) is proportional to the square of the step size, and the global error (error at a given time) is proportional to the step size. The Euler method often serves as the basis to construct more complex methods, e.g., the predictor–corrector method.


Geometrical description


Purpose and why it works

Consider the problem of calculating the shape of an unknown curve which starts at a given point and satisfies a given differential equation. Here, a differential equation can be thought of as a formula by which the slope of the tangent line to the curve can be computed at any point on the curve, once the position of that point has been calculated. The idea is that while the curve is initially unknown, its starting point, which we denote by A_0, is known (see Figure 1). Then, from the differential equation, the slope of the curve at A_0 can be computed, and so can the tangent line. Take a small step along that tangent line up to a point A_1. Along this small step, the slope does not change too much, so A_1 will be close to the curve. If we pretend that A_1 is still on the curve, the same reasoning as for the point A_0 above can be used. After several steps, a polygonal curve (A_0, A_1, A_2, A_3, \dots) is computed. In general, this curve does not diverge too far from the original unknown curve, and the error between the two curves can be made small if the step size is small enough and the interval of computation is finite.
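The tangent-stepping construction above can be sketched in code. The function name and the sample slope function are illustrative assumptions, not from the source; the sketch simply records the vertices A_0, A_1, A_2, ... of the polygonal curve.

```python
def polygonal_curve(slope, t0, y0, h, n_steps):
    """Return the vertices A_0, ..., A_n of the Euler polygon."""
    points = [(t0, y0)]
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * slope(t, y)   # step along the tangent line at (t, y)
        t = t + h
        points.append((t, y))
    return points

# Example: the curve with slope y' = y starting at (0, 1), step size 0.5.
vertices = polygonal_curve(lambda t, y: y, 0.0, 1.0, 0.5, 4)
```

Each vertex is computed only from the previous one, which is exactly why a smaller step keeps the polygon closer to the true curve.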


First-order process

Suppose we are given the values t_0 and y(t_0), and that the derivative of y is a given function of t and y, written y'(t) = f\bigl(t,y(t)\bigr). Begin the process by setting y_0 = y(t_0). Next, choose a value h for the size of every step along the t-axis, and set t_n = t_0 + nh (or equivalently t_{n+1} = t_n + h). Now, the Euler method is used to find y_{n+1} from y_n and t_n: : y_{n+1} = y_n + hf(t_n,y_n). The value of y_n is an approximation of the solution at time t_n, i.e., y_n \approx y(t_n). The Euler method is explicit, i.e. the solution y_{n+1} is an explicit function of y_i for i \leq n.
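As a minimal sketch (the function and variable names are illustrative, not from the source), the update rule can be implemented as:

```python
def euler(f, t0, y0, h, n_steps):
    """Apply y_{n+1} = y_n + h*f(t_n, y_n) for n_steps steps."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)   # explicit: the new value uses only known values
        t = t + h
    return y

# Example: y' = t with y(0) = 0; the exact solution is y(t) = t^2/2.
approx = euler(lambda t, y: t, 0.0, 0.0, 0.1, 10)   # approximates y(1) = 0.5
```

Because the right-hand side is evaluated only at already-known points, no equation solving is needed at any step.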


Higher-order process

While the Euler method integrates a first-order ODE, any ODE of order N can be represented as a system of first-order ODEs. Given an ODE of order N, : y^{(N)}(t) = f\left(t, y(t), y'(t), \ldots, y^{(N-1)}(t)\right), as well as h, t_0, and y_0, y'_0, \dots, y^{(N-1)}_0, we apply the following formula until we reach the approximation of the solution to the ODE at the desired time: : \vec{y}_{i+1} = \begin{pmatrix} y_{i+1}\\ y'_{i+1} \\ \vdots\\ y_{i+1}^{(N-2)}\\ y_{i+1}^{(N-1)} \end{pmatrix} = \begin{pmatrix} y_i + h\cdot y'_i\\ y'_i + h\cdot y''_i\\ \vdots\\ y^{(N-2)}_i + h\cdot y^{(N-1)}_i\\ y^{(N-1)}_i + h\cdot f\left(t_i, y_i, y'_i, \ldots, y^{(N-1)}_i\right) \end{pmatrix}. These first-order systems can be handled by Euler's method or, in fact, by any other scheme for first-order systems.
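As a sketch, the vector recurrence can be implemented by advancing the state (y, y', ..., y^(N-1)) one Euler step at a time. The helper name is an assumption; f supplies the highest derivative y^(N).

```python
def euler_system(f, t0, state0, h, n_steps):
    """Euler for an order-N ODE written as a first-order system.

    state holds (y, y', ..., y^(N-1)); f(t, y, y', ..., y^(N-1)) = y^(N).
    """
    t, state = t0, list(state0)
    for _ in range(n_steps):
        # each component is advanced by h times the next-higher derivative
        new = [state[k] + h * state[k + 1] for k in range(len(state) - 1)]
        new.append(state[-1] + h * f(t, *state))   # last row uses f
        t, state = t + h, new
    return state

# Example: y'' = -y (so f(t, y, y') = -y) with y(0) = 1, y'(0) = 0.
state = euler_system(lambda t, y, yp: -y, 0.0, [1.0, 0.0], 0.1, 1)
```

One step with h = 0.1 gives y_1 = 1.0 and y'_1 = -0.1, matching the first two rows of the recurrence.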


First-order example

Given the initial value problem :y'=y, \quad y(0)=1, we would like to use the Euler method to approximate y(4).


Using step size equal to 1 (h = 1)

The Euler method is : y_{n+1} = y_n + hf(t_n,y_n), so first we must compute f(t_0, y_0). In this simple differential equation, the function f is defined by f(t,y) = y. We have : f(t_0,y_0) = f(0,1) = 1. By doing the above step, we have found the slope of the line that is tangent to the solution curve at the point (0,1). Recall that the slope is defined as the change in y divided by the change in t, or \frac{\Delta y}{\Delta t}. The next step is to multiply the above value by the step size h, which we take equal to one here: : h \cdot f(y_0) = 1 \cdot 1 = 1. Since the step size is the change in t, multiplying the step size by the slope of the tangent gives a change in y. This value is then added to the initial y value to obtain the next value to be used for computations: : y_1 = y_0 + hf(y_0) = 1 + 1 \cdot 1 = 2. The above steps are repeated to find y_2, y_3 and y_4: : \begin{align} y_2 &= y_1 + hf(y_1) = 2 + 1 \cdot 2 = 4, \\ y_3 &= y_2 + hf(y_2) = 4 + 1 \cdot 4 = 8, \\ y_4 &= y_3 + hf(y_3) = 8 + 1 \cdot 8 = 16. \end{align} Due to the repetitive nature of this algorithm, it can be helpful to organize the computations in a chart, as seen below, to avoid making errors.

 n   t_n   y_n   f(t_n, y_n)   h f(t_n, y_n)   y_{n+1}
 0   0     1     1             1               2
 1   1     2     2             2               4
 2   2     4     4             4               8
 3   3     8     8             8               16

The conclusion of this computation is that y_4 = 16. The exact solution of the differential equation is y(t) = e^t, so y(4) = e^4 \approx 54.598. Although the approximation of the Euler method was not very precise in this specific case, particularly due to the large step size h, its behaviour is qualitatively correct, as the figure shows.
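The computation above can be reproduced in a few lines (variable names are illustrative):

```python
# Worked example: y' = y, y(0) = 1, step size h = 1, four steps.
y, h = 1.0, 1.0
values = [y]
for _ in range(4):
    y = y + h * y          # f(t, y) = y, so y_{n+1} = y_n + h*y_n
    values.append(y)
# values holds y_0 through y_4, matching the values computed above
```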


Using other step sizes

As suggested in the introduction, the Euler method is more accurate if the step size h is smaller. The table below shows the result with different step sizes. The top row corresponds to the example in the previous section, and the second row is illustrated in the figure.

 step size h   Euler approximation of y(4)   error
 1             16.00                         38.60
 0.25          35.53                         19.07
 0.1           45.26                          9.34
 0.05          49.56                          5.03
 0.025         51.98                          2.62
 0.0125        53.26                          1.34

The error recorded in the last column of the table is the difference between the exact solution at t = 4 and the Euler approximation. In the bottom of the table, the step size is half the step size in the previous row, and the error is also approximately half the error in the previous row. This suggests that the error is roughly proportional to the step size, at least for fairly small values of the step size. This is true in general, also for other equations; see the section ''Global truncation error'' for more details. Other methods, such as the midpoint method also illustrated in the figures, behave more favourably: the global error of the midpoint method is roughly proportional to the ''square'' of the step size. For this reason, the Euler method is said to be a first-order method, while the midpoint method is second order. We can extrapolate from the above table that the step size needed to get an answer that is correct to three decimal places is approximately 0.00001, meaning that we need 400,000 steps. This large number of steps entails a high computational cost. For this reason, higher-order methods such as Runge–Kutta methods or linear multistep methods are employed, especially if high accuracy is desired.
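The first-order behaviour can be checked numerically. This sketch (the helper name is illustrative) compares the errors in y(4) for successively halved step sizes:

```python
import math

def euler_y4(h):
    """Forward Euler approximation of y(4) for y' = y, y(0) = 1."""
    y = 1.0
    for _ in range(round(4 / h)):
        y = y + h * y
    return y

errors = [math.exp(4) - euler_y4(h) for h in (0.01, 0.005, 0.0025)]
ratios = [errors[i] / errors[i + 1] for i in (0, 1)]
# each ratio is close to 2: halving h roughly halves the error
```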


Higher-order example

For this third-order example, assume that the following information is given: : \begin{align} & y''' + 4ty'' - t^2 y' - (\cos t)\,y = \sin t \\ & t_0 = 0 \\ & y_0 = y(t_0) = 2 \\ & y'_0 = y'(t_0) = -1 \\ & y''_0 = y''(t_0) = 3 \\ & h = 0.5 \end{align} From this we can isolate y''' to get the equation: : f\left(t, y, y', y''\right) = y''' = \sin t + (\cos t)\,y + t^2 y' - 4ty''. Using that we can get the solution for \vec{y}_1: : \vec{y}_1 = \begin{pmatrix} y_1 \\ y'_1 \\ y''_1 \end{pmatrix} = \begin{pmatrix} y_0 + h\cdot y'_0 \\ y'_0 + h\cdot y''_0 \\ y''_0 + h\cdot f\left(t_0, y_0, y'_0, y''_0\right) \end{pmatrix} = \begin{pmatrix} 2 + 0.5 \cdot (-1) \\ -1 + 0.5 \cdot 3 \\ 3 + 0.5 \cdot \left(\sin 0 + (\cos 0) \cdot 2 + 0^2 \cdot (-1) - 4 \cdot 0 \cdot 3\right) \end{pmatrix} = \begin{pmatrix} 1.5 \\ 0.5 \\ 4 \end{pmatrix}. And using the solution for \vec{y}_1, we can get the solution for \vec{y}_2: : \vec{y}_2 = \begin{pmatrix} y_2 \\ y'_2 \\ y''_2 \end{pmatrix} = \begin{pmatrix} y_1 + h\cdot y'_1 \\ y'_1 + h\cdot y''_1 \\ y''_1 + h\cdot f\left(t_1, y_1, y'_1, y''_1\right) \end{pmatrix} = \begin{pmatrix} 1.5 + 0.5 \cdot 0.5 \\ 0.5 + 0.5 \cdot 4 \\ 4 + 0.5 \cdot \left(\sin 0.5 + (\cos 0.5) \cdot 1.5 + 0.5^2 \cdot 0.5 - 4 \cdot 0.5 \cdot 4\right) \end{pmatrix} = \begin{pmatrix} 1.75 \\ 2.5 \\ 0.9604\ldots \end{pmatrix}. We can continue this process using the same formula as long as necessary to find whichever \vec{y}_i is desired.
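The two steps can be checked numerically. This sketch assumes the third-order equation y''' = sin(t) + cos(t)*y + t^2*y' - 4*t*y'' with the stated initial data; the function name is illustrative.

```python
import math

def f(t, y, yp, ypp):
    """Highest derivative: y''' = sin(t) + cos(t)*y + t^2*y' - 4*t*y''."""
    return math.sin(t) + math.cos(t) * y + t**2 * yp - 4 * t * ypp

t, h = 0.0, 0.5
y, yp, ypp = 2.0, -1.0, 3.0          # y_0, y'_0, y''_0
for _ in range(2):
    # tuple assignment uses the old values on the right-hand side
    y, yp, ypp = y + h * yp, yp + h * ypp, ypp + h * f(t, y, yp, ypp)
    t += h
# after two steps: y = 1.75, y' = 2.5, y'' is approximately 0.9604
```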


Derivation

The Euler method can be derived in a number of ways. (1) Firstly, there is the geometrical description above. (2) Another possibility is to consider the Taylor expansion of the function y around t_0: : y(t_0 + h) = y(t_0) + h y'(t_0) + \tfrac12 h^2 y''(t_0) + O\left(h^3\right). The differential equation states that y' = f(t,y). If this is substituted in the Taylor expansion and the quadratic and higher-order terms are ignored, the Euler method arises. The Taylor expansion is used below to analyze the error committed by the Euler method, and it can be extended to produce Runge–Kutta methods. (3) A closely related derivation is to substitute the forward finite difference formula for the derivative, : y'(t_0) \approx \frac{y(t_0 + h) - y(t_0)}{h}, in the differential equation y' = f(t,y). Again, this yields the Euler method. A similar computation leads to the midpoint method and the backward Euler method. (4) Finally, one can integrate the differential equation from t_0 to t_0 + h and apply the fundamental theorem of calculus to get: : y(t_0+h) - y(t_0) = \int_{t_0}^{t_0+h} f\bigl(t,y(t)\bigr) \,\mathrm{d}t. Now approximate the integral by the left-hand rectangle method (with only one rectangle): : \int_{t_0}^{t_0+h} f\bigl(t,y(t)\bigr) \,\mathrm{d}t \approx h f\bigl(t_0, y(t_0)\bigr). Combining both equations, one finds again the Euler method. This line of thought can be continued to arrive at various linear multistep methods.


Local truncation error

The local truncation error of the Euler method is the error made in a single step. It is the difference between the numerical solution after one step, y_1, and the exact solution at time t_1 = t_0 + h. The numerical solution is given by : y_1 = y_0 + h f(t_0, y_0). For the exact solution, we use the Taylor expansion mentioned in the section ''Derivation'' above: : y(t_0 + h) = y(t_0) + h y'(t_0) + \tfrac12 h^2 y''(t_0) + O\left(h^3\right). The local truncation error (LTE) introduced by the Euler method is given by the difference between these equations: : \mathrm{LTE} = y(t_0 + h) - y_1 = \tfrac12 h^2 y''(t_0) + O\left(h^3\right). This result is valid if y has a bounded third derivative. This shows that for small h, the local truncation error is approximately proportional to h^2. This makes the Euler method less accurate than higher-order techniques such as Runge–Kutta methods and linear multistep methods, for which the local truncation error is proportional to a higher power of the step size. A slightly different formulation for the local truncation error can be obtained by using the Lagrange form for the remainder term in Taylor's theorem. If y has a continuous second derivative, then there exists a \xi \in [t_0, t_0 + h] such that : \mathrm{LTE} = y(t_0 + h) - y_1 = \tfrac12 h^2 y''(\xi). In the above expressions for the error, the second derivative of the unknown exact solution y can be replaced by an expression involving the right-hand side of the differential equation. Indeed, it follows from the equation y' = f(t,y) that : y''(t_0) = \frac{\partial f}{\partial t}\bigl(t_0, y(t_0)\bigr) + \frac{\partial f}{\partial y}\bigl(t_0, y(t_0)\bigr) \, f\bigl(t_0, y(t_0)\bigr).
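The h^2 behaviour of the local truncation error can be verified numerically for the earlier example y' = y, y(0) = 1. One Euler step gives y_1 = 1 + h, while the exact solution is e^h, so LTE = e^h - (1 + h); since y''(0) = 1, the ratio LTE/h^2 should approach 1/2. This check is a sketch, not part of the original derivation.

```python
import math

# ratio of one-step error to h^2 for y' = y, y(0) = 1
ratios = [(math.exp(h) - (1.0 + h)) / h**2 for h in (0.1, 0.01, 0.001)]
# the ratios decrease toward 1/2 as h shrinks
```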


Global truncation error

The global truncation error is the error at a fixed time t_i, after however many steps the method needs to take to reach that time from the initial time. The global truncation error is the cumulative effect of the local truncation errors committed in each step. The number of steps is easily determined to be \frac{t_i - t_0}{h}, which is proportional to \frac{1}{h}, and the error committed in each step is proportional to h^2 (see the previous section). Thus, it is to be expected that the global truncation error will be proportional to h. This intuitive reasoning can be made precise. If the solution y has a bounded second derivative and f is Lipschitz continuous in its second argument, then the global truncation error (denoted |y(t_i) - y_i|) is bounded by : |y(t_i) - y_i| \le \frac{hM}{2L}\left(e^{L(t_i - t_0)} - 1\right), where M is an upper bound on the second derivative of y on the given interval and L is the Lipschitz constant of f. More concretely, when y'(t) = f(t,y), one may take L = \max\bigl(\bigl|\tfrac{\partial}{\partial y} f(t,y)\bigr|\bigr) (with t treated as a constant), while M = \max\bigl(|y''(t)|\bigr), where y(t) is the exact solution, which depends only on the variable t. The precise form of this bound is of little practical importance, as in most cases the bound vastly overestimates the actual error committed by the Euler method. What is important is that it shows that the global truncation error is (approximately) proportional to h. For this reason, the Euler method is said to be first order.


Example

If we have the differential equation y' = 1 + (t - y)^2 with exact solution y = t + \frac{1}{1-t}, and we want to find M and L for 2 \le t \le 3, then: : L = \max_{2 \le t \le 3}\Bigl|\tfrac{\partial}{\partial y} f(t,y)\Bigr| = \max_{2 \le t \le 3}\Bigl|\tfrac{\partial}{\partial y}\bigl(1 + (t-y)^2\bigr)\Bigr| = \max_{2 \le t \le 3}|2(t-y)| = \max_{2 \le t \le 3}\Bigl|2\Bigl(t - t - \tfrac{1}{1-t}\Bigr)\Bigr| = \max_{2 \le t \le 3}\Bigl|\tfrac{2}{t-1}\Bigr| = 2, : M = \max_{2 \le t \le 3}|y''(t)| = \max_{2 \le t \le 3}\Bigl|\tfrac{\mathrm{d}^2}{\mathrm{d}t^2}\Bigl(t + \tfrac{1}{1-t}\Bigr)\Bigr| = \max_{2 \le t \le 3}\Bigl|\tfrac{2}{(1-t)^3}\Bigr| = 2. Thus we can find the error bound at t = 2.5 and h = 0.5: : \text{error bound} = \frac{hM}{2L}\left(e^{L(t - t_0)} - 1\right) = \frac{0.5 \cdot 2}{2 \cdot 2}\left(e^{2 \cdot 0.5} - 1\right) = 0.42957. Notice that t_0 is equal to 2 because it is the lower bound for t in 2 \le t \le 3.
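The bound and the actual one-step error can be compared directly (variable names are illustrative). With M = L = 2, t_0 = 2 and h = 0.5, one Euler step from y(2) = 1 lands at 2, while the exact value y(2.5) = 11/6 gives an actual error of 1/6, well under the bound:

```python
import math

M, L, t0, h = 2.0, 2.0, 2.0, 0.5
bound = (h * M) / (2 * L) * (math.exp(L * (2.5 - t0)) - 1)   # 0.25*(e - 1)

y_start = 2 + 1 / (1 - 2)                       # exact y(2) = 1
y_euler = y_start + h * (1 + (2 - y_start) ** 2)  # one Euler step to t = 2.5
y_exact = 2.5 + 1 / (1 - 2.5)
actual = abs(y_exact - y_euler)                 # = 1/6, under the bound
```

This illustrates the remark above that the bound overestimates the error actually committed.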


Numerical stability

The Euler method can also be numerically unstable, especially for stiff equations, meaning that the numerical solution grows very large for equations where the exact solution does not. This can be illustrated using the linear equation : y' = -2.3y, \qquad y(0) = 1. The exact solution is y(t) = e^{-2.3t}, which decays to zero as t \to \infty. However, if the Euler method is applied to this equation with step size h = 1, then the numerical solution is qualitatively wrong: it oscillates and grows (see the figure). This is what it means to be unstable. If a smaller step size is used, for instance h = 0.7, then the numerical solution does decay to zero. If the Euler method is applied to the linear equation y' = ky, then the numerical solution is unstable if the product hk is outside the region : \bigl\{ z \in \mathbb{C} : |z + 1| \le 1 \bigr\}, illustrated on the right. This region is called the (linear) ''stability region''. In the example, k = -2.3, so if h = 1 then hk = -2.3, which is outside the stability region, and thus the numerical solution is unstable. This limitation, along with its slow convergence of error with h, means that the Euler method is not often used, except as a simple example of numerical integration. Frequently, models of physical systems contain terms representing fast-decaying elements (i.e. with large negative exponential arguments). Even when these are not of interest in the overall solution, the instability they can induce means that an exceptionally small time step would be required if the Euler method is used.
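The instability can be reproduced directly; the helper name is an assumption. For y' = ky each Euler step multiplies the solution by 1 + hk, so the iteration blows up exactly when |1 + hk| > 1:

```python
def euler_decay(h, n_steps):
    """Forward Euler for y' = -2.3*y with y(0) = 1."""
    y = 1.0
    for _ in range(n_steps):
        y = y + h * (-2.3 * y)    # equivalently y *= (1 + h*(-2.3))
    return y

unstable = euler_decay(1.0, 20)   # 1 + hk = -1.3, magnitude 1.3 > 1: grows
stable = euler_decay(0.7, 20)     # 1 + hk = -0.61, magnitude < 1: decays
```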


Rounding errors

In step n of the Euler method, the rounding error is roughly of the magnitude \varepsilon y_n, where \varepsilon is the machine epsilon. Assuming that the rounding errors are independent random variables, the expected total rounding error is proportional to \frac{\varepsilon}{\sqrt{h}}. Thus, for extremely small values of the step size the truncation error will be small, but the effect of rounding error may be big. Most of the effect of rounding error can be easily avoided if compensated summation is used in the formula for the Euler method.
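A sketch of the Euler method with Kahan (compensated) summation follows; the helper name and structure are illustrative assumptions. The compensation variable recovers the low-order bits that plain addition of many small increments h*f would otherwise discard:

```python
def euler_kahan(f, t0, y0, h, n_steps):
    """Forward Euler with compensated (Kahan) summation of the updates."""
    t, y, c = t0, y0, 0.0          # c carries the lost low-order bits
    for _ in range(n_steps):
        term = h * f(t, y) - c     # fold the running compensation in
        s = y + term               # big + small: low bits of term drop here
        c = (s - y) - term         # recover exactly what was dropped
        y = s
        t += h
    return y

# Agrees with plain Euler on the easy example y' = y, h = 1, four steps.
result = euler_kahan(lambda t, y: y, 0.0, 1.0, 1.0, 4)   # 16.0
```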


Modifications and extensions

A simple modification of the Euler method which eliminates the stability problems noted above is the backward Euler method: : y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}). This differs from the (standard, or forward) Euler method in that the function f is evaluated at the end point of the step, instead of the starting point. The backward Euler method is an implicit method, meaning that the formula for the backward Euler method has y_{n+1} on both sides, so when applying the backward Euler method we have to solve an equation. This makes the implementation more costly. Other modifications of the Euler method that help with stability yield the exponential Euler method or the semi-implicit Euler method. More complicated methods can achieve a higher order (and more accuracy). One possibility is to use more function evaluations. This is illustrated by the midpoint method, which is already mentioned in this article: : y_{n+1} = y_n + h f\left(t_n + \tfrac12 h, y_n + \tfrac12 h f(t_n, y_n)\right). This leads to the family of Runge–Kutta methods. The other possibility is to use more past values, as illustrated by the two-step Adams–Bashforth method: : y_{n+2} = y_{n+1} + \tfrac32 h f(t_{n+1}, y_{n+1}) - \tfrac12 h f(t_n, y_n). This leads to the family of linear multistep methods. There are also modifications that use techniques from compressive sensing to minimize memory usage.
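For the linear test equation y' = ky, the implicit backward Euler equation can be solved in closed form, which makes the stability advantage easy to demonstrate (the helper name is an assumption):

```python
# Backward Euler on y' = k*y: the implicit equation
# y_{n+1} = y_n + h*k*y_{n+1} rearranges to y_{n+1} = y_n / (1 - h*k),
# so no iterative equation solving is needed in this special case.
def backward_euler_linear(k, y0, h, n_steps):
    y = y0
    for _ in range(n_steps):
        y = y / (1 - h * k)
    return y

# Stable even at h = 1 on y' = -2.3*y, where forward Euler oscillates
# and grows; the backward Euler iterates decay toward zero.
y = backward_euler_linear(-2.3, 1.0, 1.0, 20)
```

For nonlinear f the implicit equation generally requires a root-finding step, which is the extra cost mentioned above.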


In popular culture

In the film ''Hidden Figures'', Katherine Johnson resorts to the Euler method in calculating the re-entry of astronaut John Glenn from Earth orbit.


See also

* Crank–Nicolson method
* Gradient descent (similarly uses finite steps, here to find minima of functions)
* List of Runge–Kutta methods
* Linear multistep method
* Numerical integration (for calculating definite integrals)
* Numerical methods for ordinary differential equations




External links

* Euler method implementations in different languages, by Rosetta Code