
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f', and an initial guess x_0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then

x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}

is a better approximation of the root than x_0. Geometrically, (x_1, 0) is the x-intercept of the tangent of the graph of f at (x_0, f(x_0)): that is, the improved guess, x_1, is the unique root of the linear approximation of f at the initial guess, x_0. The process is repeated as

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}

until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.
Description
The purpose of Newton's method is to find a root of a function. The idea is to start with an initial guess at a root, approximate the function by its tangent line near the guess, and then take the root of the linear approximation as a next guess at the function's root. This will typically be closer to the function's root than the previous guess, and the method can be iterated.
The best linear approximation to an arbitrary differentiable function f(x) near the point x = x_n is the tangent line to the curve, with equation

y = f(x_n) + f'(x_n)(x - x_n).

The root of this linear function, the place where it intercepts the x-axis, can be taken as a closer approximate root x_{n+1}:

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
The process can be started with any arbitrary initial guess x_0, though it will generally require fewer iterations to converge if the guess is close to one of the function's roots. The method will usually converge, provided that the initial guess is close enough to the unknown root and that f'(x_0) ≠ 0. Furthermore, for a root of multiplicity 1, the convergence is at least quadratic (see ''Rate of convergence'') in some sufficiently small neighbourhood of the root: the number of correct digits of the approximation roughly doubles with each additional step. More details can be found in the ''Analysis'' section below.
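The core update is simple enough to state as a short code sketch. The following is a minimal illustration (not from the article) of the iteration x_{n+1} = x_n - f(x_n)/f'(x_n), applied to the hypothetical example f(x) = x^3 - x - 2, whose real root is near 1.5214; the error roughly squares at each step.

def newton_demo(f, f_prime, x0, steps=6):
    """Run a few Newton steps and print the iterates."""
    x = x0
    for n in range(steps):
        x = x - f(x) / f_prime(x)  # x_{n+1} = x_n - f(x_n)/f'(x_n)
        print(n + 1, x)
    return x

newton_demo(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, x0=2.0)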
Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if f or its derivatives are computationally expensive to evaluate.
History
In the Old Babylonian period (19th–16th century BCE), the side of a square of known area could be effectively approximated, and this is conjectured to have been done using a special case of Newton's method, described algebraically below, by iteratively improving an initial estimate; an equivalent method can be found in Heron of Alexandria's ''Metrica'' (1st–2nd century CE), so it is often called ''Heron's method''.
Jamshīd al-Kāshī used a method to solve x^P - N = 0 to find roots of N, a method that was algebraically equivalent to Newton's method; a similar method was found in ''Trigonometria Britannica'', published by Henry Briggs in 1633.
The method first appeared in Isaac Newton's work in ''De analysi per aequationes numero terminorum infinitas'' (written in 1669, published in 1711 by William Jones) and in ''De metodis fluxionum et serierum infinitarum'' (written in 1671, translated and published as ''Method of Fluxions'' in 1736 by John Colson).
However, while Newton gave the basic ideas, his method differs from the modern method given above. He applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections. He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producing Taylor series in the latter case.
Newton may have derived his method from a similar, less precise method by the mathematician François Viète; however, the two methods are not the same.
The essence of Viète's own method can be found in the work of the mathematician
Sharaf al-Din al-Tusi.
The Japanese mathematician Seki Kōwa used a form of Newton's method in the 1680s to solve single-variable equations, though the connection with calculus was missing.
Newton's method was first published in 1685 in ''A Treatise of Algebra both Historical and Practical'' by John Wallis. In 1690, Joseph Raphson published a simplified description in ''Analysis aequationum universalis''. Raphson also applied the method only to polynomials, but he avoided Newton's tedious rewriting process by extracting each successive correction from the original polynomial. This allowed him to derive a reusable iterative expression for each problem. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.
Arthur Cayley in 1879 in ''The Newton–Fourier imaginary problem'' was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions.
Practical considerations
Newton's method is a powerful technique: if the derivative of the function at the root is nonzero, then the convergence is at least quadratic. As the method converges on the root, the difference between the root and the approximation is squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method.
Difficulty in calculating the derivative of a function
Newton's method requires that the derivative can be calculated directly. An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation would result in something like the secant method, whose convergence is slower than that of Newton's method.
Failure of the method to converge to the root
It is important to review the proof of quadratic convergence of Newton's method before implementing it, paying particular attention to the assumptions made in the proof. When the method fails to converge, it is because one or more of those assumptions do not hold.
For example, if the first derivative is not well behaved in the neighborhood of a particular root, Newton's method may fail to converge no matter where the initialization is set. In some cases, Newton's method can be stabilized by using successive over-relaxation, or the speed of convergence can be increased by using the same technique.
In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root-finding method.
Slow convergence for roots of multiplicity greater than 1
If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together, it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicity m of the root is known, the following modified algorithm preserves the quadratic convergence rate:

x_{n+1} = x_n - m\frac{f(x_n)}{f'(x_n)}.

This is equivalent to using successive over-relaxation. On the other hand, if the multiplicity m of the root is not known, it is possible to estimate m after carrying out one or two iterations, and then use that value to increase the rate of convergence. A comparison of the plain and modified steps is sketched in the code below.
If the multiplicity m of the root is finite then g(x) = f(x)/f'(x) will have a root at the same location with multiplicity 1. Applying Newton's method to find the root of g(x) recovers quadratic convergence in many cases although it generally involves the second derivative of f(x). In a particularly simple case, if f(x) = x^m then g(x) = x/m and Newton's method finds the root in a single iteration with

x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)} = x_n - \frac{x_n/m}{1/m} = 0.
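As an illustration (not from the article), the following sketch compares the plain Newton step with the multiplicity-aware step on f(x) = (x - 1)^2, a root of multiplicity m = 2; the plain step only halves the error each iteration, while the m-scaled step lands on the root immediately for this quadratic.

def run(step, x0, n=6):
    """Apply an iteration map repeatedly, guarding against a zero derivative."""
    x = x0
    for _ in range(n):
        try:
            x = step(x)
        except ZeroDivisionError:  # f'(x) = 0 once the root is hit exactly
            break
        print(x)

f = lambda x: (x - 1) ** 2
f_prime = lambda x: 2 * (x - 1)

run(lambda x: x - f(x) / f_prime(x), 2.0)      # linear: 1.5, 1.25, 1.125, ...
run(lambda x: x - 2 * f(x) / f_prime(x), 2.0)  # multiplicity-aware: 1.0 in one step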
Slow convergence
The function f(x) = x^2 has a root at 0. [J. E. Dennis, Jr. and Robert B. Schnabel, ''Numerical Methods for Unconstrained Optimization and Nonlinear Equations'', SIAM] Since f is continuously differentiable at its root, the theory guarantees that Newton's method as initialized sufficiently close to the root will converge. However, since the derivative f' is zero at the root, quadratic convergence is not ensured by the theory. In this particular example, the Newton iteration is given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = \frac{x_n}{2}.

It is visible from this that Newton's method could be initialized anywhere and converge to zero, but at only a linear rate. If initialized at 1, dozens of iterations would be required before ten digits of accuracy are achieved.
The function f(x) = x + x^{4/3} also has a root at 0, where it is continuously differentiable. Although the first derivative f' is nonzero at the root, the second derivative f'' is nonexistent there, so that quadratic convergence cannot be guaranteed. In fact the Newton iteration is given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = \frac{\frac{1}{3}x_n^{4/3}}{1 + \frac{4}{3}x_n^{1/3}}.

From this, it can be seen that the rate of convergence is superlinear but subquadratic. This can be seen in tables comparing Newton's method applied to the above function with Newton's method applied to f(x) = x + x^2. The quadratic convergence in the latter iteration is illustrated by the orders of magnitude in the distance from the iterate to the true root (0, 1, 2, 3, 5, 10, 20, 39, ...) being approximately doubled from row to row. While the convergence for x + x^{4/3} is superlinear, the order of magnitude is only multiplied by about 4/3 from row to row (0, 1, 2, 4, 5, 7, 10, 13, ...).
The rate of convergence is distinguished from the number of iterations required to reach a given accuracy. For example, the function f(x) = x^{20} - 1 has a root at 1. Since f'(1) ≠ 0 and f is smooth, it is known that any Newton iteration convergent to 1 will converge quadratically. However, if initialized at 0.5, the first few iterates of Newton's method are approximately 26214, 24904, 23658, 22476, decreasing slowly, with only the 200th iterate being 1.0371. The following iterates are 1.0103, 1.00093, 1.0000082, and 1.00000000065, illustrating quadratic convergence. This highlights that quadratic convergence of a Newton iteration does not mean that only few iterates are required; this only applies once the sequence of iterates is sufficiently close to the root.
Convergence dependent on initialization
The function f(x) = x(1 + x^2)^{-1/2} has a root at 0. The Newton iteration is given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = -x_n^3.

From this, it can be seen that there are three possible phenomena for a Newton iteration. If initialized strictly between −1 and 1, the Newton iteration will converge (super-)quadratically to 0; if initialized exactly at 1 or −1, the Newton iteration will oscillate endlessly between 1 and −1; if initialized anywhere else, the Newton iteration will diverge. This same trichotomy occurs for f(x) = arctan x.
In cases where the function in question has multiple roots, it can be difficult to control, via choice of initialization, which root (if any) is identified by Newton's method. For example, the function f(x) = x(x^2 - 1)(x - 3)e^{-(x-1)^2/2} has roots at −1, 0, 1, and 3. If initialized at −1.488, the Newton iteration converges to 0; if initialized at −1.487, it diverges; if initialized at −1.486, it converges to −1; if initialized at −1.485, it diverges; if initialized at −1.4843, it converges to 3; if initialized at −1.484, it converges to 1. This kind of subtle dependence on initialization is not uncommon; it is frequently studied in the complex plane in the form of the Newton fractal.
Divergence even when initialization is close to the root
Consider the problem of finding a root of f(x) = x^{1/3}. The Newton iteration is

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^{1/3}}{\frac{1}{3}x_n^{-2/3}} = -2x_n.

Unless Newton's method is initialized at the exact root 0, it is seen that the sequence of iterates will fail to converge. For example, even if initialized at the reasonably accurate guess of 0.001, the first several iterates are −0.002, 0.004, −0.008, 0.016, reaching 1048.58, −2097.15, ... by the 20th iterate. This failure of convergence is not contradicted by the analytic theory, since in this case f is not differentiable at its root.
In the above example, failure of convergence is reflected by the failure of f(x_n) to get closer to zero as n increases, as well as by the fact that successive iterates are growing further and further apart. However, the function f(x) = x^{1/3}e^{-x^2} also has a root at 0. The Newton iteration is given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n\left(1 + \frac{3}{6x_n^2 - 1}\right).

In this example, where again f is not differentiable at the root, any Newton iteration not starting exactly at the root will diverge, but with both x_{n+1} - x_n and f(x_n) converging to zero.
[Kenneth L. Judd, ''Numerical Methods in Economics'', MIT Press] This is seen in a table of the iterates with initialization 1. Although the convergence of x_{n+1} - x_n to zero in this case is not very rapid, it can be proved from the iteration formula. This example highlights the possibility that a stopping criterion for Newton's method based only on the smallness of x_{n+1} - x_n and f(x_n) might falsely identify a root.
Oscillatory behavior
It is easy to find situations for which Newton's method oscillates endlessly between two distinct values. For example, for Newton's method as applied to a function f to oscillate between 0 and 1, it is only necessary that the tangent line to f at 0 intersects the x-axis at 1 and that the tangent line to f at 1 intersects the x-axis at 0. This is the case, for example, if f(x) = x^3 - 2x + 2. For this function, it is even the case that Newton's iteration as initialized sufficiently close to 0 or 1 will ''asymptotically'' oscillate between these values. For example, Newton's method as initialized at 0.99 yields iterates 0.99, −0.06317, 1.00628, 0.03651, 1.00196, 0.01162, 1.00020, 0.00120, 1.000002, and so on. This behavior is present despite the presence of a root of f approximately equal to −1.76929.
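A quick sketch (an illustration, not part of the original text) reproduces this oscillation numerically:

def f(x):
    return x**3 - 2*x + 2

def f_prime(x):
    return 3*x**2 - 2

x = 0.99
for _ in range(8):
    x = x - f(x) / f_prime(x)  # iterates hop between neighborhoods of 0 and 1
    print(x)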
Undefinedness of Newton's method
In some cases, it is not even possible to perform the Newton iteration. For example, if f(x) = 1 - x^2, then the Newton iteration is defined by

x_{n+1} = x_n - \frac{1 - x_n^2}{-2x_n} = \frac{x_n^2 + 1}{2x_n}.

So Newton's method cannot be initialized at 0, since this would make x_1 undefined. Geometrically, this is because the tangent line to f at 0 is horizontal (i.e. f'(0) = 0), never intersecting the x-axis.
Even if the initialization is selected so that the Newton iteration can begin, the same phenomenon can block the iteration from being indefinitely continued.
If f has an incomplete domain, it is possible for Newton's method to send the iterates outside of the domain, so that it is impossible to continue the iteration.
For example, the natural logarithm function f(x) = ln x has a root at 1, and is defined only for positive x. Newton's iteration in this case is given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n(1 - \ln x_n).

So if the iteration is initialized at e, the next iterate is 0; if the iteration is initialized at a value larger than e, then the next iterate is negative. In either case, the method cannot be continued.
Analysis
Suppose that the function f has a zero at α, i.e., f(α) = 0, and f is differentiable in a neighborhood of α.
If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x_0 in that neighborhood, the sequence (x_n) will converge to α.
If f is continuously differentiable, its derivative is nonzero at α, ''and'' it has a second derivative at α, then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then:

\Delta x_{i+1} = \frac{f''(\alpha)}{2f'(\alpha)}\left(\Delta x_i\right)^2 + O\left(\Delta x_i\right)^3,

where \Delta x_i = x_i - \alpha.
If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if f is twice continuously differentiable, f'(α) = 0 and f''(α) ≠ 0, then there exists a neighborhood of α such that, for all starting values x_0 in that neighborhood, the sequence of iterates converges linearly, with rate 1/2. Alternatively, if f'(α) = 0 and f'(x) ≠ 0 for x ≠ α in a neighborhood U of α, α being a zero of multiplicity r, and if f ∈ C^r(U), then there exists a neighborhood of α such that, for all starting values x_0 in that neighborhood, the sequence of iterates converges linearly.
However, even linear convergence is not guaranteed in pathological situations.
In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhood U_+ of α, if f is twice differentiable in U_+ and if f' ≠ 0 and f · f'' > 0 in U_+, then, for each x_0 in U_+, the sequence x_k is monotonically decreasing to α.
Proof of quadratic convergence for Newton's iterative method
According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about x_n is:

f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + R_1 \quad (1)

where the Lagrange form of the Taylor series expansion remainder is

R_1 = \frac{1}{2}f''(\xi_n)(\alpha - x_n)^2,

where ξ_n is in between x_n and α.
Since α is the root, (1) becomes:

0 = f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + \frac{1}{2}f''(\xi_n)(\alpha - x_n)^2 \quad (2)

Dividing equation (2) by f'(x_n) and rearranging gives

\frac{f(x_n)}{f'(x_n)} + (\alpha - x_n) = \frac{-f''(\xi_n)}{2f'(x_n)}(\alpha - x_n)^2 \quad (3)

Remembering that x_{n+1} is defined by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad (4)

one finds that

\alpha - x_{n+1} = \frac{-f''(\xi_n)}{2f'(x_n)}(\alpha - x_n)^2.

That is,

\varepsilon_{n+1} = \frac{-f''(\xi_n)}{2f'(x_n)}\,\varepsilon_n^2, \quad \text{where } \varepsilon_n = \alpha - x_n. \quad (5)

Taking the absolute value of both sides gives

\left|\varepsilon_{n+1}\right| = \frac{\left|f''(\xi_n)\right|}{2\left|f'(x_n)\right|}\,\varepsilon_n^2 \quad (6)

Equation (6) shows that the order of convergence is at least quadratic if the following conditions are satisfied:
# f'(x) ≠ 0, for all x ∈ I, where I is the interval [α − |ε_0|, α + |ε_0|];
# f''(x) is continuous, for all x ∈ I;
# M|ε_0| < 1,
where M is given by

M = \frac{1}{2}\left(\sup_{x \in I}\left|f''(x)\right|\right)\left(\sup_{x \in I}\frac{1}{\left|f'(x)\right|}\right).

If these conditions hold,

\left|\varepsilon_{n+1}\right| \leq M\,\varepsilon_n^2.
Fourier conditions
Suppose that f is a concave function on an interval which is strictly increasing. If it is negative at the left endpoint and positive at the right endpoint, the intermediate value theorem guarantees that there is a zero ζ of f somewhere in the interval. From geometrical principles, it can be seen that the Newton iteration x_n starting at the left endpoint is monotonically increasing and convergent, necessarily to ζ.
Joseph Fourier introduced a modification of Newton's method starting at the right endpoint:

y_{n+1} = y_n - \frac{f(y_n)}{f'(x_n)}.

This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit of y_n must also be the zero ζ.
So, in the case of a concave increasing function with a zero, initialization is largely irrelevant. Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step of the iteration can be determined directly from the difference between the location of the iteration from the left and the location of the iteration from the right. If f is twice continuously differentiable, it can be proved using Taylor's theorem that

y_{n+1} - x_{n+1} \leq \frac{\sup\left|f''\right|}{2\inf\left|f'\right|}\,\left(y_n - x_n\right)^2,

showing that this difference in locations converges quadratically to zero.
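As a sketch of this two-sided scheme (an illustration under assumptions, not taken from the article), consider f(x) = ln x, which is concave and strictly increasing on [0.5, 2] with its zero at ζ = 1. The left iterate uses the ordinary Newton step and the right iterate uses Fourier's modification with the derivative taken at the left iterate; the gap y_n − x_n brackets the error.

import math

def bracketed_newton(f, f_prime, x, y, steps=6):
    """Newton from the left (x) plus Fourier's modification from the right (y)."""
    for _ in range(steps):
        x_new = x - f(x) / f_prime(x)   # ordinary Newton step from the left
        y_new = y - f(y) / f_prime(x)   # Fourier step: derivative at the left iterate
        x, y = x_new, y_new
        print(x, y, y - x)              # y - x bounds the distance to the zero
    return x, y

bracketed_newton(math.log, lambda t: 1.0 / t, 0.5, 2.0)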
All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts of monotonicity and concavity are more subtle to formulate. In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative of f. However, in this generalization, Newton's iteration is modified so as to be based on Taylor polynomials rather than the tangent line. In the case of concavity, this modification coincides with the standard Newton method.
Error for k variables
If we seek the root \mathbf{x}_* of a single function f(\mathbf{x}) = 0, then the error \Delta\mathbf{x}^{(n)} = \mathbf{x}^{(n)} - \mathbf{x}_* is a vector such that its components obey, to leading order,

\Delta x_i^{(n+1)} = \sum_{j,k} Q_{ijk}\,\Delta x_j^{(n)}\,\Delta x_k^{(n)},

where Q is a quadratic form involving the second-derivative Hessian matrix H = \left(\frac{\partial^2 f}{\partial x_j\,\partial x_k}\right), evaluated at the root \mathbf{x}_*. As in the one-variable case, each component of the new error is quadratic in the previous error.
Examples
Use of Newton's method to compute square roots
Newton's method is one of many known methods of computing square roots. Given a positive number a, the problem of finding a number x such that x^2 = a is equivalent to finding a root of the function f(x) = x^2 - a. The Newton iteration defined by this function is given by

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^2 - a}{2x_n} = \frac{1}{2}\left(x_n + \frac{a}{x_n}\right).

This happens to coincide with the "Babylonian" method of finding square roots, which consists of replacing an approximate root x_n by the arithmetic mean of x_n and a/x_n. By performing this iteration, it is possible to evaluate a square root to any desired accuracy by only using the basic arithmetic operations.
As an example, consider computing the square root of 612, with the iteration initialized at the values 1, 10, and −20. Each new iterate is obtained by applying the preceding formula to the iterate before it, for instance

x_{n+1} = \frac{1}{2}\left(x_n + \frac{612}{x_n}\right).

With only a few iterations one can obtain a solution accurate to many decimal places; this is true even if the Newton iteration is initialized by the very inaccurate guess of 1.
When computing any nonzero square root, the first derivative of f must be nonzero at the root, and f is a smooth function. So, even before any computation, it is known that any convergent Newton iteration has a quadratic rate of convergence: once a Newton iterate gets close to the root, the number of correct digits approximately doubles with each iteration.
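A short sketch (an illustration, not from the article) reproduces this computation:

def sqrt_newton(a, x0, steps=8):
    """Babylonian/Newton iteration for the square root of a."""
    x = x0
    for _ in range(steps):
        x = 0.5 * (x + a / x)  # arithmetic mean of x and a/x
        print(x)
    return x

sqrt_newton(612.0, 1.0)    # converges to 24.7386337537...
sqrt_newton(612.0, -20.0)  # a negative start converges to the negative root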
Solution of cos x = x^3 using Newton's method
Consider the problem of finding the positive number x with cos x = x^3. We can rephrase that as finding the zero of f(x) = cos x - x^3. We have f'(x) = -sin x - 3x^2. Since |cos x| ≤ 1 for all x and x^3 > 1 for x > 1, we know that our solution lies between 0 and 1.
A starting value of 0 will lead to an undefined result, which illustrates the importance of using a starting point close to the solution. For example, with an initial guess x_0 = 0.5, the sequence given by Newton's method is:

x_1 = 1.112141637097
x_2 = 0.909672693736
x_3 = 0.867263818209
x_4 = 0.865477135298
x_5 = 0.865474033111
x_6 = 0.865474033102

In particular, x_6 is correct to 12 decimal places. The number of correct digits after the decimal point increases from 2 (for x_3) to 5 and 10, illustrating the quadratic convergence.
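A minimal sketch (an illustration, not from the article) reproducing these iterates:

import math

x = 0.5
for n in range(1, 7):
    x = x - (math.cos(x) - x**3) / (-math.sin(x) - 3 * x**2)  # Newton step
    print(n, f"{x:.12f}")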
Multidimensional formulations
Systems of equations
k variables, k functions
One may also use Newton's method to solve systems of k equations, which amounts to finding the (simultaneous) zeroes of k continuously differentiable functions f_1, ..., f_k. This is equivalent to finding the zeroes of a single vector-valued function F(x) = (f_1(x), ..., f_k(x)). In the formulation given above, the scalars x_n are replaced by vectors x_n and, instead of dividing the function f(x_n) by its derivative f'(x_n), one instead has to left multiply the function F(x_n) by the inverse of its k × k Jacobian matrix J_F(x_n).
This results in the expression

x_{n+1} = x_n - J_F(x_n)^{-1} F(x_n),

or, by solving the system of linear equations

J_F(x_n)(x_{n+1} - x_n) = -F(x_n)

for the unknown x_{n+1} - x_n.
k variables, m equations, with m > k
The k-dimensional variant of Newton's method can be used to solve systems of more than k (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix J^+ = (J^T J)^{-1} J^T instead of the inverse of J. If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information.
Example
For example, suppose a set of nonlinear equations F(x) = Y needs to be solved for the vector of points x, given a vector of known values Y. The known values can be absorbed into the function, rewriting the system in the homogeneous form F(x) = 0 and eliminating Y from the equations. At each iteration k, the function vector F(x_k) and the Jacobian matrix J(x_k) are evaluated at the current iterate. The equations to solve for each iteration are

J(x_k)\,\Delta x_k = -F(x_k)

and

x_{k+1} = x_k + \Delta x_k.

The iterations should be repeated until

\left\|F(x_{k+1})\right\| \leq \varepsilon,

where ε is a value acceptably small enough to meet application requirements. With a reasonable initial vector x_0 and tolerance, such an iteration typically converges in only a few steps; a runnable sketch on a concrete system follows.
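The following sketch shows this procedure on a small hypothetical system (the system and starting point are chosen here for illustration and are not the article's original example): f_1(x, y) = x^2 + y^2 - 4 and f_2(x, y) = xy - 1, solved with NumPy's linear solver.

import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

def J(v):
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [y, x]])

v = np.array([2.0, 0.0])                  # initial guess
for k in range(20):
    delta = np.linalg.solve(J(v), -F(v))  # solve J(x_k) dx = -F(x_k)
    v = v + delta
    if np.linalg.norm(F(v)) <= 1e-12:     # stop when the residual is small
        break

print(k + 1, v)  # converges in a few iterations to a point with x^2 + y^2 = 4, xy = 1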
Complex functions
When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped; for many complex functions, the boundaries of the basins of attraction are fractals.
In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. For example, if one uses a real initial condition to seek a root of x^2 + 1, all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this case almost all real initial conditions lead to chaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length.
Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3. Also, for any polynomial, Hubbard, Schleicher, and Sutherland gave a method for selecting a set of initial points such that Newton's method will certainly converge from at least one of them.
In a Banach space
Another generalization is Newton's method to find a root of a functional F defined in a Banach space. In this case the formulation is

X_{n+1} = X_n - \left[F'(X_n)\right]^{-1} F(X_n),

where F'(X_n) is the Fréchet derivative computed at X_n. One needs the Fréchet derivative to be boundedly invertible at each X_n in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem.
Nash–Moser iteration
In the 1950s, John Nash developed a version of Newton's method to apply to the problem of constructing isometric embeddings of general Riemannian manifolds in Euclidean space. The ''loss of derivatives'' problem, present in this context, made the standard Newton iteration inapplicable, since it could not be continued indefinitely (much less converge). Nash's solution involved the introduction of smoothing operators into the iteration. He was able to prove the convergence of his smoothed Newton method, for the purpose of proving an implicit function theorem for isometric embeddings. In the 1960s, Jürgen Moser showed that Nash's methods were flexible enough to apply to problems beyond isometric embedding, particularly in celestial mechanics. Since then, a number of mathematicians, including Mikhael Gromov and Richard Hamilton, have found generalized abstract versions of the Nash–Moser theory. In Hamilton's formulation, the Nash–Moser theorem forms a generalization of the Banach space Newton method which takes place in certain Fréchet spaces.
Modifications
Quasi-Newton methods
When the Jacobian is unavailable or too expensive to compute at every iteration, a
quasi-Newton method can be used.
Chebyshev's third-order method
Since higher-order Taylor expansions offer more accurate local approximations of a function f, it is reasonable to ask why Newton's method relies only on a first-order (tangent line) approximation. In the 19th century, the Russian mathematician Pafnuty Chebyshev explored this idea by developing a variant of Newton's method based on quadratic approximations, which yields third-order convergence.
Over p-adic numbers
In p-adic analysis, the standard method to show that a polynomial equation in one variable has a p-adic root is Hensel's lemma, which uses the recursion from Newton's method on the p-adic numbers. Because of the more stable behavior of addition and multiplication in the p-adic numbers compared to the real numbers (specifically, the unit ball in the p-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line.
q-analog
Newton's method can be generalized with the q-analog of the usual derivative.
Modified Newton methods
Maehly's procedure
A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. When we have already found N solutions x_1, ..., x_N of f(x) = 0, then the next root can be found by applying Newton's method to the deflated equation

F(x) = \frac{f(x)}{\prod_{i=1}^{N}(x - x_i)} = 0,

which leads to the iteration

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n) - f(x_n)\sum_{i=1}^{N}\frac{1}{x_n - x_i}}.

This method is applied to obtain zeros of the Bessel function of the second kind. A sketch of the deflation idea in code follows.
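The following sketch (an illustration, not from the article) finds all three roots of a cubic one at a time, deflating each root already found via Maehly's modified step:

def maehly_next_root(f, f_prime, known_roots, x, iterations=50):
    """Newton's method on f(x)/prod(x - r) without forming the quotient explicitly."""
    for _ in range(iterations):
        s = sum(1.0 / (x - r) for r in known_roots)  # deflation correction term
        x = x - f(x) / (f_prime(x) - f(x) * s)       # Maehly step
    return x

f = lambda x: x**3 - 6 * x**2 + 11 * x - 6           # roots 1, 2, 3
f_prime = lambda x: 3 * x**2 - 12 * x + 11

roots = []
for guess in (0.0, 0.0, 0.0):
    roots.append(maehly_next_root(f, f_prime, roots, guess))
print(roots)  # approximately [1.0, 2.0, 3.0]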
Hirano's modified Newton method
Hirano's modified Newton method is a modification that conserves the convergence of Newton's method while avoiding instability. It was developed to solve complex polynomials.
Interval Newton's method
Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of an insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may change dramatically the value of the function; see Wilkinson's polynomial).
Consider f ∈ C^1(X), where X is a real interval, and suppose that we have an interval extension F' of f', meaning that F' takes as input an interval Y ⊆ X and outputs an interval F'(Y) such that:

F'([y, y]) = \{f'(y)\}, \qquad F'(Y) \supseteq \{f'(y) \mid y \in Y\}.

We also assume that 0 ∉ F'(X), so in particular f has at most one root in X. We then define the interval Newton operator by:

N(Y) = m - \frac{f(m)}{F'(Y)} = \left\{ m - \frac{f(m)}{z} \;\middle|\; z \in F'(Y) \right\},

where m ∈ Y. Note that the hypothesis on F' implies that N(Y) is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence:

X_0 = X, \qquad X_{k+1} = N(X_k) \cap X_k.
The mean value theorem ensures that if there is a root of f in X_k, then it is also in X_{k+1}. Moreover, the hypothesis on F' ensures that X_{k+1} is at most half the size of X_k when m is the midpoint of Y, so this sequence converges towards [x*, x*], where x* is the root of f in X.
If F'(X) strictly contains 0, the use of extended interval division produces a union of two intervals for N(X); multiple roots are therefore automatically separated and bounded. A minimal sketch of the contraction follows.
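As a rough sketch (an illustration under simplifying assumptions, not a full interval library), the following applies the interval Newton operator to f(x) = x^2 - 2 on X = [1, 2], using the fact that F'([a, b]) = [2a, 2b] is a valid interval extension of f'(x) = 2x on positive intervals:

def interval_newton_sqrt2(lo=1.0, hi=2.0, steps=8):
    """Contract [lo, hi] around the root of f(x) = x^2 - 2 with the interval Newton operator."""
    f = lambda x: x * x - 2.0
    for _ in range(steps):
        m = 0.5 * (lo + hi)                    # midpoint of the current interval
        d_lo, d_hi = 2.0 * lo, 2.0 * hi        # F'([lo, hi]) = [2*lo, 2*hi], valid since lo > 0
        # N(Y) = m - f(m)/F'(Y); dividing by an interval with endpoints of like sign
        candidates = (m - f(m) / d_lo, m - f(m) / d_hi)
        n_lo, n_hi = min(candidates), max(candidates)
        lo, hi = max(lo, n_lo), min(hi, n_hi)  # X_{k+1} = N(X_k) intersected with X_k
        print(lo, hi)
    return lo, hi

interval_newton_sqrt2()  # brackets sqrt(2) = 1.41421356... ever more tightly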
Applications
Minimization and maximization problems
Newton's method can be used to find a minimum or maximum of a function f(x). The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:

x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}.
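A short sketch (an illustration, not from the article) minimizing the hypothetical function f(x) = x^4 - 3x^2 + 2, whose stationary points solve f'(x) = 4x^3 - 6x = 0:

f_prime = lambda x: 4 * x**3 - 6 * x  # derivative of f(x) = x^4 - 3x^2 + 2
f_second = lambda x: 12 * x**2 - 6    # second derivative

x = 1.0                               # initial guess near the positive minimum
for _ in range(10):
    x = x - f_prime(x) / f_second(x)  # Newton's method applied to f'
print(x)                              # approximately sqrt(3/2) = 1.2247...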
Multiplicative inverses of numbers and power series
An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number a, using only multiplication and subtraction, that is to say the number x such that 1/x = a. We can rephrase that as finding the zero of f(x) = 1/x - a. We have f'(x) = -1/x^2.
Newton's iteration is

x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n + \frac{\frac{1}{x_n} - a}{\frac{1}{x_n^2}} = x_n(2 - a x_n).

Therefore, Newton's iteration needs only two multiplications and one subtraction.
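A quick sketch (an illustration, not from the article), computing 1/7 without any division in the update:

a = 7.0
x = 0.1                     # initial guess; must satisfy 0 < x < 2/a for convergence
for _ in range(6):
    x = x * (2.0 - a * x)   # two multiplications and one subtraction per step
    print(x)
# converges quadratically to 1/7 = 0.142857142857...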
This method is also very efficient to compute the multiplicative inverse of a power series.
Solving transcendental equations
Many transcendental equations can be solved up to an arbitrary precision by using Newton's method. For example, inverting the cumulative distribution function of a probability distribution such as the normal distribution, in order to find the point corresponding to a known probability, generally involves integral functions with no known closed-form solution. However, the derivatives needed to solve such equations numerically with Newton's method are generally known, making numerical solutions possible. For an example, see the numerical solution to the inverse normal cumulative distribution.
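As a sketch (an illustration, not from the article), one can invert the standard normal CDF Φ with Newton's method, since Φ is expressible via the error function and its derivative is the normal density φ:

import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    """Standard normal density, the derivative of the CDF."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def inverse_norm_cdf(p, x0=0.0, steps=20):
    """Solve norm_cdf(x) = p by Newton's method."""
    x = x0
    for _ in range(steps):
        x = x - (norm_cdf(x) - p) / norm_pdf(x)
    return x

print(inverse_norm_cdf(0.975))  # approximately 1.959964 (the familiar 97.5% quantile)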
Numerical verification for solutions of nonlinear equations
A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates.
Code
The following is an example of a possible implementation of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime.
The initial guess will be x_0 = 1 and the function will be f(x) = x^2 - 2, so that f(x) = 0 exactly when x = \sqrt{2}. Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if f'(x_n) \approx 0, since otherwise a large amount of error could be introduced.
def f(x):
    return x**2 - 2  # f(x) = x^2 - 2

def f_prime(x):
    return 2*x  # f'(x) = 2x

def newtons_method(x0, f, f_prime, tolerance, epsilon, max_iterations):
    """Newton's method

    Args:
        x0:             The initial guess
        f:              The function whose root we are trying to find
        f_prime:        The derivative of the function
        tolerance:      Stop when iterations change by less than this
        epsilon:        Do not divide by a number smaller than this
        max_iterations: The maximum number of iterations to compute
    """
    for _ in range(max_iterations):
        y = f(x0)
        yprime = f_prime(x0)

        if abs(yprime) < epsilon:  # Give up if the denominator is too small
            break

        x1 = x0 - y / yprime  # Do Newton's computation

        if abs(x1 - x0) <= tolerance:  # Stop when the result is within the desired tolerance
            return x1  # x1 is a solution within tolerance and maximum number of iterations

        x0 = x1  # Update x0 to start the process again

    return None  # Newton's method did not converge
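A minimal usage sketch (the call below and its tolerances are illustrative, not part of the original listing):

root = newtons_method(x0=1.0, f=f, f_prime=f_prime,
                      tolerance=1e-10, epsilon=1e-14, max_iterations=50)
print(root)  # approximately 1.4142135623730951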
See also
* Aitken's delta-squared process
* Bisection method
* Euler method
* Fast inverse square root
* Fisher scoring
* Gradient descent
* Integer square root
* Kantorovich theorem
* Laguerre's method
* Methods of computing square roots
* Newton's method in optimization
* Richardson extrapolation
* Root-finding algorithm
* Secant method
* Steffensen's method
* Subgradient method
Notes
References
Further reading
* Kendall E. Atkinson: ''An Introduction to Numerical Analysis'', John Wiley & Sons Inc., (1989).
* Tjalling J. Ypma: "Historical development of the Newton–Raphson method", SIAM Review, vol. 37, no. 4, (1995), pp. 531–551.
* P. Deuflhard: ''Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms'', Springer Berlin (Series in Computational Mathematics, Vol. 35) (2004).
* C. T. Kelley: ''Solving Nonlinear Equations with Newton's Method'', SIAM (Fundamentals of Algorithms, 1) (2003).
* J. M. Ortega and W. C. Rheinboldt: ''Iterative Solution of Nonlinear Equations in Several Variables'', SIAM (Classics in Applied Mathematics) (2000).
External links
* Newton's method, Citizendium.
* Wu, X., ''Roots of Equations'', course notes: http://www.ece.mcmaster.ca/~xwu/part2.pdf