Description
The idea is to start with an initial guess, then to approximate the function by its tangent line, and finally to compute the x-intercept of this tangent line. This x-intercept will typically be a better approximation to the original function's root than the first guess, and the method can be iterated:

: <math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.</math>
The name "Newton's method" is derived from Isaac Newton's description of a special case of the method in '' De analysi per aequationes numero terminorum infinitas'' (written in 1669, published in 1711 by William Jones) and in ''De metodis fluxionum et serierum infinitarum'' (written in 1671, translated and published as '' Method of Fluxions'' in 1736 by John Colson). However, his method differs substantially from the modern method given above. Newton applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections. He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producing Taylor series in the latter case. Newton may have derived his method from a similar but less precise method by Vieta. The essence of Vieta's method can be found in the work of the Persian mathematician Sharaf al-Din al-Tusi, while his successor Jamshīd al-Kāshī used a form of Newton's method to solve to find roots of (Ypma 1995). A special case of Newton's method for calculating square roots was known since ancient times and is often called the Babylonian method. Newton's method was used by 17th-century Japanese mathematicianPractical considerations
Practical considerations

Newton's method is a powerful technique: in general the convergence is quadratic, meaning that as the method converges on the root, the difference between the root and the approximation is squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method.

Difficulty in calculating the derivative of a function
Newton's method requires that the derivative can be calculated directly. An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation results in something like the secant method, whose convergence is slower than that of Newton's method.
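As an illustration, such a derivative-free variant can be sketched as follows (a minimal sketch; the step size h, the tolerances, and the example function are choices made here, not part of the original text):

```python
def newton_finite_difference(f, x0, h=1e-6, tol=1e-10, max_iter=50):
    """Newton-style iteration with the derivative replaced by a
    finite-difference slope through two nearby points."""
    x = x0
    for _ in range(max_iter):
        slope = (f(x + h) - f(x - h)) / (2 * h)  # central-difference estimate of f'(x)
        x_new = x - f(x) / slope
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: a root of x^2 - 2 starting from x0 = 1
print(newton_finite_difference(lambda x: x * x - 2, 1.0))  # ~1.41421356
```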
Failure of the method to converge to the root

It is important to review the proof of quadratic convergence of Newton's method before implementing it, and specifically to review the assumptions made in that proof. When the method fails to converge, it is because those assumptions are not met.
Overshoot

If the first derivative is not well behaved in the neighborhood of a particular root, the method may overshoot and diverge from that root. An example of a function with one root, for which the derivative is not well behaved in the neighborhood of the root, is

: <math>f(x) = |x|^a, \qquad 0 < a < \tfrac{1}{2},</math>

for which the root will be overshot and the sequence of <math>x_n</math> will diverge. For <math>a = \tfrac{1}{2}</math>, the root will still be overshot, but the sequence will oscillate between two values. For <math>\tfrac{1}{2} < a < 1</math>, the root will still be overshot but the sequence will converge, and for <math>a \ge 1</math> the root will not be overshot at all. In some cases, Newton's method can be stabilized by using successive over-relaxation, or the speed of convergence can be increased by using the same method.
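A quick numerical sketch (illustrative, not from the original text) makes the three regimes visible; it uses the fact that for <math>f(x) = |x|^a</math> the Newton update simplifies to <math>x_{n+1} = x_n (1 - 1/a)</math>:

```python
def newton_abs_power(a, x0=1.0, steps=6):
    """Iterate Newton's method on f(x) = |x|^a, whose derivative is
    a*|x|^(a-1)*sign(x); the exact root is x = 0."""
    x = x0
    history = [x]
    for _ in range(steps):
        # f(x)/f'(x) = x/a, so the Newton step is x - x/a = x*(1 - 1/a)
        x = x * (1 - 1 / a)
        history.append(x)
    return history

print(newton_abs_power(1/3))  # |x_n| doubles each step: divergence (0 < a < 1/2)
print(newton_abs_power(1/2))  # oscillates between x0 and -x0 (a = 1/2)
print(newton_abs_power(2/3))  # overshoots but converges (1/2 < a < 1)
```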
Stationary point

If a stationary point of the function is encountered, the derivative is zero and the method will terminate due to division by zero.
Poor initial estimate

A large error in the initial estimate can contribute to non-convergence of the algorithm. To overcome this problem one can often linearize the function that is being optimized using calculus, logs, differentials, or even evolutionary algorithms such as stochastic tunneling. Good initial estimates lie close to the final globally optimal parameter estimate. In nonlinear regression, the sum of squared errors (SSE) is only "close to" parabolic in the region of the final parameter estimates; initial estimates found there allow the Newton–Raphson method to converge quickly. It is only here that the Hessian matrix of the SSE is positive definite and the first derivative of the SSE is close to zero.
Mitigation of non-convergence

In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root-finding method.
Slow convergence for roots of multiplicity greater than 1

If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together, it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicity <math>m</math> of the root is known, the following modified algorithm preserves the quadratic convergence rate:

: <math>x_{n+1} = x_n - m \frac{f(x_n)}{f'(x_n)}.</math>

This is equivalent to using successive over-relaxation. On the other hand, if the multiplicity of the root is not known, it is possible to estimate <math>m</math> after carrying out one or two iterations, and then use that value to increase the rate of convergence. If the multiplicity <math>m</math> of the root is finite then <math>g(x) = f(x)/f'(x)</math> will have a root at the same location with multiplicity 1. Applying Newton's method to find the root of <math>g(x)</math> recovers quadratic convergence in many cases, although it generally involves the second derivative of <math>f</math>. In a particularly simple case, if <math>f(x) = x^m</math> then <math>g(x) = x/m</math> and Newton's method finds the root in a single iteration with

: <math>x_{n+1} = x_n - \frac{g(x_n)}{g'(x_n)} = x_n - \frac{x_n/m}{1/m} = 0.</math>
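As a hedged illustration (the double root of <math>f(x) = (x-1)^2</math> and all names below are chosen for this sketch, not taken from the original text):

```python
def newton(f, fprime, x0, steps):
    """Plain Newton iteration."""
    x = x0
    for _ in range(steps):
        x -= f(x) / fprime(x)
    return x

def modified_newton(f, fprime, m, x0, steps):
    """Newton iteration with the step multiplied by the known multiplicity m."""
    x = x0
    for _ in range(steps):
        x -= m * f(x) / fprime(x)
    return x

# f(x) = (x - 1)^2 has a double root at x = 1, so m = 2.
f = lambda x: (x - 1) ** 2
fp = lambda x: 2 * (x - 1)
print(newton(f, fp, 2.0, 10))             # error is merely halved each step (linear)
print(modified_newton(f, fp, 2, 2.0, 1))  # lands exactly on the root in one step here
```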
Analysis

Suppose that the function <math>f</math> has a zero at <math>\alpha</math>, i.e., <math>f(\alpha) = 0</math>, and <math>f</math> is differentiable in a neighborhood of <math>\alpha</math>.
Proof of quadratic convergence for Newton's iterative method

According to Taylor's theorem, any function <math>f(x)</math> which has a continuous second derivative can be represented by an expansion about a point that is close to a root of <math>f(x)</math>.
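The full proof is not reproduced here, but the error recurrence at its core can be sketched (a standard argument, under the assumptions that <math>f''</math> is continuous and <math>f'(\alpha) \neq 0</math>). Expanding <math>f</math> about the iterate <math>x_n</math> and evaluating at the root <math>\alpha</math> gives

: <math>0 = f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + \tfrac{1}{2} f''(\xi_n)(\alpha - x_n)^2</math>

for some <math>\xi_n</math> between <math>x_n</math> and <math>\alpha</math>. Dividing by <math>f'(x_n)</math> and using <math>x_{n+1} = x_n - f(x_n)/f'(x_n)</math> yields

: <math>\alpha - x_{n+1} = \frac{-f''(\xi_n)}{2 f'(x_n)} (\alpha - x_n)^2,</math>

so the new error is proportional to the square of the old one, which is the quadratic convergence asserted above.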
Basins of attraction

The disjoint subsets of the basins of attraction—the regions of the real number line such that within each region iteration from any point leads to one particular root—can be infinite in number and arbitrarily small. For example, for the function <math>f(x) = x^3 - 2x^2 - 11x + 12 = (x - 4)(x - 1)(x + 3)</math>, initial conditions that are arbitrarily close together can lie in different basins of attraction and hence converge to different roots.
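The effect can be explored numerically. The following sketch (an illustration; the starting points are chosen near where the basins interleave, not taken from the original example's elided values) reports the root reached from closely spaced initial values:

```python
def newton_root(f, fprime, x, steps=200):
    for _ in range(steps):
        d = fprime(x)
        if abs(d) < 1e-14:
            return None          # stationary point reached: give up
        x -= f(x) / d
    return x

f  = lambda x: x**3 - 2*x**2 - 11*x + 12   # = (x - 4)(x - 1)(x + 3)
fp = lambda x: 3*x**2 - 4*x - 11

# Sample closely spaced starting points and record which root each reaches:
for i in range(6):
    x0 = 2.35 + i * 1e-4
    r = newton_root(f, fp, x0)
    print(f"x0 = {x0:.4f}  ->  root ~ {r:+.6f}")
```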
Failure analysis

Newton's method is only guaranteed to converge if certain conditions are satisfied. If the assumptions made in the proof of quadratic convergence are met, the method will converge. For the following subsections, failure of the method to converge indicates that the assumptions made in the proof were not met.
Bad starting points

In some cases the conditions on the function that are necessary for convergence are satisfied, but the point chosen as the initial point is not in the interval where the method converges. This can happen, for example, if the function whose root is sought approaches zero asymptotically as <math>x</math> goes to <math>\infty</math> or <math>-\infty</math>. In such cases a different method, such as bisection, should be used to obtain a better estimate for the zero to use as an initial point.
Iteration point is stationary

Consider the function

: <math>f(x) = 1 - x^2.</math>

It has a maximum at <math>x = 0</math> and solutions of <math>f(x) = 0</math> at <math>x = \pm 1</math>. If we start iterating from the stationary point <math>x_0 = 0</math> (where the derivative is zero), <math>x_1</math> will be undefined, since the tangent at <math>(0, 1)</math> is parallel to the <math>x</math>-axis:

: <math>x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 0 - \frac{1}{0}.</math>

The same issue occurs if, instead of the starting point, any iteration point is stationary. Even if the derivative is small but not zero, the next iteration will be a far worse approximation.
Starting point enters a cycle

For some functions, some starting points may enter an infinite cycle, preventing convergence. Let

: <math>f(x) = x^3 - 2x + 2</math>

and take 0 as the starting point. The first iteration produces 1 and the second iteration returns to 0, so the sequence will alternate between the two without converging to a root. In fact, this 2-cycle is stable: there are neighborhoods around 0 and around 1 from which all points iterate asymptotically to the 2-cycle (and hence not to the root of the function). In general, the behavior of the sequence can be very complex (see Newton fractal). The real solution of this equation is approximately −1.76929….
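A few lines of Python (an illustrative sketch) reproduce the cycle:

```python
f  = lambda x: x**3 - 2*x + 2
fp = lambda x: 3*x**2 - 2

x = 0.0
for n in range(6):
    print(n, x)          # prints 0, 1, 0, 1, ...
    x -= f(x) / fp(x)    # Newton step: 0 -> 1 -> 0 -> 1 -> ...
```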
Derivative issues

If the function is not continuously differentiable in a neighborhood of the root, then it is possible that Newton's method will always diverge and fail, unless the solution is guessed on the first try.
Derivative does not exist at root

A simple example of a function where Newton's method diverges is trying to find the cube root of zero. The cube root is continuous and infinitely differentiable, except for <math>x = 0</math>, where its derivative is undefined:

: <math>f(x) = \sqrt[3]{x}.</math>

For any iteration point <math>x_n</math>, the next iteration point will be:

: <math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n - \frac{x_n^{1/3}}{\tfrac{1}{3} x_n^{-2/3}} = x_n - 3x_n = -2x_n.</math>

The algorithm overshoots the solution and lands on the other side of the <math>y</math>-axis, farther away than it initially was; applying Newton's method actually doubles the distance from the solution at each iteration. In fact, the iterations diverge to infinity for every <math>f(x) = |x|^a</math>, where <math>0 < a < \tfrac{1}{2}</math>. In the limiting case of <math>a = \tfrac{1}{2}</math> (square root), the iterations will alternate indefinitely between the points <math>x_0</math> and <math>-x_0</math>, so they do not converge in this case either.
Discontinuous derivative

If the derivative is not continuous at the root, then convergence may fail to occur in any neighborhood of the root. Consider the function

: <math>f(x) = \begin{cases} 0 & \text{if } x = 0, \\ x + x^2 \sin\dfrac{2}{x} & \text{if } x \neq 0. \end{cases}</math>

Its derivative is:

: <math>f'(x) = \begin{cases} 1 & \text{if } x = 0, \\ 1 + 2x \sin\dfrac{2}{x} - 2\cos\dfrac{2}{x} & \text{if } x \neq 0. \end{cases}</math>

Within any neighborhood of the root, this derivative keeps changing sign as <math>x</math> approaches 0 from the right (or from the left) while <math>f(x) \ge x - x^2 > 0</math> for <math>0 < x < 1</math>. So <math>f(x)/f'(x)</math> is unbounded near the root, and Newton's method will diverge almost everywhere in any neighborhood of it, even though:

* the function is differentiable (and thus continuous) everywhere;
* the derivative at the root is nonzero;
* <math>f</math> is infinitely differentiable except at the root; and
* the derivative is bounded in a neighborhood of the root (unlike <math>f(x)/f'(x)</math>).
Non-quadratic convergence

In some cases the iterates converge but do not converge as quickly as promised. In such cases simpler methods converge just as quickly as Newton's method.
Zero derivative

If the first derivative is zero at the root, then convergence will not be quadratic. Let

: <math>f(x) = x^2.</math>

Then <math>f'(x) = 2x</math> and consequently

: <math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = \frac{x_n}{2}.</math>

So convergence is not quadratic, even though the function is infinitely differentiable everywhere. Similar problems occur even when the root is only "nearly" double. For example, let

: <math>f(x) = x^2 (x - 1000) + 1.</math>

Then the first few iterations starting at <math>x_0 = 1</math> are

: <math>x_0 = 1</math>
: <math>x_1 = 0.500250376\ldots</math>
: <math>x_2 = 0.251062828\ldots</math>
: <math>x_3 = 0.127507934\ldots</math>
: <math>x_4 = 0.067671976\ldots</math>
: <math>x_5 = 0.041224176\ldots</math>
: <math>x_6 = 0.032741218\ldots</math>
: <math>x_7 = 0.031642362\ldots</math>

It takes six iterations to reach a point where the convergence appears to be quadratic.
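The linear rate for <math>f(x) = x^2</math> is easy to observe numerically (illustrative sketch):

```python
# f(x) = x^2 has a double root at 0; Newton's update is x/2,
# so the error is only halved at each step (linear convergence).
x = 1.0
for n in range(8):
    print(n, x)
    x -= x**2 / (2 * x)   # simplifies to x / 2
```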
No second derivative

If there is no second derivative at the root, then convergence may fail to be quadratic. Let

: <math>f(x) = x + x^{4/3}.</math>

Then

: <math>f'(x) = 1 + \tfrac{4}{3} x^{1/3},</math>

and

: <math>f''(x) = \tfrac{4}{9} x^{-2/3},</math>

except when <math>x = 0</math>, where it is undefined. Given <math>x_n</math>,

: <math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = \frac{\tfrac{1}{3} x_n^{4/3}}{1 + \tfrac{4}{3} x_n^{1/3}},</math>

which has approximately 4/3 times as many bits of precision as <math>x_n</math> has. This is less than the 2 times as many which would be required for quadratic convergence. So the convergence of Newton's method (in this case) is not quadratic, even though: the function is continuously differentiable everywhere; the derivative is not zero at the root; and <math>f</math> is infinitely differentiable except at the desired root.
Generalizations

Complex functions
When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped graphically. For many complex functions, the boundaries of the basins of attraction are fractals.
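As a rough illustration (not from the original article), the basins for <math>f(z) = z^3 - 1</math> can be rendered as ASCII art by recording which cube root of unity each grid point converges to:

```python
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1

def basin(z, steps=40):
    """Return the index of the root Newton's method reaches from z."""
    for _ in range(steps):
        if abs(z) < 1e-12:
            return -1                      # derivative ~ 0: give up
        z -= (z**3 - 1) / (3 * z**2)       # Newton step for f(z) = z^3 - 1
    return min(range(3), key=lambda k: abs(z - roots[k]))

# Coarse grid over [-2, 2] x [-1.5, 1.5]; one character per basin.
for i in range(24):
    y = 1.5 - i * 0.125
    row = ""
    for j in range(48):
        x = -2.0 + j * 0.0833
        row += ".ox#"[1 + basin(complex(x, y))]
    print(row)
```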
Chebyshev's third-order method

Nash–Moser iteration
Systems of equations
''k'' variables, ''k'' functions
One may also use Newton's method to solve systems of <math>k</math> equations, which amounts to finding the (simultaneous) zeroes of <math>k</math> continuously differentiable functions <math>f_1, \ldots, f_k</math>. This is equivalent to finding the zeroes of a single vector-valued function <math>F(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_k(\mathbf{x}))</math>. In the formulation given above, the scalars <math>x_n</math> are replaced by vectors <math>\mathbf{x}_n</math>, and instead of dividing the function <math>f(x_n)</math> by its derivative <math>f'(x_n)</math>, one instead has to left multiply the function <math>F(\mathbf{x}_n)</math> by the inverse of its <math>k \times k</math> Jacobian matrix <math>J_F(\mathbf{x}_n)</math>, giving the iteration

: <math>\mathbf{x}_{n+1} = \mathbf{x}_n - J_F(\mathbf{x}_n)^{-1} F(\mathbf{x}_n).</math>
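A minimal sketch in Python with NumPy (the 2×2 system below, a circle intersected with a line, is an arbitrary example chosen here, not from the original text):

```python
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 1.0,   # unit circle
                     x - y])              # diagonal line

def J(v):
    x, y = v
    return np.array([[2*x, 2*y],
                     [1.0, -1.0]])

v = np.array([1.0, 0.0])                  # initial guess
for _ in range(10):
    # Solving J s = F is equivalent to multiplying F by the inverse Jacobian.
    v = v - np.linalg.solve(J(v), F(v))   # x_{n+1} = x_n - J^{-1} F(x_n)
print(v)                                  # ~ [0.7071, 0.7071]
```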
''k'' variables, ''m'' equations, with ''m'' > ''k''

The <math>k</math>-dimensional variant of Newton's method can be used to solve systems of more than <math>k</math> (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix <math>J^+ = (J^\mathsf{T} J)^{-1} J^\mathsf{T}</math> instead of the inverse of <math>J</math>. If the nonlinear system has no solution, the method attempts to find a solution in the nonlinear least squares sense (see the Gauss–Newton algorithm).
In a Banach space

Another generalization is Newton's method to find a root of a functional <math>F</math> defined in a Banach space. In this case the formulation is

: <math>X_{n+1} = X_n - \bigl(F'(X_n)\bigr)^{-1} F(X_n),</math>

where <math>F'(X_n)</math> is the Fréchet derivative computed at <math>X_n</math>. One needs the Fréchet derivative to be boundedly invertible at each <math>X_n</math> in order for the method to be applicable.
Over ''p''-adic numbers

In <math>p</math>-adic analysis, the standard method to show a polynomial equation in one variable has a <math>p</math>-adic root is Hensel's lemma, which uses the recursion from Newton's method on the <math>p</math>-adic numbers. Because of the more stable behavior of addition and multiplication in the <math>p</math>-adic numbers compared to the real numbers (specifically, the unit ball in the <math>p</math>-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line.
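A small sketch of the idea (illustrative, not from the original text): lifting the root 3 of <math>x^2 - 2 \equiv 0 \pmod 7</math> to higher powers of 7 with the Newton recursion, using modular inverses in place of division (Python 3.8+ for `pow(..., -1, mod)`):

```python
# Hensel lifting: Newton's recursion applied mod increasing powers of p.
p, x, k = 7, 3, 1          # 3^2 = 9 = 2 (mod 7) is the starting root
for _ in range(3):
    k *= 2
    mod = p**k
    # Newton step mod p^k: x - f(x) * f'(x)^{-1}, with f(x) = x^2 - 2
    x = (x - (x*x - 2) * pow(2*x, -1, mod)) % mod
    print(k, x, (x*x - 2) % mod == 0)   # the root is exact mod p^k
```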
Newton–Fourier method

The Newton–Fourier method is Joseph Fourier's extension of Newton's method to provide bounds on the absolute error of the approximation, while still providing quadratic convergence.
Quasi-Newton methods

When the Jacobian is unavailable or too expensive to compute at every iteration, a quasi-Newton method can be used.
''q''-analog

Newton's method can be generalized with the <math>q</math>-analog of the usual derivative.
Modified Newton methods

Maehly's procedure
A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to a solution that has already been found. When we have already found <math>N</math> solutions <math>x_1, \ldots, x_N</math> of <math>f(x) = 0</math>, the next root can be found by applying Newton's method to the equation

: <math>F(x) = \frac{f(x)}{\prod_{i=1}^{N} (x - x_i)} = 0.</math>

This method is applied to obtain zeros of the Bessel function of the second kind.
Hirano's modified Newton method

Hirano's modified Newton method is a modification that preserves the convergence of Newton's method while avoiding instability. It was developed to solve complex polynomials.
Interval Newton's method

Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of an insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may change dramatically the value of the function; see Wilkinson's polynomial). Consider <math>f \in \mathcal{C}^1(X)</math>, where <math>X</math> is a real interval, and suppose that we have an interval extension <math>F'</math> of <math>f'</math>, meaning that <math>F'</math> takes as input an interval <math>Y \subseteq X</math> and outputs an interval <math>F'(Y)</math> such that:

: <math>F'([y, y]) = \{ f'(y) \}, \qquad F'(Y) \supseteq \{ f'(y) \mid y \in Y \}.</math>

We also assume that <math>0 \notin F'(X)</math>, so in particular <math>f</math> has at most one root in <math>X</math>. We then define the interval Newton operator by:

: <math>N(Y) = m - \frac{f(m)}{F'(Y)} = \left\{ m - \frac{f(m)}{z} \;\middle|\; z \in F'(Y) \right\},</math>

where <math>m \in Y</math>. Note that the hypothesis on <math>F'</math> implies that <math>N(Y)</math> is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence:

: <math>X_0 = X, \qquad X_{k+1} = N(X_k) \cap X_k.</math>

The mean value theorem ensures that if there is a root of <math>f</math> in <math>X_k</math>, then it is also in <math>X_{k+1}</math>, so the sequence of intervals encloses the root at every step.
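A toy sketch of one such iteration (illustrative assumptions: <math>f(x) = x^2 - 2</math> on <math>X = [1, 2]</math>, with the natural interval extension <math>F'([a, b]) = [2a, 2b]</math>, which never contains 0 on this domain; intervals are modeled as plain pairs):

```python
def interval_newton_step(lo, hi):
    """One interval Newton step for f(x) = x^2 - 2 on [lo, hi]."""
    m = (lo + hi) / 2
    fm = m * m - 2                      # f(m)
    dlo, dhi = 2 * lo, 2 * hi           # F'([lo, hi]) = [2*lo, 2*hi], positive here
    # m - f(m)/F'(Y): quotient of a point by a positive interval
    q = sorted([fm / dlo, fm / dhi])
    nlo, nhi = m - q[1], m - q[0]
    # Intersect N(Y) with Y:
    return max(lo, nlo), min(hi, nhi)

lo, hi = 1.0, 2.0
for _ in range(5):
    lo, hi = interval_newton_step(lo, hi)
    print(lo, hi)                       # the interval shrinks around sqrt(2)
```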
Applications

Minimization and maximization problems
Newton's method can be used to find a minimum or maximum of a function <math>f(x)</math>. The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:

: <math>x_{n+1} = x_n - \frac{f'(x_n)}{f''(x_n)}.</math>
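For instance (an illustrative function chosen here, not from the original text):

```python
# Minimize f(x) = x^4 - 3x^2 + 2 by running Newton's method on f'(x).
fp  = lambda x: 4*x**3 - 6*x    # f'(x)
fpp = lambda x: 12*x**2 - 6     # f''(x)

x = 1.0
for _ in range(20):
    x -= fp(x) / fpp(x)         # x_{n+1} = x_n - f'(x_n)/f''(x_n)
print(x)                        # ~ sqrt(3/2) ~ 1.2247, a local minimum
```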
Multiplicative inverses of numbers and power series

An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number <math>a</math>, using only multiplication and subtraction, that is to say the number <math>x</math> such that <math>\tfrac{1}{x} = a</math>. We can rephrase that as finding the zero of <math>f(x) = \tfrac{1}{x} - a</math>. We have <math>f'(x) = -\tfrac{1}{x^2}</math>. Newton's iteration is

: <math>x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} = x_n + x_n^2 \left( \frac{1}{x_n} - a \right) = x_n (2 - a x_n).</math>

Therefore, Newton's iteration needs only two multiplications and one subtraction. This method is also very efficient to compute the multiplicative inverse of a power series.
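A short sketch (with <math>a = 7</math> and a starting guess chosen inside the convergence region <math>0 < x_0 < 2/a</math>; both values are illustrative):

```python
# x_{n+1} = x_n * (2 - a * x_n) computes 1/a using only
# multiplication and subtraction.
a, x = 7.0, 0.1
for n in range(6):
    print(n, x)
    x = x * (2 - a * x)
print("1/7 =", 1 / 7)
```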
Solving transcendental equations

Many transcendental equations can be solved using Newton's method. Given the equation

: <math>g(x) = h(x),</math>

with <math>g(x)</math> and/or <math>h(x)</math> a transcendental function, one writes

: <math>f(x) = g(x) - h(x).</math>

The values of <math>x</math> that solve the original equation are then the roots of <math>f(x)</math>, which may be found via Newton's method.
Obtaining zeros of special functions

Newton's method is applied to the ratio of Bessel functions in order to obtain its root.
Numerical verification for solutions of nonlinear equations

A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates.
Examples

Square root
Consider the problem of finding the square root of a number <math>a</math>, that is to say the positive number <math>x</math> such that <math>x^2 = a</math>. Newton's method is one of many methods of computing square roots. We can rephrase that as finding the zero of <math>f(x) = x^2 - a</math>. We have <math>f'(x) = 2x</math>. For example, for finding the square root of 612 with an initial guess <math>x_0 = 10</math>, the sequence given by Newton's method is:

: <math>x_1 = 35.6</math>
: <math>x_2 = 26.3955056\ldots</math>
: <math>x_3 = 24.7906355\ldots</math>
: <math>x_4 = 24.7386883\ldots</math>
: <math>x_5 = 24.7386338\ldots</math>

where <math>\sqrt{612} = 24.73863375\ldots</math>. With only a few iterations one can obtain a solution accurate to many decimal places. Rearranging the formula as follows yields the Babylonian method of finding square roots:

: <math>x_{n+1} = \frac{1}{2} \left( x_n + \frac{a}{x_n} \right),</math>

i.e. the arithmetic mean of the guess, <math>x_n</math>, and <math>a / x_n</math>.
Solution of cos(x) = x³

Consider the problem of finding the positive number <math>x</math> with <math>\cos x = x^3</math>. We can rephrase that as finding the zero of <math>f(x) = \cos x - x^3</math>. We have <math>f'(x) = -\sin x - 3x^2</math>. Since <math>\cos x \le 1</math> for all <math>x</math> and <math>x^3 > 1</math> for <math>x > 1</math>, we know that our solution lies between 0 and 1. For example, with an initial guess <math>x_0 = 0.5</math>, the sequence given by Newton's method is (note that a starting value of 0 will lead to an undefined result, showing the importance of using a starting point that is close to the solution):

: <math>x_1 = 1.112141637097\ldots</math>
: <math>x_2 = 0.909672693736\ldots</math>
: <math>x_3 = 0.867263818209\ldots</math>
: <math>x_4 = 0.865477135298\ldots</math>
: <math>x_5 = 0.865474033111\ldots</math>
: <math>x_6 = 0.865474033102\ldots</math>

In particular, <math>x_6</math> is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for <math>x_3</math>) to 5 and 10, illustrating the quadratic convergence.
Code

The following is an implementation example of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime. The initial guess will be <math>x_0 = 1</math> and the function will be <math>f(x) = x^2 - 2</math> so that <math>f'(x) = 2x</math>. Each new iteration of Newton's method will be denoted by x1. During the computation we check whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if <math>f'(x_n) \approx 0</math>, since otherwise a large amount of error could be introduced.
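A sketch consistent with that description (the original article's exact listing is not reproduced here; the tolerance values in the example call are illustrative):

```python
def newtons_method(x0, f, f_prime, tolerance, epsilon, max_iterations):
    """Newton's method

    Args:
        x0:              The initial guess
        f:               The function whose root we are trying to find
        f_prime:         The derivative of the function
        tolerance:       Stop when iterations change by less than this
        epsilon:         Do not divide by a number smaller than this
        max_iterations:  The maximum number of iterations to compute
    """
    for _ in range(max_iterations):
        y = f(x0)
        yprime = f_prime(x0)

        if abs(yprime) < epsilon:      # Give up if the denominator is too small
            break

        x1 = x0 - y / yprime           # Do Newton's computation

        if abs(x1 - x0) <= tolerance:  # Stop when within the desired tolerance
            return x1                  # x1 is a solution within tolerance

        x0 = x1                        # Update x0 to start the process again

    return None                        # Newton's method did not converge


# Example use with the function and guess described above:
f = lambda x: x**2 - 2
f_prime = lambda x: 2 * x
print(newtons_method(1.0, f, f_prime, 1e-10, 1e-14, 50))  # ~1.41421356
```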
See also

* Aitken's delta-squared process
* Bisection method
* Euler method
* Fast inverse square root
* Fisher scoring
* Gradient descent
* Integer square root
* Kantorovich theorem
* Laguerre's method
* Methods of computing square roots
* Newton's method in optimization
* Richardson extrapolation
* Root-finding algorithm
* Secant method
Further reading
* Kendall E. Atkinson, ''An Introduction to Numerical Analysis'', John Wiley & Sons, 1989.
* Tjalling J. Ypma, "Historical development of the Newton–Raphson method", ''SIAM Review'' 37 (4), 531–551, 1995.
* P. Deuflhard, ''Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms'', Springer Series in Computational Mathematics, Vol. 35, Springer, Berlin, 2004.
* C. T. Kelley, ''Solving Nonlinear Equations with Newton's Method'', no. 1 in Fundamentals of Algorithms, SIAM, 2003.
* J. M. Ortega, W. C. Rheinboldt, ''Iterative Solution of Nonlinear Equations in Several Variables'', Classics in Applied Mathematics, SIAM, 2000.