
In mathematics and computing, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to the real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound). Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) - g(x). Thus root-finding algorithms allow solving any equation defined by continuous functions. However, most root-finding algorithms do not guarantee that they will find all the roots; in particular, if such an algorithm does not find any root, that does not mean that no root exists.

Most numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards the root as a limit. They require one or more ''initial guesses'' of the root as starting values; then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points and for converging rapidly to these fixed points.

The behaviour of general root-finding algorithms is studied in numerical analysis. However, for polynomials, root-finding study belongs generally to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency of an algorithm may depend dramatically on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties for certifying that no root is missed, and for locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root so located.


Bracketing methods

Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, a root has effectively been found. They generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the endpoints of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at its endpoints. However, in the case of polynomials there are other methods (Descartes' rule of signs, Budan's theorem and Sturm's theorem) for getting information on the number of roots in an interval. They lead to efficient algorithms for real-root isolation of polynomials, which ensure finding all real roots with a guaranteed accuracy.


Bisection method

The simplest root-finding algorithm is the bisection method. Let f be a continuous function for which one knows an interval [a, b] such that f(a) and f(b) have opposite signs (a bracket). Let c = (a + b)/2 be the middle of the interval (the midpoint, or the point that bisects the interval). Then either f(a) and f(c), or f(c) and f(b) have opposite signs, and one has divided the size of the interval by two. Although the bisection method is robust, it gains one and only one bit of accuracy with each iteration. Other methods, under appropriate conditions, can gain accuracy faster.
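A minimal Python sketch of the method may help; the test function, interval and tolerance below are illustrative choices, not part of the algorithm itself:

    def bisect(f, a, b, tol=1e-12):
        """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while b - a > tol:
            c = (a + b) / 2            # midpoint: bisect the interval
            fc = f(c)
            if fc == 0:
                return c
            if fa * fc < 0:            # root lies in [a, c]
                b, fb = c, fc
            else:                      # root lies in [c, b]
                a, fa = c, fc
        return (a + b) / 2

    # Example: the positive root of x^2 - 2, i.e. sqrt(2)
    print(bisect(lambda x: x * x - 2, 0.0, 2.0))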


False position (''regula falsi'')

The false position method, also called the ''regula falsi'' method, is similar to the bisection method, but instead of using the middle of the interval it uses the x-intercept of the line that connects the plotted function values at the endpoints of the interval, that is,
: c = \frac{a f(b) - b f(a)}{f(b) - f(a)}.
False position is similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method; however, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for f(c); typically, this may occur if the rate of variation of f is large in the neighborhood of the root.
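A short Python sketch of a naive ''regula falsi'' loop (tolerances and the stopping rule are illustrative; robust variants modify the retained endpoint to guarantee convergence):

    def false_position(f, a, b, tol=1e-12, max_iter=100):
        """Root of f in [a, b] by regula falsi; f(a) and f(b) must differ in sign."""
        fa, fb = f(a), f(b)
        for _ in range(max_iter):
            # x-intercept of the secant line through (a, f(a)) and (b, f(b))
            c = (a * fb - b * fa) / (fb - fa)
            fc = f(c)
            if abs(fc) < tol:
                return c
            if fa * fc < 0:      # keep the bracket: root is in [a, c]
                b, fb = c, fc
            else:                # root is in [c, b]
                a, fa = c, fc
        return c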


ITP method

The ITP method is the only known method to bracket the root with the same worst-case guarantees as the bisection method while also guaranteeing superlinear convergence to the root of smooth functions, as the secant method does. It is also the only known method guaranteed to outperform the bisection method on average for any continuous distribution on the location of the root. It does so by keeping track of both the bracketing interval and the minmax interval, within which any point converges as fast as with the bisection method. The construction of the queried point c follows three steps: interpolation (similar to the regula falsi), truncation (adjusting the regula falsi point, as in some improved variants of regula falsi) and then projection onto the minmax interval. The combination of these steps produces a simultaneously minmax-optimal method with guarantees similar to those of interpolation-based methods for smooth functions; in practice it outperforms both the bisection method and interpolation-based methods on both smooth and non-smooth functions.
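A Python sketch of the three steps, following the published algorithm; the parameter defaults (kappa1, kappa2, n0) are illustrative and not tuned:

    import math

    def itp(f, a, b, eps=1e-10, kappa1=0.1, kappa2=2.0, n0=1):
        """ITP root finding on [a, b]; assumes f(a) < 0 < f(b)."""
        ya, yb = f(a), f(b)
        if not (ya < 0 < yb):
            raise ValueError("requires f(a) < 0 < f(b)")
        n_half = math.ceil(math.log2((b - a) / (2 * eps)))
        n_max = n_half + n0
        j = 0
        while b - a > 2 * eps:
            x_half = (a + b) / 2
            r = eps * 2 ** (n_max - j) - (b - a) / 2          # minmax radius
            delta = kappa1 * (b - a) ** kappa2
            x_f = (yb * a - ya * b) / (yb - ya)               # interpolation (regula falsi)
            sigma = 1 if x_half >= x_f else -1
            # truncation: pull the interpolated point toward the midpoint
            x_t = x_f + sigma * delta if delta <= abs(x_half - x_f) else x_half
            # projection onto the minmax interval around the midpoint
            x_itp = x_t if abs(x_t - x_half) <= r else x_half - sigma * r
            y_itp = f(x_itp)
            if y_itp > 0:
                b, yb = x_itp, y_itp
            elif y_itp < 0:
                a, ya = x_itp, y_itp
            else:
                return x_itp
            j += 1
        return (a + b) / 2

    print(itp(lambda x: x * x - 2, 0.0, 2.0))   # sqrt(2)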


Interpolation

Many root-finding processes work by interpolation. This consists in using the last computed approximate values of the root for approximating the function by a polynomial of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated.

Two values allow interpolating a function by a polynomial of degree one (that is, approximating the graph of the function by a line). This is the basis of the secant method. Three values define a quadratic function, which approximates the graph of the function by a parabola. This is Muller's method. ''Regula falsi'' is also an interpolation method, which differs from the secant method by using, for interpolating by a line, two points that are not necessarily the last two computed points.
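A sketch of Muller's method in Python; the starting points and tolerance are illustrative, and cmath.sqrt is used so the iteration can leave the real axis and find complex roots:

    import cmath

    def muller(f, x0, x1, x2, tol=1e-12, max_iter=50):
        """Muller's method: fit a parabola through the last three points."""
        for _ in range(max_iter):
            f0, f1, f2 = f(x0), f(x1), f(x2)
            h1, h2 = x1 - x0, x2 - x1
            d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
            a = (d2 - d1) / (h2 + h1)          # curvature of the interpolating parabola
            b = d2 + h2 * a
            disc = cmath.sqrt(b * b - 4 * f2 * a)
            # pick the denominator of larger magnitude for numerical stability
            den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
            x3 = x2 - 2 * f2 / den             # root of the parabola nearest x2
            if abs(x3 - x2) < tol:
                return x3
            x0, x1, x2 = x1, x2, x3
        return x2

    # Example: a complex root of x^2 + 1 from real starting points
    print(muller(lambda x: x * x + 1, 0.5, 1.0, 1.5))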


Iterative methods

Although all root-finding algorithms proceed by iteration, an iterative root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root for getting a new approximation. The iteration stops when a fixed point (up to the desired precision) of the auxiliary function is reached, that is, when the newly computed value is sufficiently close to the preceding ones.


Newton's method (and similar derivative-based methods)

Newton's method assumes the function f to have a continuous derivative. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method, and its convergence is usually quadratic. Newton's method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are the Householder's methods. The first one after Newton's method is Halley's method, with cubic order of convergence.
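A minimal Python sketch of Newton's method; the derivative is supplied explicitly, and the example function and starting point are illustrative:

    def newton(f, df, x0, tol=1e-12, max_iter=50):
        """Newton's method: needs the derivative df and a good starting point x0."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / df(x)    # Newton step: f(x) / f'(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("no convergence; try a different starting point")

    # Example: sqrt(2) as the root of f(x) = x^2 - 2, with f'(x) = 2x
    print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))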


Secant method

Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order of convergence is the golden ratio, approximately 1.6). A generalization of the secant method in higher dimensions is Broyden's method.
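A sketch of the secant method in Python (starting points and tolerance are illustrative):

    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        """Secant method: replaces f' by the slope through the last two iterates."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # finite-difference Newton step
            if abs(x2 - x1) < tol:
                return x2
            x0, f0 = x1, f1
            x1, f1 = x2, f(x2)
        return x1

    print(secant(lambda x: x * x - 2, 1.0, 2.0))   # sqrt(2)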


Steffensen's method

If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain Steffensen's method, which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as that of Newton's method, but which does not require a derivative.
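A Python sketch of Steffensen's method; the step size h = f(x) in the slope estimate is what gives derivative-free quadratic convergence near the root:

    def steffensen(f, x0, tol=1e-12, max_iter=100):
        """Steffensen's method: quadratic convergence using only values of f."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if fx == 0:
                return x
            g = (f(x + fx) - fx) / fx      # slope estimate with step h = f(x)
            step = fx / g
            x -= step
            if abs(step) < tol:
                return x
        return x

    print(steffensen(lambda x: x * x - 2, 1.5))   # sqrt(2)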


Fixed point iteration method

We can use fixed-point iteration to find the root of a function. Given a function f(x) that we have set to zero to find the root (f(x) = 0), we rewrite the equation in terms of x so that f(x) = 0 becomes x = g(x) (note that there are often many candidate functions g(x) for each equation f(x) = 0). Next, we relabel each side of the equation as x_{n+1} = g(x_n) so that we can perform the iteration. We then pick a starting value x_0 and iterate until the sequence converges towards a root of the function. If the iteration converges, it will converge to a fixed point of g, which is a root of f. The iteration converges only if |g'(\text{root})| < 1.

As an example of converting f(x) = 0 to x = g(x), given the function f(x) = x^2 + x - 1, we can rewrite it as any of the following iterations:
: x_{n+1} = \frac{1}{x_n} - 1,
: x_{n+1} = \frac{1}{x_n + 1},
: x_{n+1} = x_n^2 + 2x_n - 1, or
: x_{n+1} = \pm\sqrt{1 - x_n}.
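A Python sketch using the second rewriting above, g(x) = 1/(x_n + 1), which is a contraction near the positive root (\sqrt{5} - 1)/2 of x^2 + x - 1; the tolerance and starting value are illustrative:

    def fixed_point(g, x0, tol=1e-12, max_iter=200):
        """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("no convergence; |g'| may exceed 1 near the root")

    # Root of x^2 + x - 1 via the contraction g(x) = 1/(1 + x):
    print(fixed_point(lambda x: 1 / (1 + x), 1.0))   # 0.6180..., i.e. (sqrt(5)-1)/2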


Inverse interpolation

The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of ''f'', resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than for the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
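A sketch of the core step in Python: x is interpolated as a quadratic function of y through three points (requiring distinct function values), and that quadratic is evaluated at y = 0. The bare iteration below is illustrative and, as the text notes, can behave poorly far from the root:

    def inverse_quadratic_step(a, b, c, fa, fb, fc):
        """Lagrange interpolation of x as a quadratic in y, evaluated at y = 0."""
        return (a * fb * fc / ((fa - fb) * (fa - fc))
                + b * fa * fc / ((fb - fa) * (fb - fc))
                + c * fa * fb / ((fc - fa) * (fc - fb)))

    def iqi(f, x0, x1, x2, tol=1e-12, max_iter=50):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        for _ in range(max_iter):
            x3 = inverse_quadratic_step(x0, x1, x2, f0, f1, f2)
            if abs(x3 - x2) < tol:
                return x3
            x0, f0, x1, f1 = x1, f1, x2, f2
            x2, f2 = x3, f(x3)
        return x2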


Combinations of methods


Brent's method

Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which of these three methods is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
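In practice one rarely reimplements Brent's method; for instance, SciPy exposes it as scipy.optimize.brentq (a usage sketch, assuming SciPy is installed; the bracketed function is illustrative):

    import math
    from scipy.optimize import brentq

    # Root of cos(x) - x in the bracket [0, 1]
    root = brentq(lambda x: math.cos(x) - x, 0.0, 1.0)
    print(root)   # about 0.7390851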


Ridders' method

Ridders' method is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence, with guaranteed convergence in at most twice the number of iterations as the bisection method.
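A Python sketch of one common formulation of Ridders' method (the update formula follows the standard textbook presentation; tolerances are illustrative):

    import math

    def ridders(f, a, b, tol=1e-12, max_iter=100):
        """Ridders' method; requires f(a) and f(b) of opposite signs."""
        fa, fb = f(a), f(b)
        for _ in range(max_iter):
            m = (a + b) / 2
            fm = f(m)
            s = math.sqrt(fm * fm - fa * fb)   # positive while the bracket is valid
            if s == 0:
                return m
            x = m + (m - a) * (1 if fa > fb else -1) * fm / s
            fx = f(x)
            if abs(fx) < tol or abs(b - a) < tol:
                return x
            # keep a sign-changing bracket using the midpoint and the new point
            if fm * fx < 0:
                a, fa, b, fb = m, fm, x, fx
            elif fa * fx < 0:
                b, fb = x, fx
            else:
                a, fa = x, fx
        return x

    print(ridders(lambda x: math.cos(x) - x, 0.0, 1.0))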


Roots of polynomials

Finding roots of polynomials is a long-standing problem that has been the object of much research throughout history. A testament to this is that up until the 19th century, algebra meant essentially the theory of polynomial equations.

Finding the root of a linear polynomial (degree one) is easy and needs only one division. For quadratic polynomials (degree two), the quadratic formula produces a solution, but its numerical evaluation may require some care for ensuring numerical stability.
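One standard technique for that stability issue is to avoid the cancellation between -b and the square root when b^2 \gg 4ac: compute one root with the sign-matched formula and recover the other from the product of the roots. A Python sketch (the sample coefficients are illustrative):

    import math

    def quadratic_roots(a, b, c):
        """Real roots of ax^2 + bx + c, avoiding catastrophic cancellation."""
        disc = b * b - 4 * a * c
        if disc < 0:
            raise ValueError("no real roots")
        q = -(b + math.copysign(math.sqrt(disc), b)) / 2
        return q / a, (c / q if q != 0 else -b / (2 * a))

    # The naive formula loses digits on the small root here; this form does not:
    print(quadratic_roots(1.0, 1e8, 1.0))   # roots near -1e8 and -1e-8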
For degrees three and four, there are closed-form solutions in terms of radicals, which are generally not convenient for numerical evaluation, as being too complicated and involving the computation of several nth roots whose computation is not easier than the direct computation of the roots of the polynomial (for example, the expression of the real roots of a cubic polynomial may involve non-real cube roots). For polynomials of degree five or higher, the Abel–Ruffini theorem asserts that there is, in general, no radical expression of the roots.

So, except for very low degrees, root finding of polynomials consists of finding approximations of the roots. By the fundamental theorem of algebra, one knows that a polynomial of degree n has at most n real or complex roots, and this number is reached for almost all polynomials. It follows that the problem of root finding for polynomials may be split into three different subproblems:
* Finding one root
* Finding all roots
* Finding roots in a specific region of the complex plane, typically the real roots or the real roots in a given interval (for example, when the roots represent a physical quantity, only the real positive ones are interesting)

For finding one root, Newton's method and other general iterative methods work generally well. For finding all the roots, the oldest method is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial x - r. The resulting polynomial contains the remaining roots, which can be found by iterating on this process. However, except for low degrees, this does not work well because of numerical instability: Wilkinson's polynomial shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from being close to zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of 10^{13}; this implies that an error of 10^{-10} on the value of the root may produce a value of the polynomial at the approximate root that is of the order of 10^3.

For avoiding these problems, methods have been elaborated that compute all roots simultaneously, to any desired accuracy. Presently the most efficient such method is the Aberth method. A free implementation is available under the name of MPSolve. This is a reference implementation, which can routinely find the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits.

The methods for computing all roots may be used for computing real roots. However, it may be difficult to decide whether a root with a small imaginary part is real or not. Moreover, as the number of real roots is, on average, the logarithm of the degree, it is a waste of computer resources to compute the non-real roots when one is interested in the real roots. The oldest method for computing the number of real roots, and the number of roots in an interval, results from Sturm's theorem, but the methods based on Descartes' rule of signs and its extensions (Budan's and Vincent's theorems) are generally more efficient. For root finding, all proceed by reducing the size of the intervals in which roots are searched until getting intervals containing zero or one root. Then the intervals containing one root may be further reduced for getting a quadratic convergence of Newton's method to the isolated roots. The main computer algebra systems (Maple, Mathematica, SageMath, PARI/GP) each have a variant of this method as the default algorithm for the real roots of a polynomial.

Another class of methods is based on converting the problem of finding polynomial roots to the problem of finding eigenvalues of the companion matrix of the polynomial. In principle, one can use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that employ the structure of the matrix, that is, that can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli's method to find the root of greatest modulus. The inverse power method with shifts, which finds some smallest root first, is what drives the complex (''cpoly'') variant of the Jenkins–Traub algorithm and gives it its numerical stability. Additionally, it is insensitive to multiple roots and has fast convergence with order 1 + \varphi \approx 2.6 (where \varphi is the golden ratio) even in the presence of clustered roots. This fast convergence comes with a cost of three polynomial evaluations per step, resulting in a residual of O(|f(x)|^{2+3\varphi}), that is, a slower convergence than with three steps of Newton's method.
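The companion-matrix approach is what NumPy's np.roots uses: it forms the companion matrix of the polynomial and returns its eigenvalues as the roots. A usage sketch:

    import numpy as np

    # Coefficients from the highest degree down: p(x) = x^3 - 6x^2 + 11x - 6
    print(np.roots([1, -6, 11, -6]))   # approximately [3., 2., 1.]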


Finding one root

The most widely used method for computing a root is Newton's method, which consists of the iterations of the computation of
: x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},
starting from a well-chosen value x_0. If f is a polynomial, the computation is faster when using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration.

Though the convergence is generally quadratic, Newton's method may converge much more slowly or even not converge at all. In particular, if the polynomial has no real root and x_0 is real, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root if x_0 is larger than it (there are easy ways for computing an upper bound on the roots; see Properties of polynomial roots). This is the starting point of Horner's method for computing the roots.

When one root r has been found, one may use Euclidean division for removing the factor x - r from the polynomial. Computing a root of the resulting quotient, and repeating the process provides, in principle, a way for computing all roots. However, this iterative scheme is numerically unstable; the approximation errors accumulate during the successive factorizations, so that the last roots are determined with a polynomial that deviates widely from a factor of the original polynomial. To reduce this error, one may, for each root that is found, restart Newton's method with the original polynomial and this approximate root as starting value. However, there is no guarantee that this will allow finding all roots. In fact, the problem of finding the roots of a polynomial from its coefficients is in general highly ill-conditioned. This is illustrated by Wilkinson's polynomial: the roots of this polynomial of degree 20 are the first 20 positive integers; changing the last bit of the 32-bit representation of one of its coefficients (equal to -210) produces a polynomial with only 10 real roots and 10 complex roots with imaginary parts larger than 0.6.

Closely related to Newton's method are Halley's method and Laguerre's method. Both use the polynomial and its first two derivatives for an iterative process that has cubic convergence. Combining two consecutive steps of these methods into a single test, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step).

When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's methods stay inside the real number line. One has to choose complex starting points to find complex roots. In contrast, the Laguerre method with a square root in its evaluation will leave the real axis of its own accord.
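Horner's rule evaluates a degree-n polynomial with about n multiplications, and a single pass can produce the derivative as well, which is what makes each Newton step on a polynomial cheap. A Python sketch (the example polynomial is illustrative):

    def horner_with_derivative(coeffs, x):
        """Evaluate p(x) and p'(x) together by Horner's rule.
        coeffs lists the coefficients from the highest degree down."""
        p, dp = coeffs[0], 0.0
        for c in coeffs[1:]:
            dp = dp * x + p       # derivative accumulates via the product rule
            p = p * x + c
        return p, dp

    # One Newton step for p(x) = x^3 - 2x - 5 at x = 2:
    p, dp = horner_with_derivative([1, 0, -2, -5], 2.0)
    print(2.0 - p / dp)   # 2.1, moving toward the root near 2.0946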


Finding roots in pairs

If the given polynomial has only real coefficients, one may wish to avoid computations with complex numbers. To that effect, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method. The real variant of the Jenkins–Traub algorithm is an improvement of this method.


Finding all roots at once

The simple Durand–Kerner method and the slightly more complicated Aberth method simultaneously find all of the roots using only simple complex number arithmetic. Accelerated algorithms for multi-point evaluation and interpolation, similar to the fast Fourier transform, can help speed them up for large degrees of the polynomial. It is advisable to choose an asymmetric, but evenly distributed set of initial points. The implementation of this method in the free software MPSolve is a reference for its efficiency and its accuracy.

Another method with this style is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas, one obtains easy approximations for the moduli of the roots, and with some more effort, for the roots themselves.
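A Python sketch of the Durand–Kerner iteration for a monic polynomial; the starting points on a spiral ((0.4 + 0.9i)^k is a commonly used choice) and the tolerance are illustrative:

    def durand_kerner(coeffs, tol=1e-12, max_iter=200):
        """All roots of a monic polynomial at once.
        coeffs: coefficients from the highest degree down, coeffs[0] == 1."""
        n = len(coeffs) - 1
        p = lambda x: sum(c * x ** (n - i) for i, c in enumerate(coeffs))
        # asymmetric but evenly distributed starting points
        z = [(0.4 + 0.9j) ** k for k in range(n)]
        for _ in range(max_iter):
            done = True
            for i in range(n):
                prod = 1.0
                for j in range(n):
                    if j != i:
                        prod *= z[i] - z[j]
                delta = p(z[i]) / prod     # Weierstrass correction for root i
                z[i] -= delta
                if abs(delta) > tol:
                    done = False
            if done:
                break
        return z

    # Roots of x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3):
    print(durand_kerner([1, -6, 11, -6]))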


Exclusion and enclosure methods

Several fast tests exist that tell whether a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly. All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable. For real roots, see the next sections. The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm, uses a winding number computation for rectangular regions in the complex plane. The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting.


Real-root isolation

Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of the 19th century, and is still an active domain of research. Most root-finding algorithms can find some real roots, but cannot certify having found all the roots. Methods for finding all complex roots, such as the Aberth method, can provide the real roots. However, because of the numerical instability of polynomials (see Wilkinson's polynomial), they may need arbitrary-precision arithmetic for deciding which roots are real. Moreover, they compute all complex roots even when only a few are real.

It follows that the standard way of computing real roots is to compute first disjoint intervals, called ''isolating intervals'', such that each one contains exactly one real root and together they contain all the roots. This computation is called ''real-root isolation''. Having an isolating interval, one may use fast numerical methods, such as Newton's method, for improving the precision of the result.

The oldest complete algorithm for real-root isolation results from Sturm's theorem. However, it appears to be much less efficient than the methods based on Descartes' rule of signs and Vincent's theorem. These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real). These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method). Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
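The basic building block of these methods, Descartes' rule of signs, is simple to state in code: the number of sign changes in the coefficient sequence bounds the number of positive real roots and has the same parity. A small Python sketch:

    def sign_variations(coeffs):
        """Count sign changes in a coefficient list (highest degree first).
        By Descartes' rule of signs, this bounds the number of positive
        real roots and equals it modulo 2; zero coefficients are skipped.
        Apply it to p(-x) for a bound on negative roots."""
        signs = [c > 0 for c in coeffs if c != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

    # p(x) = x^3 - 6x^2 + 11x - 6 has 3 sign changes -> 3 or 1 positive roots
    print(sign_variations([1, -6, 11, -6]))   # 3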


Finding multiple roots of polynomials

Most root-finding algorithms behave badly when there are multiple roots or very close roots. However, for polynomials whose coefficients are exactly given as integers or rational numbers, there is an efficient method to factorize them into factors that have only simple roots and whose coefficients are also exactly given. This method, called ''square-free factorization'', is based on the fact that the multiple roots of a polynomial are the roots of the greatest common divisor of the polynomial and its derivative. The square-free factorization of a polynomial p is a factorization p = p_1 p_2^2 \cdots p_k^k where each p_i is either 1 or a polynomial without multiple roots, and two different p_i do not have any common root. An efficient method to compute this factorization is Yun's algorithm.
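The gcd idea behind square-free factorization can be demonstrated with SymPy (a sketch; the example polynomial is illustrative):

    from sympy import symbols, gcd, div, diff, expand

    x = symbols('x')
    p = expand((x - 1)**3 * (x + 2))      # a polynomial with a triple root

    # The multiple roots of p are exactly the roots of gcd(p, p').
    g = gcd(p, diff(p, x))                # (x - 1)^2
    square_free, _ = div(p, g, x)         # (x - 1)(x + 2): same roots, all simple
    print(expand(square_free))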


See also

* List of root finding algorithms
* GNU Scientific Library
* ''n''th root algorithm

