Root-finding algorithm

In mathematics and computing, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function f, from the real numbers to real numbers or from the complex numbers to the complex numbers, is a number x such that f(x) = 0. As, generally, the zeros of a function cannot be computed exactly nor expressed in closed form, root-finding algorithms provide approximations to zeros, expressed either as floating-point numbers or as small isolating intervals, or disks for complex roots (an interval or disk output being equivalent to an approximate output together with an error bound). Solving an equation f(x) = g(x) is the same as finding the roots of the function h(x) = f(x) - g(x). Thus root-finding algorithms allow solving any equation defined by continuous functions. However, most root-finding algorithms do not guarantee that they will find all the roots; in particular, if such an algorithm does not find any root, that does not mean that no root exists.

Most numerical root-finding methods use iteration, producing a sequence of numbers that hopefully converges towards the root as a limit. They require one or more ''initial guesses'' of the root as starting values; then each iteration of the algorithm produces a successively more accurate approximation to the root. Since the iteration must be stopped at some point, these methods produce an approximation to the root, not an exact solution. Many methods compute subsequent values by evaluating an auxiliary function on the preceding values. The limit is thus a fixed point of the auxiliary function, which is chosen for having the roots of the original equation as fixed points, and for converging rapidly to these fixed points.

The behaviour of general root-finding algorithms is studied in numerical analysis. However, for polynomials, root-finding study belongs generally to computer algebra, since algebraic properties of polynomials are fundamental for the most efficient algorithms. The efficiency of an algorithm may depend dramatically on the characteristics of the given functions. For example, many algorithms use the derivative of the input function, while others work on every continuous function. In general, numerical algorithms are not guaranteed to find all the roots of a function, so failing to find a root does not prove that there is no root. However, for polynomials, there are specific algorithms that use algebraic properties for certifying that no root is missed, and for locating the roots in separate intervals (or disks for complex roots) that are small enough to ensure the convergence of numerical methods (typically Newton's method) to the unique root so located.


Bracketing methods

Bracketing methods determine successively smaller intervals (brackets) that contain a root. When the interval is small enough, a root has been found. They generally use the intermediate value theorem, which asserts that if a continuous function has values of opposite signs at the endpoints of an interval, then the function has at least one root in the interval. Therefore, they require starting with an interval such that the function takes opposite signs at its endpoints. However, in the case of polynomials there are other methods (Descartes' rule of signs, Budan's theorem and Sturm's theorem) for getting information on the number of roots in an interval. They lead to efficient algorithms for real-root isolation of polynomials, which ensure finding all real roots with a guaranteed accuracy.


Bisection method

The simplest root-finding algorithm is the bisection method. Let f be a continuous function for which one knows an interval [a, b] such that f(a) and f(b) have opposite signs (a bracket). Let c = (a + b)/2 be the middle of the interval (the midpoint, or the point that bisects the interval). Then either f(a) and f(c), or f(c) and f(b), have opposite signs, and one has divided the size of the interval by two. Although the bisection method is robust, it gains one and only one bit of accuracy with each iteration. Other methods, under appropriate conditions, can gain accuracy faster.
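
As a concrete illustration, here is a minimal sketch of the bisection method in Python; the function name, tolerance, and error handling are illustrative choices, not part of any standard:

    def bisect(f, a, b, tol=1e-12):
        """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while (b - a) / 2 > tol:
            c = (a + b) / 2          # midpoint bisects the bracket
            fc = f(c)
            if fc == 0:
                return c
            if fa * fc < 0:          # root lies in [a, c]
                b, fb = c, fc
            else:                    # root lies in [c, b]
                a, fa = c, fc
        return (a + b) / 2

For example, bisect(lambda x: x**2 - 2, 1, 2) converges to the square root of 2.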


False position (''regula falsi'')

The false position method, also called the ''regula falsi'' method, is similar to the bisection method, but instead of using the middle of the interval it uses the x-intercept of the line that connects the plotted function values at the endpoints of the interval, that is,

:c = \frac{a f(b) - b f(a)}{f(b) - f(a)}.

False position is similar to the secant method, except that, instead of retaining the last two points, it makes sure to keep one point on either side of the root. The false position method can be faster than the bisection method and will never diverge like the secant method; however, it may fail to converge in some naive implementations due to roundoff errors that may lead to a wrong sign for f(c); typically, this may occur if the rate of variation of f is large in the neighborhood of the root.
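
A minimal sketch of the naive form in Python (names and the stopping rule on |f(c)| are illustrative; unlike bisection, the bracket width need not shrink to zero):

    def false_position(f, a, b, tol=1e-12, max_iter=200):
        """Naive regula falsi: like bisection, but c is the secant's x-intercept."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the secant line
            fc = f(c)
            if abs(fc) < tol:
                return c
            if fa * fc < 0:   # keep one endpoint on each side of the root
                b, fb = c, fc
            else:
                a, fa = c, fc
        return c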


ITP method

The ITP method is the only known method to bracket the root with the same worst-case guarantees as the bisection method while guaranteeing superlinear convergence to the root of smooth functions, like the secant method. It is also the only known method guaranteed to outperform the bisection method on average for any continuous probability distribution on the location of the root. It does so by keeping track of both the bracketing interval and the minmax interval, within which any point converges as fast as with the bisection method. The construction of the queried point c follows three steps: interpolation (similar to the regula falsi), truncation (adjusting the regula falsi point, as in some improved variants of regula falsi), and then projection onto the minmax interval. The combination of these steps produces a simultaneously minmax-optimal method with guarantees similar to interpolation-based methods for smooth functions; in practice it outperforms both the bisection method and interpolation-based methods on both smooth and non-smooth functions.
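
A hedged Python sketch following the published description of the method; the parameter names kappa1, kappa2 and n0 and the default values chosen here are illustrative picks within the ranges the method allows (kappa1 > 0, 1 <= kappa2 < 1 + golden ratio, n0 >= 0):

    import math

    def itp(f, a, b, eps=1e-10, kappa1=0.1, kappa2=2.0, n0=1):
        """Sketch of the ITP method: interpolate, truncate, project, re-bracket."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        if fa > 0:                       # normalize so that fa < 0 < fb
            a, b, fa, fb = b, a, fb, fa
        n_half = math.ceil(math.log2(abs(b - a) / (2 * eps)))
        n_max = n_half + n0              # iteration budget of the minmax interval
        j = 0
        while abs(b - a) > 2 * eps:
            x_half = (a + b) / 2
            r = eps * 2 ** (n_max - j) - abs(b - a) / 2   # minmax radius
            if r < 0:
                r = 0                    # guard against floating-point drift
            # Interpolation: the regula falsi point.
            x_f = (b * fa - a * fb) / (fa - fb)
            # Truncation: perturb x_f towards the midpoint by at most delta.
            sigma = 1 if x_half >= x_f else -1
            delta = kappa1 * abs(b - a) ** kappa2
            x_t = x_f + sigma * delta if delta <= abs(x_half - x_f) else x_half
            # Projection: keep the query inside the minmax interval.
            x_itp = x_t if abs(x_t - x_half) <= r else x_half - sigma * r
            y = f(x_itp)
            if y > 0:
                b, fb = x_itp, y
            elif y < 0:
                a, fa = x_itp, y
            else:
                return x_itp
            j += 1
        return (a + b) / 2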


Interpolation

Many root-finding processes work by interpolation. This consists of using the last computed approximate values of the root for approximating the function by a polynomial of low degree, which takes the same values at these approximate roots. Then the root of the polynomial is computed and used as a new approximate value of the root of the function, and the process is iterated.

Two values allow interpolating a function by a polynomial of degree one (that is, approximating the graph of the function by a line). This is the basis of the secant method. Three values define a quadratic function, which approximates the graph of the function by a parabola. This is Muller's method. ''Regula falsi'' is also an interpolation method, which differs from the secant method by using, for interpolating by a line, two points that are not necessarily the last two computed points.
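
A hedged sketch of Muller's method in Python (the function name and tolerances are illustrative): it fits a parabola through the last three iterates and steps to the parabola's root nearest the latest iterate; the complex square root lets the iteration leave the real axis:

    import cmath

    def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
        """Sketch of Muller's method via divided differences."""
        for _ in range(max_iter):
            f0, f1, f2 = f(x0), f(x1), f(x2)
            h1, h2 = x1 - x0, x2 - x1
            d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
            a = (d2 - d1) / (h2 + h1)     # quadratic coefficient
            b = a * h2 + d2               # slope of the parabola at x2
            c = f2
            disc = cmath.sqrt(b * b - 4 * a * c)
            # Pick the sign that maximizes the denominator, avoiding cancellation.
            denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
            x3 = x2 - 2 * c / denom
            if abs(x3 - x2) < tol:
                return x3
            x0, x1, x2 = x1, x2, x3
        return x2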


Iterative methods

Although all root-finding algorithms proceed by iteration, an iterative root-finding method generally uses a specific type of iteration, consisting of defining an auxiliary function, which is applied to the last computed approximations of a root for getting a new approximation. The iteration stops when a fixed point (up to the desired precision) of the auxiliary function is reached, that is, when the new computed value is sufficiently close to the preceding ones.


Newton's method (and similar derivative-based methods)

Newton's method assumes the function ''f'' to have a continuous derivative. Newton's method may not converge if started too far away from a root. However, when it does converge, it is faster than the bisection method, and its convergence is usually quadratic. Newton's method is also important because it readily generalizes to higher-dimensional problems. Newton-like methods with higher orders of convergence are Householder's methods. The first one after Newton's method is Halley's method, with cubic order of convergence.
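
A minimal Python sketch of Newton's iteration, assuming the derivative is supplied by the caller (names and the stopping rule are illustrative):

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Sketch of Newton's method: follow the tangent line to its x-intercept."""
        x = x0
        for _ in range(max_iter):
            fx, dfx = f(x), fprime(x)
            if dfx == 0:
                raise ZeroDivisionError("zero derivative; pick another starting point")
            x_new = x - fx / dfx
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("no convergence; try a starting point closer to a root")

For example, newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0) converges quadratically to the square root of 2.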


Secant method

Replacing the derivative in Newton's method with a finite difference, we get the secant method. This method does not require the computation (nor the existence) of a derivative, but the price is slower convergence (the order is approximately 1.6, the golden ratio). A generalization of the secant method in higher dimensions is Broyden's method.
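
A minimal Python sketch (names and tolerances illustrative): the Newton step with the derivative replaced by the finite difference through the last two iterates:

    def secant(f, x0, x1, tol=1e-12, max_iter=100):
        """Sketch of the secant method."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            if f1 == f0:
                raise ZeroDivisionError("flat secant; cannot continue")
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # x-intercept of the secant
            if abs(x2 - x1) < tol:
                return x2
            x0, f0, x1, f1 = x1, f1, x2, f(x2)
        return x1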


Steffensen's method

If we use a polynomial fit to remove the quadratic part of the finite difference used in the secant method, so that it better approximates the derivative, we obtain Steffensen's method, which has quadratic convergence, and whose behavior (both good and bad) is essentially the same as Newton's method, but does not require a derivative.
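
A hedged Python sketch of one common form of Steffensen's method, which estimates the slope with g(x) = (f(x + f(x)) - f(x)) / f(x) in place of the derivative (names and tolerances are illustrative):

    def steffensen(f, x0, tol=1e-12, max_iter=100):
        """Sketch of Steffensen's method: derivative-free, quadratic convergence."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if fx == 0:
                return x
            g = (f(x + fx) - fx) / fx   # approximates f'(x) near a root
            x_new = x - fx / g
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x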


Fixed point iteration method

We can use fixed-point iteration to find the root of a function. Given a function f(x) which we have set to zero to find the root (f(x) = 0), we rewrite the equation in terms of x so that f(x) = 0 becomes x = g(x) (note that there are often many g(x) functions for each f(x) = 0 equation). Next, we relabel each side of the equation as x_{n+1} = g(x_n) so that we can perform the iteration. Next, we pick a value for x_1 and perform the iteration until it converges towards a root of the function. If the iteration converges, it will converge to a root. The iteration will only converge if |g'(\text{root})| < 1.

As an example of converting f(x) = 0 to x = g(x), given the function f(x) = x^2 + x - 1, we can rewrite it as any of the following equations:

: x_{n+1} = (1/x_n) - 1,
: x_{n+1} = 1/(x_n + 1),
: x_{n+1} = x_n^2 + 2x_n - 1, or
: x_{n+1} = \pm\sqrt{1 - x_n}.
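
A minimal Python sketch using the example above (names and tolerances are illustrative); the rewriting g(x) = 1/(1 + x) is used because |g'| < 1 at the positive root, so the iteration converges:

    def fixed_point(g, x1, tol=1e-12, max_iter=200):
        """Iterate x_{n+1} = g(x_n) until successive values agree to tol."""
        x = x1
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        raise RuntimeError("no convergence; |g'| may be >= 1 near the root")

    # Positive root of f(x) = x^2 + x - 1, via g(x) = 1/(1 + x):
    root = fixed_point(lambda x: 1 / (1 + x), 1.0)   # ~0.6180339887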


Inverse interpolation

The appearance of complex values in interpolation methods can be avoided by interpolating the inverse of ''f'', resulting in the inverse quadratic interpolation method. Again, convergence is asymptotically faster than the secant method, but inverse quadratic interpolation often behaves poorly when the iterates are not close to the root.
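
A hedged Python sketch of one inverse quadratic interpolation step, iterated (names and tolerances illustrative): x is fitted as a quadratic in y through three iterates by Lagrange interpolation, then evaluated at y = 0. As the text notes, this can misbehave far from a root, and it divides by zero if two function values coincide:

    def iqi(f, x0, x1, x2, tol=1e-12, max_iter=100):
        """Sketch of inverse quadratic interpolation."""
        for _ in range(max_iter):
            f0, f1, f2 = f(x0), f(x1), f(x2)
            # Lagrange interpolation of the inverse function, evaluated at y = 0.
            x3 = (x0 * f1 * f2 / ((f0 - f1) * (f0 - f2))
                  + x1 * f0 * f2 / ((f1 - f0) * (f1 - f2))
                  + x2 * f0 * f1 / ((f2 - f0) * (f2 - f1)))
            if abs(x3 - x2) < tol:
                return x3
            x0, x1, x2 = x1, x2, x3
        return x2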


Combinations of methods


Brent's method

Brent's method is a combination of the bisection method, the secant method and inverse quadratic interpolation. At every iteration, Brent's method decides which method out of these three is likely to do best, and proceeds by doing a step according to that method. This gives a robust and fast method, which therefore enjoys considerable popularity.
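
Because the bookkeeping that makes Brent's method robust is intricate, in practice one usually calls a library implementation rather than writing one. For example, if SciPy is available in Python:

    from scipy import optimize

    # Root of x^3 - 2x - 5 = 0 in [2, 3]; f(2) < 0 < f(3) brackets it.
    root = optimize.brentq(lambda x: x**3 - 2*x - 5, 2, 3)
    print(root)   # ~2.0945514815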


Ridders' method

Ridders' method is a hybrid method that uses the value of the function at the midpoint of the interval to perform an exponential interpolation to the root. This gives fast convergence while guaranteeing at most twice the number of iterations of the bisection method.
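
A hedged Python sketch of Ridders' update (names and tolerances illustrative); the square root is real because the endpoints bracket the root, and a valid bracket is maintained at every step:

    import math

    def ridders(f, a, b, tol=1e-12, max_iter=100):
        """Sketch of Ridders' method: exponential correction of the midpoint."""
        fa, fb = f(a), f(b)
        if fa * fb > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        x = (a + b) / 2
        for _ in range(max_iter):
            m = (a + b) / 2
            fm = f(m)
            s = math.sqrt(fm * fm - fa * fb)   # > 0 since fa * fb < 0
            if s == 0:
                return m
            x = m + (m - a) * (1 if fa > fb else -1) * fm / s
            fx = f(x)
            if abs(fx) < tol or abs(b - a) < tol:
                return x
            # Re-bracket with whichever points have opposite signs around x.
            if fm * fx < 0:
                a, fa, b, fb = m, fm, x, fx
            elif fa * fx < 0:
                b, fb = x, fx
            else:
                a, fa = x, fx
        return x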


Roots of polynomials

Finding roots of polynomials is a long-standing problem that has been the object of much research throughout history. A testament to this is that up until the 19th century, algebra meant essentially the theory of polynomial equations.

Finding the root of a linear polynomial (degree one) is easy and needs only one division. For quadratic polynomials (degree two), the quadratic formula produces a solution, but its numerical evaluation may require some care for ensuring numerical stability. For degrees three and four, there are closed-form solutions in terms of radicals, which are generally not convenient for numerical evaluation, as being too complicated and involving the computation of several nth roots whose computation is not easier than the direct computation of the roots of the polynomial (for example, the expression of the real roots of a cubic polynomial may involve non-real cube roots). For polynomials of degree five or higher, the Abel–Ruffini theorem asserts that there is, in general, no radical expression of the roots.

So, except for very low degrees, root finding of polynomials consists of finding approximations of the roots. By the fundamental theorem of algebra, one knows that a polynomial of degree n has at most n real or complex roots, and this number is reached for almost all polynomials. It follows that the problem of root finding for polynomials may be split into three different subproblems:

* Finding one root
* Finding all roots
* Finding roots in a specific region of the complex plane, typically the real roots or the real roots in a given interval (for example, when roots represent a physical quantity, only the real positive ones are interesting).

For finding one root, Newton's method and other general iterative methods work generally well.

For finding all the roots, the oldest method is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial x - r. The resulting polynomial contains the remaining roots, which can be found by iterating on this process. However, except for low degrees, this does not work well because of numerical instability: Wilkinson's polynomial shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from close to zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of 10^{12}; this implies that an error of 10^{-10} on the value of the root may produce a value of the polynomial at the approximate root that is of the order of 10^{2}.

For avoiding these problems, methods have been elaborated that compute all roots simultaneously, to any desired accuracy. Presently the most efficient such method is the Aberth method. A free implementation is available under the name of MPSolve. This is a reference implementation, which can routinely find the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits.

The methods for computing all roots may be used for computing real roots. However, it may be difficult to decide whether a root with a small imaginary part is real or not. Moreover, as the number of real roots is, on average, the logarithm of the degree, it is a waste of computer resources to compute the non-real roots when one is interested in real roots. The oldest method for computing the number of real roots, and the number of roots in an interval, results from Sturm's theorem, but the methods based on Descartes' rule of signs and its extensions (Budan's and Vincent's theorems) are generally more efficient. For root finding, all proceed by reducing the size of the intervals in which roots are searched until getting intervals containing zero or one root. Then the intervals containing one root may be further reduced for getting a quadratic convergence of Newton's method to the isolated roots. The main computer algebra systems (Maple, Mathematica, SageMath, PARI/GP) each have a variant of this method as the default algorithm for the real roots of a polynomial.

Another class of methods is based on converting the problem of finding polynomial roots to the problem of finding eigenvalues of the companion matrix of the polynomial. In principle, one can use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that employ the structure of the matrix, that is, can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli's method to find the root of greatest modulus. The inverse power method with shifts, which finds some smallest root first, is what drives the complex (''cpoly'') variant of the Jenkins–Traub algorithm and gives it its numerical stability. Additionally, it is insensitive to multiple roots and has fast convergence with order 1+\varphi\approx 2.6 (where \varphi is the golden ratio) even in the presence of clustered roots. This fast convergence comes at a cost of three polynomial evaluations per step, resulting in a residual of O(|f(x)|^{2.6}), that is, a slower convergence than with three steps of Newton's method.
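
Returning to the companion-matrix approach mentioned above, here is a minimal Python/NumPy sketch (the function name and coefficient convention, ascending order for a monic polynomial, are illustrative; this is essentially what numpy.roots does internally):

    import numpy as np

    def roots_via_companion(coeffs):
        """Roots of the monic polynomial c0 + c1 x + ... + c_{n-1} x^{n-1} + x^n,
        given coeffs = [c0, c1, ..., c_{n-1}, 1], via companion-matrix eigenvalues."""
        c = np.asarray(coeffs, dtype=complex)
        n = len(c) - 1
        C = np.zeros((n, n), dtype=complex)
        C[1:, :-1] = np.eye(n - 1)   # subdiagonal of ones
        C[:, -1] = -c[:n]            # last column holds -c0, ..., -c_{n-1}
        return np.linalg.eigvals(C)

    # x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3):
    print(roots_via_companion([-6, 11, -6, 1]))   # ~[1, 2, 3]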


Finding one root

The most widely used method for computing a root is Newton's method, which consists of the iterations of the computation of

:x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},

by starting from a well-chosen value x_0. If f is a polynomial, the computation is faster when using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration.

Though the convergence is generally quadratic, it may converge much more slowly or even not converge at all. In particular, if the polynomial has no real root and x_0 is real, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root if x_0 is larger than it (there are easy ways for computing an upper bound on the roots; see Properties of polynomial roots). This is the starting point of Horner's method for computing the roots.

When one root r has been found, one may use Euclidean division for removing the factor x - r from the polynomial. Computing a root of the resulting quotient, and repeating the process, provides, in principle, a way for computing all roots. However, this iterative scheme is numerically unstable: the approximation errors accumulate during the successive factorizations, so that the last roots are determined with a polynomial that deviates widely from a factor of the original polynomial. To reduce this error, one may, for each root that is found, restart Newton's method with the original polynomial and this approximate root as the starting value. However, there is no guarantee that this will allow finding all roots. In fact, the problem of finding the roots of a polynomial from its coefficients is in general highly ill-conditioned. This is illustrated by Wilkinson's polynomial: the roots of this polynomial of degree 20 are the first 20 positive integers; changing the last bit of the 32-bit representation of one of its coefficients (equal to -210) produces a polynomial with only 10 real roots and 10 complex roots with imaginary parts larger than 0.6.

Closely related to Newton's method are Halley's method and Laguerre's method. Both use the polynomial and its first two derivatives for an iterative process that has cubic convergence. Combining two consecutive steps of these methods into a single test, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step).

When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's methods stay inside the real number line. One has to choose complex starting points to find complex roots. In contrast, Laguerre's method, with a square root in its evaluation, will leave the real axis of its own accord.
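
A minimal Python sketch of Newton's method for a polynomial, where a single Horner pass evaluates both p and p' (the function name and coefficient convention, highest degree first, are illustrative):

    def newton_horner(coeffs, x0, tol=1e-12, max_iter=100):
        """Newton's method for the polynomial with coefficients a_n, ..., a_1, a_0."""
        x = x0
        for _ in range(max_iter):
            p, dp = coeffs[0], 0.0
            for a in coeffs[1:]:       # one Horner pass gives p(x) and p'(x)
                dp = dp * x + p
                p = p * x + a
            if dp == 0:
                raise ZeroDivisionError("p'(x) = 0; choose another starting point")
            x_new = x - p / dp
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    print(newton_horner([1, 0, -2], 1.0))   # ~1.41421356, a root of x^2 - 2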


Finding roots in pairs

If the given polynomial only has real coefficients, one may wish to avoid computations with complex numbers. To that effect, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method. The real variant of the Jenkins–Traub algorithm is an improvement of this method.


Finding all roots at once

The simple Durand–Kerner and the slightly more complicated Aberth method simultaneously find all of the roots using only simple complex number arithmetic. Accelerated algorithms for multi-point evaluation and interpolation similar to the fast Fourier transform can help speed them up for large degrees of the polynomial. It is advisable to choose an asymmetric, but evenly distributed set of initial points. The implementation of this method in the free software MPSolve is a reference for its efficiency and its accuracy. Another method with this style is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas, one obtains easy approximations for the modulus of the roots, and with some more effort, for the roots themselves.
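
A hedged Python sketch of the Durand–Kerner iteration for a monic polynomial (coefficient convention, starting points, and tolerances are illustrative; the powers of 0.4 + 0.9i are a classic asymmetric but evenly spread choice of initial points):

    def durand_kerner(coeffs, tol=1e-12, max_iter=500):
        """Sketch of Durand-Kerner: simultaneous Weierstrass updates of all n
        root estimates of a monic polynomial (coeffs highest degree first)."""
        n = len(coeffs) - 1
        def p(x):
            v = coeffs[0]
            for c in coeffs[1:]:
                v = v * x + c
            return v
        z = [(0.4 + 0.9j) ** k for k in range(n)]   # asymmetric initial points
        for _ in range(max_iter):
            new_z = []
            for i, zi in enumerate(z):
                denom = 1.0
                for j, zj in enumerate(z):
                    if j != i:
                        denom *= zi - zj
                new_z.append(zi - p(zi) / denom)    # Weierstrass correction
            if max(abs(u - v) for u, v in zip(new_z, z)) < tol:
                return new_z
            z = new_z
        return z

    # (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6:
    print(durand_kerner([1, -6, 11, -6]))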


Exclusion and enclosure methods

Several fast tests exist that tell if a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly. All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable. For real roots, see next sections. The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm uses a winding number computation for rectangular regions in the complex plane. The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting.


Real-root isolation

Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of the 19th century, and is still an active domain of research. Most root-finding algorithms can find some real roots, but cannot certify having found all the roots. Methods for finding all complex roots, such as the Aberth method, can provide the real roots. However, because of the numerical instability of polynomials (see Wilkinson's polynomial), they may need arbitrary-precision arithmetic for deciding which roots are real. Moreover, they compute all complex roots when only a few are real.

It follows that the standard way of computing real roots is to compute first disjoint intervals, called ''isolating intervals'', such that each one contains exactly one real root, and together they contain all the roots. This computation is called ''real-root isolation''. Having an isolating interval, one may use fast numerical methods, such as Newton's method, for improving the precision of the result.

The oldest complete algorithm for real-root isolation results from Sturm's theorem. However, it appears to be much less efficient than the methods based on Descartes' rule of signs and Vincent's theorem. These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real). These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method). Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
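
The isolation algorithms based on Descartes' rule of signs repeatedly apply a sign-change count to shifted and scaled versions of the polynomial. A minimal Python sketch of that counting step (the function name and coefficient convention are illustrative):

    def descartes_sign_changes(coeffs):
        """Number of sign changes in the coefficient sequence: by Descartes'
        rule of signs, an upper bound (of the same parity) on the number
        of positive real roots, counted with multiplicity."""
        signs = [c > 0 for c in coeffs if c != 0]
        return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

    # x^3 - 6x^2 + 11x - 6 has 3 sign changes, hence 1 or 3 positive roots:
    print(descartes_sign_changes([1, -6, 11, -6]))   # 3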


Finding multiple roots of polynomials

Most root-finding algorithms behave badly when there are multiple roots or very close roots. However, for polynomials whose coefficients are exactly given as integers or rational numbers, there is an efficient method to factorize them into factors that have only simple roots and whose coefficients are also exactly given. This method, called ''square-free factorization'', is based on the multiple roots of a polynomial being the roots of the greatest common divisor of the polynomial and its derivative. The square-free factorization of a polynomial p is a factorization p = p_1 p_2^2 \cdots p_k^k where each p_i is either 1 or a polynomial without multiple roots, and two different p_i do not have any common root. An efficient method to compute this factorization is Yun's algorithm.
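
In Python, for instance, SymPy exposes such a square-free factorization directly (the exact ordering of the returned factors is an implementation detail):

    from sympy import sqf_list, symbols

    x = symbols('x')
    # (x - 1)^2 * (x + 2) expands to x^3 - 3x + 2.
    coeff, factors = sqf_list(x**3 - 3*x + 2)
    print(factors)   # factor/multiplicity pairs, e.g. [(x + 2, 1), (x - 1, 2)]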


See also

* List of root-finding algorithms
* GNU Scientific Library
* ''n''th root algorithm

