Finding polynomial roots is a long-standing problem that has been the object of much research throughout history. A testament to this is that, until the 19th century, algebra meant essentially the theory of polynomial equations.
Principles
Finding the root of a linear polynomial (degree one) is easy and needs only one division: the general equation ax + b = 0 has the solution x = -b/a.
For quadratic polynomials (degree two), the quadratic formula produces a solution, but its numerical evaluation may require some care to ensure numerical stability.
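As an illustration of that care, the following Python sketch (the function name and example values are illustrative, not taken from any particular library) computes the real roots while avoiding the cancellation that occurs in the naive formula when b^2 is much larger than 4ac:

<syntaxhighlight lang="python">
import math

def stable_quadratic_roots(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0, avoiding catastrophic cancellation.

    The root that would subtract two nearly equal quantities is obtained
    from the relation x1 * x2 = c / a instead of the textbook formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                     # no real roots
    q = -0.5 * (b + math.copysign(math.sqrt(disc), b))
    x1 = q / a
    x2 = c / q if q != 0 else 0.0     # q == 0 only for a double root at 0
    return (x1, x2)

# Example: x**2 + 1e8*x + 1 has a tiny root near -1e-8 that the
# naive formula computes inaccurately
print(stable_quadratic_roots(1.0, 1e8, 1.0))
</syntaxhighlight>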
For degrees three and four, there are closed-form solutions in terms of radicals, which are generally inconvenient for numerical evaluation, as they are too complicated and involve the computation of several nth roots whose computation is not easier than the direct computation of the roots of the polynomial (for example, the expression of the real roots of a cubic polynomial may involve non-real cube roots). For polynomials of degree five or higher, the Abel–Ruffini theorem asserts that there is, in general, no radical expression of the roots.
So, except for very low degrees, root finding of polynomials consists of finding approximations of the roots. By the fundamental theorem of algebra, a polynomial of degree n has exactly n real or complex roots, counting multiplicities.
It follows that the problem of root finding for polynomials may be split into three different subproblems:
* Finding one root
* Finding all roots
* Finding roots in a specific region of the complex plane, typically the real roots or the real roots in a given interval (for example, when the roots represent a physical quantity, only the real positive ones are of interest).
For finding one root, Newton's method and other general iterative methods work generally well.
For finding all the roots, arguably the most reliable method is the Francis QR algorithm computing the eigenvalues of the companion matrix corresponding to the polynomial, implemented as the standard method in MATLAB.
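A minimal sketch of this approach is given below (using NumPy; the helper name companion_roots is illustrative, and numpy.roots already packages the same idea):

<syntaxhighlight lang="python">
import numpy as np

def companion_roots(coeffs):
    """Roots of the polynomial c[0]*x**n + c[1]*x**(n-1) + ... + c[n],
    computed as the eigenvalues of its companion matrix (the idea behind
    MATLAB's roots and numpy.roots)."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                      # make the polynomial monic
    n = len(c) - 1
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)        # subdiagonal of ones
    C[:, -1] = -c[::-1][:-1]          # last column holds -c_0, ..., -c_{n-1}
    return np.linalg.eigvals(C)

# Example: x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3
print(np.sort_complex(companion_roots([1, -6, 11, -6])))
</syntaxhighlight>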
The oldest method of finding all roots is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial x - r. The resulting polynomial contains the remaining roots, which can be found by iterating this process. However, except for low degrees, this does not work well because of numerical instability: Wilkinson's polynomial
shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from close to zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of 10^12; this implies that an error of 10^-10 in the value of the root may produce a value of the polynomial at the approximate root of the order of 10^2.
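The deflation step itself is simple synthetic division, as in the following sketch (illustrative code; the instability described above comes from repeating this division many times in floating point):

<syntaxhighlight lang="python">
def deflate(coeffs, r):
    """Synthetic division: divide the polynomial with coefficients
    coeffs[0]*x**n + ... + coeffs[n] by (x - r), returning the quotient.
    In exact arithmetic the remainder is zero when r is a root; in floating
    point the quotient accumulates rounding errors, which is why repeated
    deflation is numerically fragile for high degrees."""
    quotient = [coeffs[0]]
    for c in coeffs[1:-1]:
        quotient.append(c + r * quotient[-1])
    # the discarded remainder is coeffs[-1] + r * quotient[-1]
    return quotient

# Example: (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6, deflated by root 1
print(deflate([1, -6, 11, -6], 1))   # -> [1, -5, 6], i.e. x^2 - 5x + 6
</syntaxhighlight>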
To avoid these problems, methods have been elaborated that compute all roots simultaneously, to any desired accuracy. Presently the most efficient such method is the Aberth method. A free implementation is available under the name of MPSolve. This is a reference implementation, which can routinely find the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits.
The methods for computing all roots may be used for computing the real roots. However, it may be difficult to decide whether a root with a small imaginary part is real or not. Moreover, as the number of real roots is, on average, proportional to the logarithm of the degree, it is a waste of computer resources to compute the non-real roots when one is interested only in the real ones.
The oldest method for computing the number of real roots, and the number of roots in an interval, results from Sturm's theorem, but the methods based on Descartes' rule of signs and its extensions (Budan's theorem and Vincent's theorem) are generally more efficient. For root finding, all proceed by reducing the size of the intervals in which roots are searched until obtaining intervals containing zero or one root. Then the intervals containing one root may be further reduced to obtain quadratic convergence of Newton's method to the isolated roots. The main computer algebra systems (Maple, Mathematica, SageMath, PARI/GP) each have a variant of this method as the default algorithm for the real roots of a polynomial.
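For instance, the counting part of Descartes' rule of signs amounts to the following few lines (an illustrative sketch; real implementations combine it with interval transformations of the polynomial):

<syntaxhighlight lang="python">
def descartes_sign_count(coeffs):
    """Upper bound, by Descartes' rule of signs, on the number of positive
    real roots: the number of sign changes in the nonzero coefficients
    (the true count differs from this bound by an even number)."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# x^3 - 6x^2 + 11x - 6 has 3 sign changes, hence at most 3 positive roots
print(descartes_sign_count([1, -6, 11, -6]))   # -> 3
</syntaxhighlight>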
This class of methods is based on converting the problem of finding polynomial roots into the problem of finding the eigenvalues of the companion matrix of the polynomial; in principle, one can then use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that exploit the structure of the matrix, that is, that can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli's method for finding the root of greatest modulus. The inverse power method with shifts, which finds the root of smallest modulus first, is what drives the complex (''cpoly'') variant of the Jenkins–Traub algorithm
and gives it its numerical stability. Additionally, it converges quickly, with order 1 + φ ≈ 2.6 (where φ is the golden ratio), even in the presence of clustered roots. This fast convergence comes at the cost of three polynomial evaluations per step, so that, per evaluation, the convergence is slower than with three steps of Newton's method.
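Bernoulli's method mentioned above is easy to sketch: the ratio of consecutive terms of a linear recurrence built from the coefficients converges to the dominant root (illustrative code, assuming a single real root of greatest modulus):

<syntaxhighlight lang="python">
def bernoulli_dominant_root(coeffs, iterations=100):
    """Bernoulli's method: the ratio of consecutive terms of the linear
    recurrence built from the polynomial's coefficients converges to the
    root of greatest modulus (assuming a single dominant real root).
    coeffs = [c0, c1, ..., cn] for c0*x**n + c1*x**(n-1) + ... + cn."""
    n = len(coeffs) - 1
    y = [0.0] * (n - 1) + [1.0]          # standard starting values 0, ..., 0, 1
    for _ in range(iterations):
        nxt = -sum(c * yk for c, yk in zip(coeffs[1:], reversed(y[-n:]))) / coeffs[0]
        y.append(nxt)
    return y[-1] / y[-2]

# x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3): the dominant root is 3
print(bernoulli_dominant_root([1, -6, 11, -6]))   # ≈ 3.0
</syntaxhighlight>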
Finding one root
The most widely used method for computing a root is Newton's method, which consists of iterating the computation
:x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},
starting from a well-chosen value x_0.
If f is a polynomial, the computation is faster when using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration.
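The following sketch combines the two ingredients: Horner's rule evaluates the polynomial and its derivative in a single pass, and the Newton iteration uses both values (function names and tolerances are illustrative):

<syntaxhighlight lang="python">
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) in one pass by Horner's rule.
    coeffs = [c0, c1, ..., cn] for p(x) = c0*x**n + ... + cn."""
    p, dp = coeffs[0], 0.0
    for c in coeffs[1:]:
        dp = dp * x + p        # derivative accumulates before p is updated
        p = p * x + c
    return p, dp

def newton_polynomial_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Newton's method x_{n+1} = x_n - p(x_n)/p'(x_n) for a polynomial,
    using the one-pass Horner evaluation above."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0:
            break              # stationary point of p: Newton step undefined
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the largest root of x^3 - 6x^2 + 11x - 6, starting above it
print(newton_polynomial_root([1, -6, 11, -6], 10.0))   # ≈ 3.0
</syntaxhighlight>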
Though the convergence is generally quadratic, it may converge much more slowly or even not converge at all. In particular, if the polynomial has no real root and x_0 is real, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root if x_0 is larger than it (there are easy ways of computing an upper bound on the roots; see Properties of polynomial roots). This is the starting point of Horner's method for computing the roots.
When one root r has been found, one may use Euclidean division to remove the factor x - r from the polynomial. Computing a root of the resulting quotient, and repeating the process, provides, in principle, a way of computing all roots. However, this iterative scheme is numerically unstable: the approximation errors accumulate during the successive factorizations, so that the last roots are determined with a polynomial that deviates widely from a factor of the original polynomial. To reduce this error, one may, for each root that is found, restart Newton's method with the original polynomial, using this approximate root as starting value.
However, there is no guarantee that this will allow finding all roots. Indeed, the problem of finding the roots of a polynomial from its coefficients can be highly ill-conditioned. This is illustrated by Wilkinson's polynomial: the roots of this polynomial of degree 20 are the 20 first positive integers; changing the last bit of the 32-bit representation of one of its coefficients (equal to -210) produces a polynomial with only 10 real roots and 10 complex roots with imaginary parts larger than 0.6.
Closely related to Newton's method are Halley's method and Laguerre's method. Both use the polynomial and its first two derivatives for an iterative process that has cubic convergence. Combining two consecutive steps of these methods into a single step, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step).
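As an illustration, one step of Halley's method for a general smooth function can be written as follows (an illustrative sketch; for a polynomial the three values would be obtained by Horner-type evaluation):

<syntaxhighlight lang="python">
def halley_step(f, df, ddf, x):
    """One step of Halley's method,
    x_{n+1} = x_n - 2*f*f' / (2*f'**2 - f*f''),
    which has cubic convergence to a simple root."""
    fx, dfx, ddfx = f(x), df(x), ddf(x)
    return x - 2 * fx * dfx / (2 * dfx * dfx - fx * ddfx)

# Example: p(x) = x^3 - 2, whose real root is 2**(1/3) ≈ 1.2599
p   = lambda x: x**3 - 2
dp  = lambda x: 3 * x**2
ddp = lambda x: 6 * x

x = 1.0
for _ in range(4):
    x = halley_step(p, dp, ddp, x)
print(x)   # ≈ 1.2599210498948732
</syntaxhighlight>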
When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's methods stay inside the real number line. One has to choose complex starting points to find complex roots. In contrast, the Laguerre method, with a square root in its evaluation, will leave the real axis of its own accord.
Finding roots in pairs
If the given polynomial has only real coefficients, one may wish to avoid computations with complex numbers. To that effect, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method. The real variant of the Jenkins–Traub algorithm is an improvement of this method.
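A sketch of Bairstow's method, following the classical formulation with two synthetic-division sequences found in standard numerical-methods textbooks (variable names, starting values and the stopping test are illustrative):

<syntaxhighlight lang="python">
def bairstow(a, r=0.0, s=0.0, tol=1e-12, max_iter=100):
    """Find r, s such that x**2 - r*x - s (approximately) divides the
    polynomial a[0] + a[1]*x + ... + a[n]*x**n (degree n >= 3 assumed),
    using Newton's method in the two unknowns r, s.  The roots of
    x**2 - r*x - s are then a pair of roots of the polynomial
    (real or complex conjugate)."""
    n = len(a) - 1
    for _ in range(max_iter):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        # first synthetic division: remainder coefficients b[1], b[0]
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        # second synthetic division: partial derivatives of b[1], b[0]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        # 2x2 Newton step driving the remainder (b[1], b[0]) to zero
        det = c[2] * c[2] - c[3] * c[1]
        if det == 0:
            break
        dr = (-b[1] * c[2] + b[0] * c[3]) / det
        ds = (-b[0] * c[2] + b[1] * c[1]) / det
        r, s = r + dr, s + ds
        if abs(dr) + abs(ds) < tol:
            break
    return r, s

# (x**2 + 1)*(x**2 - 3*x + 2) = 2 - 3*x + 3*x**2 - 3*x**3 + x**4;
# starting near the factor x**2 + 1, i.e. r ~ 0 and s ~ -1:
print(bairstow([2, -3, 3, -3, 1], r=0.1, s=-0.9))   # ~ (0.0, -1.0)
</syntaxhighlight>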
Finding all roots at once
The simple Durand–Kerner method and the slightly more complicated Aberth method simultaneously find all of the roots using only simple complex number arithmetic. Accelerated algorithms for multi-point evaluation and interpolation, similar to the fast Fourier transform, can help speed them up for large degrees of the polynomial. It is advisable to choose an asymmetric, but evenly distributed, set of initial points. The implementation of this method in the free software MPSolve is a reference for its efficiency and its accuracy.
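A sketch of the Durand–Kerner iteration (illustrative code; the starting points follow the common choice of powers of 0.4 + 0.9i, which is asymmetric but evenly distributed):

<syntaxhighlight lang="python">
def durand_kerner(coeffs, iterations=100):
    """Durand–Kerner iteration: all root approximations are refined
    simultaneously, each one being corrected by
    p(z_i) / prod_{j != i} (z_i - z_j).
    coeffs = [1, c1, ..., cn] for the monic polynomial x**n + c1*x**(n-1) + ... + cn."""
    n = len(coeffs) - 1

    def p(z):
        result = 0j
        for c in coeffs:
            result = result * z + c
        return result

    roots = [(0.4 + 0.9j) ** k for k in range(n)]   # starting points
    for _ in range(iterations):
        new_roots = []
        for i, zi in enumerate(roots):
            denom = 1.0 + 0j
            for j, zj in enumerate(roots):
                if j != i:
                    denom *= zi - zj
            new_roots.append(zi - p(zi) / denom)
        roots = new_roots
    return roots

# Example: x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3
print(sorted(durand_kerner([1, -6, 11, -6]), key=lambda z: z.real))
</syntaxhighlight>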
Another method of this type is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies the differences between the magnitudes of the roots. Applying Viète's formulas, one obtains easy approximations of the moduli of the roots, and with some more effort, of the roots themselves.
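One root-squaring step, and the resulting estimates of the root moduli, can be sketched as follows (illustrative code, valid when the moduli are well separated; after a few steps the coefficients grow very quickly, which is why careful implementations rescale them):

<syntaxhighlight lang="python">
def graeffe_iteration(a):
    """One Dandelin–Graeffe root-squaring step on the monic polynomial
    a[0]*x**n + a[1]*x**(n-1) + ... + a[n] (a[0] == 1): returns the monic
    polynomial whose roots are the squares of the roots of the input."""
    n = len(a) - 1
    b = []
    for k in range(n + 1):
        term = a[k] * a[k]
        for d in range(1, min(k, n - k) + 1):
            term += 2 * (-1) ** d * a[k - d] * a[k + d]
        b.append((-1) ** k * term)
    return b

def root_moduli_estimates(a, steps=5):
    """Estimate the moduli of the roots from coefficient ratios after a few
    root-squaring steps (reliable when the moduli are well separated)."""
    c = list(a)
    for _ in range(steps):
        c = graeffe_iteration(c)
    power = 2 ** steps
    return [abs(c[k] / c[k - 1]) ** (1.0 / power) for k in range(1, len(c))]

# Example: x^3 - 6x^2 + 11x - 6 has roots of moduli 3, 2, 1
print(root_moduli_estimates([1, -6, 11, -6]))   # ≈ [3.0, 2.0, 1.0]
</syntaxhighlight>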
Exclusion and enclosure methods
Several fast tests exist that tell if a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly.
All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable.
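The basic shift operation, computing the coefficients of p(x + c) from those of p(x), can be done by repeated synthetic division, as in this sketch (illustrative code; FFT-based methods replace this quadratic-time scheme for very large degrees):

<syntaxhighlight lang="python">
def taylor_shift(coeffs, c):
    """Coefficients of p(x + c) given those of p(x) =
    coeffs[0]*x**n + ... + coeffs[n], computed by repeated synthetic
    division by (x - c): the successive remainders are the Taylor
    coefficients of p about c."""
    a = list(coeffs)
    n = len(a) - 1
    shifted = []
    for _ in range(n + 1):
        for i in range(1, len(a)):
            a[i] += c * a[i - 1]       # synthetic division by (x - c)
        shifted.append(a.pop())        # remainder = next Taylor coefficient
    return shifted[::-1]               # highest-degree coefficient first

# Example: p(x) = x^2 (coefficients [1, 0, 0]); p(x + 1) = x^2 + 2x + 1
print(taylor_shift([1, 0, 0], 1))   # -> [1, 2, 1]
</syntaxhighlight>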
For real roots, see the next sections.
The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm, uses a winding number computation for rectangular regions in the complex plane.
The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting.
Real-root isolation
Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of the 19th century, and is still an active domain of research. Most root-finding algorithms can find some real roots, but cannot certify having found all the roots. Methods for finding all complex roots, such as the Aberth method, can provide the real roots. However, because of the numerical instability of polynomials (see Wilkinson's polynomial), they may need arbitrary-precision arithmetic to decide which roots are real. Moreover, they compute all complex roots even when only a few are real.
It follows that the standard way of computing real roots is to compute first disjoint intervals, called ''isolating intervals'', such that each one contains exactly one real root, and together they contain all the real roots. This computation is called ''real-root isolation''. Given an isolating interval, one may then use fast numerical methods, such as Newton's method, for improving the precision of the result.
The oldest complete algorithm for real-root isolation results from Sturm's theorem. However, it appears to be much less efficient than the methods based on Descartes' rule of signs and Vincent's theorem. These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real).
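As an illustration of the Sturm-based approach, the following sketch counts the distinct real roots in an interval (illustrative code; it assumes a square-free polynomial and uses naive floating-point polynomial division, whereas serious implementations work with exact arithmetic):

<syntaxhighlight lang="python">
def poly_eval(p, x):
    """Evaluate p[0]*x**n + ... + p[n] by Horner's rule."""
    v = 0.0
    for c in p:
        v = v * x + c
    return v

def poly_rem(a, b):
    """Remainder of the polynomial division a / b (coefficients descending)."""
    a = list(a)
    while len(a) >= len(b) and any(a):
        factor = a[0] / b[0]
        for i in range(len(b)):
            a[i] -= factor * b[i]
        a.pop(0)                 # leading coefficient is now (numerically) zero
    return a if a else [0.0]

def sturm_sequence(p):
    """Sturm chain: p, p', then the negated remainders of successive
    divisions, stopping at a constant (p is assumed square-free)."""
    n = len(p) - 1
    derivative = [c * (n - i) for i, c in enumerate(p[:-1])]
    chain = [list(p), derivative]
    while len(chain[-1]) > 1:
        chain.append([-c for c in poly_rem(chain[-2], chain[-1])])
    return chain

def count_real_roots(p, a, b):
    """Number of distinct real roots of p in (a, b], by Sturm's theorem:
    sign changes of the chain at a minus sign changes at b."""
    chain = sturm_sequence(p)

    def sign_changes(x):
        values = [v for v in (poly_eval(q, x) for q in chain) if v != 0]
        return sum(1 for u, v in zip(values, values[1:]) if u * v < 0)

    return sign_changes(a) - sign_changes(b)

# x^3 - 6x^2 + 11x - 6 has the three real roots 1, 2, 3
print(count_real_roots([1, -6, 11, -6], 0.0, 2.5))   # -> 2
</syntaxhighlight>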
These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method). Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
Finding multiple roots of polynomials
Numerical computation of multiple roots
Multiple roots are highly sensitive: they are, in general, ill-conditioned and computed inaccurately in numerical computation. A method by Zhonggang Zeng (2004), implemented as a MATLAB package, computes multiple roots and the corresponding multiplicities of a polynomial accurately, even if the coefficients are inexact.
The method can be summarized in two steps. Let p be the given polynomial. The first step determines the multiplicity structure by applying square-free factorization with a numerical greatest common divisor algorithm.
This allows writing p as
:p(x) = a(x - z_1)^{m_1}(x - z_2)^{m_2}\cdots(x - z_k)^{m_k},
where m_1, \ldots, m_k are the multiplicities of the distinct roots z_1, \ldots, z_k. This equation is an overdetermined system: matching coefficients gives n = m_1 + \cdots + m_k equations in the k unknowns z_1, \ldots, z_k, with k smaller than n whenever some root is multiple.
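The first step can be illustrated with SymPy's exact square-free factorization; Zeng's method replaces the exact GCD by a numerical one, but the recovered multiplicity structure is the same kind of object (illustrative code):

<syntaxhighlight lang="python">
from sympy import symbols, expand, sqf_list

x = symbols('x')
# p(x) = (x - 1)**3 * (x - 2)**2 * (x + 3), expanded so the structure is hidden
p = expand((x - 1)**3 * (x - 2)**2 * (x + 3))

leading, factors = sqf_list(p)
# factors lists (square-free factor, multiplicity) pairs, e.g. (up to ordering)
# [(x + 3, 1), (x - 2, 2), (x - 1, 3)]
print(factors)
</syntaxhighlight>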