Polynomial Root-finding
Finding the roots of polynomials is a long-standing problem that has been extensively studied throughout history and has substantially influenced the development of mathematics. It involves determining either a numerical approximation or a closed-form expression of the roots of a univariate polynomial, i.e., determining approximate or closed-form solutions of x in the equation a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = 0, where the a_i are either real or complex numbers. Efforts to understand and solve polynomial equations led to the development of important mathematical concepts, including irrational and complex numbers, as well as foundational structures in modern algebra such as fields, rings, and groups. Despite its historical importance, finding the roots of higher-degree polynomials no longer plays a central role in mathematics and computational mathematics, with one major exception in computer algebra.


Overview


Closed-form formulas

Closed-form formulas for polynomial roots exist only when the degree of the polynomial is less than 5. The quadratic formula has been known since antiquity, and the cubic and quartic formulas were discovered in full generality during the 16th century. When the degree of the polynomial is at least 5, a closed-form expression for the roots in terms of the polynomial coefficients does not exist in general, if only additions, subtractions, multiplications, divisions, and radicals (taking n-th roots) are allowed in the formula. This is the content of the celebrated Abel–Ruffini theorem. On the other hand, the fundamental theorem of algebra shows that every nonconstant polynomial has at least one root. Therefore, root-finding algorithms consist of finding numerical solutions in most cases.
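For illustration, a minimal Python sketch (the helper name quadratic_roots is ours, not a library function): the quadratic case has a closed-form radical solution, while higher degrees are handled numerically, here via NumPy's general root finder.

```python
import numpy as np

def quadratic_roots(a, b, c):
    """Roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = np.lib.scimath.sqrt(b * b - 4 * a * c)   # complex square root if needed
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, -3, 2))        # (2.0, 1.0)

# Degree >= 5: no general formula in radicals (Abel-Ruffini), so solve numerically.
print(np.roots([1, 0, 0, 0, 0, -1]))    # the five fifth roots of unity
```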


Numerical algorithms

Root-finding algorithms can be broadly categorized according to the goal of the computation. Some methods aim to find a single root, while others are designed to find all complex roots at once. In certain cases, the objective may be to find roots within a specific region of the complex plane. It is often desirable, and sometimes necessary, to select an algorithm specific to the computational task for efficiency and accuracy reasons. See Root Finding Methods for a summary of the existing methods available in each case.


History


Closed-form formulas

The root-finding problem of polynomials was first recognized by the Sumerians and then the Babylonians. Since then, the search for closed-form formulas for polynomial equations lasted for thousands of years.


The quadratics

The Babylonians and Egyptians were able to solve specific quadratic equations in the second millennium BCE, and their solutions essentially correspond to the quadratic formula. However, it took two millennia of effort to state the quadratic formula in an explicit form similar to the modern formulation, provided by the Indian mathematician Brahmagupta in his book ''Brāhmasphuṭasiddhānta'' (625 CE). The full recognition of the quadratic formula requires the introduction of complex numbers, which took another millennium.


The cubics and the quartics

The first breakthrough in a closed-form formula for polynomials of degree higher than two took place in Italy. In the early 16th century, the Italian mathematician Scipione del Ferro found a closed-form formula for cubic equations of the form x^3+mx=n, where m,n are nonnegative numbers. Later, Niccolò Tartaglia also discovered methods to solve such cubic equations, and Gerolamo Cardano summarized and published their work in his book ''Ars Magna'' in 1545. Meanwhile, Cardano's student Lodovico Ferrari discovered the closed-form formula for quartic equations in 1540. His solution is based on the closed-form formula for cubic equations, and thus could not be published before the cubic formula. In ''Ars Magna'', Cardano noticed that Tartaglia's method sometimes involves extracting the square root of a negative number. In fact, this can happen even when the roots themselves are real. Later, the Italian mathematician Rafael Bombelli investigated these mathematical objects further, giving explicit arithmetic rules for them in his book ''Algebra'', published in 1569. These mathematical objects are now known as the complex numbers, which are foundational in mathematics, physics, and engineering.
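Del Ferro's case x^3+mx=n with nonnegative m and n can be checked against Cardano's radical formula; a minimal Python sketch (the function name is illustrative):

```python
import math

def del_ferro_root(m, n):
    """Real root of x^3 + m*x = n for m, n >= 0, via Cardano's formula."""
    d = math.sqrt((n / 2) ** 2 + (m / 3) ** 3)             # nonnegative for m, n >= 0
    cbrt = lambda v: math.copysign(abs(v) ** (1 / 3), v)   # real cube root
    return cbrt(n / 2 + d) + cbrt(n / 2 - d)

x = del_ferro_root(6, 20)       # x^3 + 6x = 20 has the root x = 2
print(x, x**3 + 6 * x)          # ~2.0, ~20.0
```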


Insolvability of the quintics

Since the discovery of the cubic and quartic formulas, solving quintic equations in closed form had been a major problem in algebra. The French lawyer and mathematician François Viète, who first formulated the root formula for cubics in modern language and applied trigonometric methods to root-solving, believed that his methods generalized to a closed-form formula in radicals for polynomials of arbitrary degree. Descartes held the same opinion. However, Lagrange's analysis of why the methods used for the cubics and quartics fail in degree five cast doubt on this belief, and the first attempted proof of the insolvability of the quintic was given by the Italian mathematician Paolo Ruffini. He published six versions of his proof between 1799 and 1813, yet his proof was not widely accepted, as the writing was long and difficult to understand, and it turned out to have a gap. The first rigorous and accepted proof of the insolvability of the quintic was famously given by Niels Henrik Abel in 1824. In the paper, Abel proved that polynomials of degree greater than 4 do not, in general, have a closed-form root formula in radicals; the result is now usually understood through the Galois theory of field extensions. This put an end to the search for closed-form formulas for the roots of polynomials in terms of radicals of the polynomial coefficients.


General solution using combinatorics

In 2025, Norman Wildberger and Dean Rubine introduced a general solution for arbitrary degree, involving a formal power series. The equation 1-x+a_2x^2+a_3x^3+a_4x^4+\cdots=0 has a formal power series solution x whose coefficients are generalizations of the Catalan numbers (hyper-Catalan numbers) in the coefficients a_i. This is a generalization of a solution for the quadratics using Catalan numbers C_n, for which it reduces to x=\sum_{n\ge 0}C_n t^n (writing t=a_2). For the quintic, this is closely related to the Eisenstein series.
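For the quadratic case 1-x+tx^2=0, the Catalan-number series can be compared with the closed-form root; a minimal sketch, assuming a small t so the series converges quickly:

```python
import math

def catalan(n):
    return math.comb(2 * n, n) // (n + 1)

def series_root(t, terms=40):
    """Partial sum of x = sum_n C_n t^n, a formal solution of 1 - x + t*x^2 = 0."""
    return sum(catalan(n) * t**n for n in range(terms))

t = 0.1                                         # series converges for |t| < 1/4
x = series_root(t)
print(x, 1 - x + t * x**2)                      # root estimate, residual ~ 0
print((1 - math.sqrt(1 - 4 * t)) / (2 * t))     # closed-form root for comparison
```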


Numerical methods

Since finding a closed-form formula for higher-degree polynomials is significantly harder than for quadratic equations, the earliest attempts to solve cubic equations were either geometric or numerical. Also, for practical purposes, numerical solutions are necessary.


Iterative methods

The earliest iterative approximation methods for root-finding were developed to compute square roots. In Heron of Alexandria's book ''Metrica'' (1st–2nd century CE), approximate values of square roots were computed by iteratively improving an initial estimate. Jamshīd al-Kāshī presented a generalized version of the method to compute n-th roots. A similar method was also found in Henry Briggs's publication ''Trigonometria Britannica'' in 1633. Franciscus Vieta also developed an approximation method that is almost identical to Newton's method. Newton further generalized the method to compute the roots of arbitrary polynomials in ''De analysi per aequationes numero terminorum infinitas'' (written in 1669, published in 1711); the method is now known as Newton's method. In 1690, Joseph Raphson published a refinement of Newton's method, presenting it in a form that more closely aligns with the modern version used today. In 1879, the English mathematician Arthur Cayley noticed the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values in his paper ''The Newton–Fourier imaginary problem''. This opened the way to the study of the theory of iterations of rational functions.
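Heron's square-root iteration is the special case of Newton's method applied to f(x)=x^2-a; a minimal sketch:

```python
def heron_sqrt(a, x0=1.0, iterations=8):
    """Approximate sqrt(a) by repeatedly averaging x with a/x (Heron's method)."""
    x = x0
    for _ in range(iterations):
        x = 0.5 * (x + a / x)   # Newton step for f(x) = x^2 - a
    return x

print(heron_sqrt(2.0))   # ~1.4142135623730951
```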


Real-root isolation methods

A class of methods for finding numerical values of real roots is based on real-root isolation. The first example of such a method was given by René Descartes in 1637: his rule of signs counts (up to an even number) the positive real roots of a polynomial by examining sign changes in its coefficients. In 1807, the French mathematician François Budan de Boislaurent generalized Descartes's result into Budan's theorem, which counts the real roots in a half-open interval (''a'', ''b'']. However, neither method yields an effective algorithm on its own. The first complete real-root isolation algorithm was given by Jacques Charles François Sturm in 1829, and is known as Sturm's theorem. In 1836, Alexandre Joseph Hidulphe Vincent proposed a method for isolating real roots of polynomials using continued fractions, a result now known as Vincent's theorem. The work was largely forgotten until it was rediscovered over a century later by J. V. Uspensky, who included it in his 1948 textbook ''Theory of Equations''. The theorem was subsequently brought to wider academic attention by the American mathematician Alkiviadis G. Akritas, who recognized its significance while studying Uspensky's account. The first implementation of a real-root isolation method on a modern computer was given by G. E. Collins and Alkiviadis G. Akritas in 1976, where they proved an effective version of Vincent's theorem. Variants of the algorithm have been studied since.
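Descartes's rule of signs is straightforward to express in code: the number of positive real roots is at most the number of sign changes in the coefficient sequence, and differs from it by an even number. A minimal sketch (function name illustrative):

```python
def sign_changes(coeffs):
    """Count sign changes in a coefficient list, ignoring zeros
    (Descartes's rule: an upper bound on the number of positive real roots)."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^3 - 3x^2 + 4 has coefficients [1, -3, 0, 4]: two sign changes,
# so at most two positive real roots (here: a double root at x = 2).
print(sign_changes([1, -3, 0, 4]))   # 2
```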


Mechanical methods

Before electronic computers were invented, people used mechanical devices to automate polynomial root-finding. In 1758, the Hungarian scientist J. A. de Segner proposed a design for a root-solving machine in his paper; it operates by drawing the graph of the polynomial on a plane and finding the roots as the intersections of the graph with the x-axis. In 1770, the English mathematician John Rowning investigated the possibility of drawing the graph of polynomials via local motions. In 1845, the English mathematician Francis Bashforth proposed using trigonometric methods to simplify the root-finding problem. Given a polynomial a_0+a_1x+\cdots+a_nx^n=0, substitute x=\cos t. Since \cos^k t can be written as a linear combination of \cos jt, j\in\Z (see Chebyshev polynomials), the polynomial can be reformulated as b_0+b_1\cos t+b_2\cos 2t+\cdots+b_n\cos nt=0. Such curves can be drawn by a harmonic analyzer (also known as a tide-predicting machine). The first harmonic analyzer was built by Lord Kelvin in 1872, while Bashforth had envisioned such a machine in his paper 27 years earlier. The Spanish engineer and mathematician Leonardo Torres Quevedo built several machines for solving real and complex roots of polynomials between 1893 and 1900. His machines employ a logarithmic algorithm and have a mechanical component, the endless spindle, that computes the value of \log(a+b) from \log a and \log b with high accuracy. This allowed him to achieve high accuracy in polynomial root-finding: the machine computes the roots of degree-8 polynomials with an accuracy on the order of 10^{-3}.
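Bashforth's cosine reformulation is, in modern terms, a change from the monomial basis to the Chebyshev basis, since T_j(\cos t)=\cos jt; a minimal sketch using NumPy's basis conversion:

```python
import numpy as np
from numpy.polynomial import chebyshev

# p(x) = 4x^3 - 3x, coefficients a_0..a_3 in the monomial basis.
a = [0.0, -3.0, 0.0, 4.0]

# Same polynomial in the Chebyshev basis: p(x) = sum_j b_j T_j(x),
# so with x = cos(t) it becomes sum_j b_j cos(j*t).
b = chebyshev.poly2cheb(a)
print(b)                                  # ~[0, 0, 0, 1]: 4x^3 - 3x = T_3(x)

t = 0.7
x = np.cos(t)
print(4 * x**3 - 3 * x, np.cos(3 * t))    # both values agree
```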


Common root-finding algorithms


Finding one root

The most widely used method for computing a root of any differentiable function f is Newton's method, in which an initial guess x_0 is iteratively refined. At each iteration, the tangent line to f at x_n is used as a linear approximation to f, and its root is used as the succeeding guess x_{n+1}: x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}. In general, the value of x_n converges to a root of f. In particular, the method can be applied to compute a root of polynomial functions. In this case, the computations in Newton's method can be accelerated using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration. Though the rate of convergence of Newton's method is generally quadratic, it might converge much more slowly or even not converge at all. In particular, if the polynomial has no real root and x_0 is chosen to be a real number, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root whenever x_0 is larger than it (there are easy ways of computing an upper bound on the roots; see Properties of polynomial roots). This is the starting point of Horner's method for computing the roots. Closely related to Newton's method are Halley's method and Laguerre's method. Both use the polynomial and its first two derivatives for an iterative process that has cubic convergence. Combining two consecutive steps of these methods into a single test, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step). When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's methods stay on the real line. One has to choose complex starting points to find complex roots. In contrast, the Laguerre method, with a square root in its evaluation, will leave the real axis of its own accord.
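A minimal sketch of Newton's method specialized to polynomials, evaluating the polynomial and its derivative together with Horner's rule (function names are illustrative):

```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x); coefficients ordered from highest degree down."""
    p, dp = 0.0, 0.0
    for c in coeffs:
        dp = dp * x + p      # derivative accumulates before p is updated
        p = p * x + c
    return p, dp

def newton_polynomial_root(coeffs, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0:
            break            # flat tangent: give up (in practice, perturb x)
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

# p(x) = x^3 - 2x - 5 (Newton's own example); root near 2.0945514815...
print(newton_polynomial_root([1.0, 0.0, -2.0, -5.0], x0=2.0))
```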


Finding all complex roots


Methods using complex-number arithmetic

Both the Aberth method and the similar yet simpler Durand–Kerner method simultaneously find all of the roots using only simple complex-number arithmetic. The Aberth method is presently the most efficient method of this kind. Accelerated algorithms for multi-point evaluation and interpolation, similar to the fast Fourier transform, can help speed them up for large degrees of the polynomial. A free implementation of Aberth's method is available under the name of MPSolve. This is a reference implementation, which can routinely find the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits. Another method in this style is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas, one obtains easy approximations for the moduli of the roots, and with some more effort, for the roots themselves.
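A minimal sketch of the Durand–Kerner iteration for a monic polynomial (the classic (0.4+0.9i)^k starting points are one common heuristic):

```python
import numpy as np

def durand_kerner(coeffs, iterations=200):
    """All roots of a monic polynomial; coefficients ordered from highest degree down."""
    coeffs = np.asarray(coeffs, dtype=complex)
    n = len(coeffs) - 1
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)       # non-real, non-symmetric starting points
    for _ in range(iterations):
        for i in range(n):
            p = np.polyval(coeffs, z[i])
            denom = np.prod(z[i] - np.delete(z, i))
            z[i] -= p / denom                     # Weierstrass correction
    return z

# x^3 - 1: the three cube roots of unity.
print(np.sort_complex(durand_kerner([1, 0, 0, -1])))
```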


Methods using linear algebra

Arguably the most reliable method to find all roots of a polynomial is to compute the eigenvalues of the companion matrix of the corresponding monic polynomial; these eigenvalues coincide with the roots of the polynomial. There are many algorithms for computing the eigenvalues of matrices. The standard method for finding all roots of a polynomial in MATLAB uses the Francis QR algorithm to compute the eigenvalues of the companion matrix of the polynomial. In principle, one can use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that exploit the structure of the matrix, that is, that can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli's method for finding the root of greatest modulus. The inverse power method with shifts, which finds the smallest root first, is what drives the complex (''cpoly'') variant of the Jenkins–Traub algorithm and gives it its numerical stability. Additionally, it converges quickly, with order 1+\varphi\approx 2.6 (where \varphi is the golden ratio), even in the presence of clustered roots. This fast convergence comes at the cost of three polynomial evaluations per step, which, per evaluation, is a slower convergence than with three steps of Newton's method.
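A minimal sketch of the companion-matrix approach, building the matrix explicitly and passing it to a general dense eigenvalue routine (a production implementation would instead exploit the matrix structure, as noted above):

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of a polynomial (coefficients from highest degree down) as eigenvalues
    of the companion matrix of the associated monic polynomial."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                          # make the polynomial monic
    n = len(c) - 1
    companion = np.zeros((n, n), dtype=complex)
    companion[1:, :-1] = np.eye(n - 1)    # ones on the subdiagonal
    companion[:, -1] = -c[:0:-1]          # last column: -a_0, -a_1, ..., -a_{n-1}
    return np.linalg.eigvals(companion)

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
print(np.sort(roots_via_companion([1, -6, 11, -6]).real))
```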


Limitations of iterative methods for finding all roots

The oldest method of finding all roots is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial x - r. The resulting polynomial contains the remaining roots, which can be found by iterating on this process. This idea, despite being common in theoretical derivations, does not work well in numerical computations because of numerical instability: Wilkinson's polynomial shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from close to zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at that root may be of the order of 10^{12}; this implies that an error of \varepsilon on the value of the root may produce a value of the polynomial at the approximate root of the order of 10^{12}\varepsilon.
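The instability is easy to reproduce; a minimal sketch applying Wilkinson's classic perturbation of the x^{19} coefficient:

```python
import numpy as np

coeffs = np.poly(np.arange(1, 21))   # Wilkinson's polynomial, roots 1, 2, ..., 20

perturbed = coeffs.copy()
perturbed[1] -= 2.0**-23             # Wilkinson's classic tiny change to the x^19 coefficient

r = np.roots(perturbed)
print(r[np.abs(r.imag) > 0.1])       # several roots have moved off the real line
```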


Finding all real roots

Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of the 19th century, and is still an active domain of research. Methods for finding all complex roots can provide the real roots. However, because of the numerical instability of polynomials, they may need arbitrary-precision arithmetic to decide whether a root with a small imaginary part is real or not. Moreover, as the number of real roots is, on average, proportional to the logarithm of the degree, it is a waste of computer resources to compute the non-real roots when one is interested only in real roots. The standard way of computing real roots is to compute first disjoint intervals, called ''isolating intervals'', such that each one contains exactly one real root and together they contain all the real roots. This computation is called ''real-root isolation''. Given an isolating interval, one may then use fast numerical methods, such as Newton's method, for improving the precision of the result. The oldest complete algorithm for real-root isolation results from Sturm's theorem. However, it appears to be much less efficient than the methods based on Descartes' rule of signs and its extensions, Budan's and Vincent's theorems. These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real). These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method), as well as in other major computer algebra systems (SageMath, PARI/GP). Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
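For a square-free polynomial, Sturm's theorem gives the number of distinct real roots in an interval from the sign variations along the Sturm chain; a minimal sketch (helper names are illustrative, with no attempt at the efficiency of the modern methods mentioned above):

```python
import numpy as np

def sturm_chain(coeffs):
    """Sturm chain of a square-free polynomial; coefficients from highest degree down."""
    p0 = np.array(coeffs, dtype=float)
    chain = [p0, np.polyder(p0)]
    while len(chain[-1]) > 1:                 # stop once the last entry is a constant
        _, rem = np.polydiv(chain[-2], chain[-1])
        rem = np.trim_zeros(-rem, 'f')        # next entry is minus the remainder
        if rem.size == 0:
            break                             # exact division: polynomial not square-free
        chain.append(rem)
    return chain

def sign_variations(chain, x):
    values = [np.polyval(p, x) for p in chain]
    signs = [v > 0 for v in values if abs(v) > 1e-12]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def count_real_roots(coeffs, a, b):
    """Number of distinct real roots in (a, b] by Sturm's theorem."""
    chain = sturm_chain(coeffs)
    return sign_variations(chain, a) - sign_variations(chain, b)

# p(x) = x^3 - 3x + 1 has three real roots, all in (-2, 2].
print(count_real_roots([1, 0, -3, 1], -2, 2))   # 3
```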


Finding roots in a restricted domain

Several fast tests exist that tell whether a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly. All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable. The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm, uses a winding number computation for rectangular regions in the complex plane. The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting.


Finding complex roots in pairs

If the given polynomial has only real coefficients, one may wish to avoid computations with complex numbers. To that end, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method. The real variant of the Jenkins–Traub algorithm is an improvement of this method.


Polynomials with rational coefficients

For polynomials whose coefficients are exactly given as integers or rational numbers, there is an efficient method to factorize them into factors that have only simple roots and whose coefficients are also given exactly. This method, called ''square-free factorization'', is based on the fact that the multiple roots of a polynomial are the roots of the greatest common divisor of the polynomial and its derivative. The square-free factorization of a polynomial ''p'' is a factorization p=p_1p_2^2\cdots p_k^k where each p_i is either 1 or a polynomial without multiple roots, and two different p_i do not have any common root. An efficient method to compute this factorization is Yun's algorithm.
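A minimal sketch using SymPy's exact arithmetic; sqf_list computes a square-free factorization of this form, and the gcd with the derivative exposes the multiple roots:

```python
from sympy import symbols, sqf_list, gcd, diff, expand

x = symbols('x')
p = expand((x - 1) * (x - 2)**2 * (x + 3)**3)

# Multiple roots of p are exactly the roots of gcd(p, p').
print(gcd(p, diff(p, x)).factor())     # (x - 2)*(x + 3)**2, up to ordering

# Square-free factorization p = p_1 * p_2^2 * p_3^3 * ...
print(sqf_list(p))                     # (1, [(x - 1, 1), (x - 2, 2), (x + 3, 3)])
```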


See also

* Rational root theorem

