Factorization of polynomials

In mathematics and computer algebra, factorization of polynomials or polynomial factorization expresses a polynomial with coefficients in a given field or in the integers as the product of irreducible factors with coefficients in the same domain. Polynomial factorization is one of the fundamental components of computer algebra systems.

The first polynomial factorization algorithm was published by Theodor von Schubert in 1793. Leopold Kronecker rediscovered Schubert's algorithm in 1882 and extended it to multivariate polynomials and coefficients in an algebraic extension. But most of the knowledge on this topic is not older than circa 1965 and the first computer algebra systems:
When the long-known finite step algorithms were first put on computers, they turned out to be highly inefficient. The fact that almost any uni- or multivariate polynomial of degree up to 100 and with coefficients of a moderate size (up to 100 bits) can be factored by modern algorithms in a few minutes of computer time indicates how successfully this problem has been attacked during the past fifteen years. (Erich Kaltofen, 1982)
Modern algorithms and computers can quickly factor univariate polynomials of degree more than 1000 having coefficients with thousands of digits. For this purpose, even for factoring over the rational numbers and number fields, a fundamental step is a factorization of a polynomial over a finite field.


Formulation of the question

Polynomial rings over the integers or over a field are unique factorization domains. This means that every element of these rings is a product of a constant and a product of irreducible polynomials (those that are not the product of two non-constant polynomials). Moreover, this decomposition is unique up to multiplication of the factors by invertible constants.

Factorization depends on the base field. For example, the fundamental theorem of algebra, which states that every polynomial with complex coefficients has complex roots, implies that a polynomial with integer coefficients can be factored (with root-finding algorithms) into linear factors over the complex field C. Similarly, over the field of reals, the irreducible factors have degree at most two, while there are polynomials of any degree that are irreducible over the field of rationals Q (a computational illustration is given at the end of this section).

The question of polynomial factorization makes sense only for coefficients in a ''computable field'' whose every element may be represented in a computer and for which there are algorithms for the arithmetic operations. However, this is not a sufficient condition: Fröhlich and Shepherdson give examples of such fields for which no factorization algorithm can exist.

The fields of coefficients for which factorization algorithms are known include prime fields (that is, the field of the rational numbers and the fields of the integers modulo a prime number) and their finitely generated field extensions. Integer coefficients are also tractable. Kronecker's classical method is interesting only from a historical point of view; modern algorithms proceed by a succession of:
* Square-free factorization
* Factorization over finite fields
and reductions:
* From the multivariate case to the univariate case.
* From coefficients in a purely transcendental extension to the multivariate case over the ground field (see below).
* From coefficients in an algebraic extension to coefficients in the ground field (see below).
* From rational coefficients to integer coefficients (see below).
* From integer coefficients to coefficients in a prime field with ''p'' elements, for a well chosen ''p'' (see below).
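As an illustration of the dependence on the coefficient field, the following sketch uses SymPy's factor function with its extension and modulus options; the polynomial x^2 + 1 is chosen only for this example.

```python
from sympy import symbols, factor, I

x = symbols('x')
p = x**2 + 1

print(factor(p))               # x**2 + 1          -- irreducible over Q (and over R)
print(factor(p, extension=I))  # (x - I)*(x + I)   -- splits over Q(i), hence over C
print(factor(p, modulus=5))    # (x - 2)*(x + 2)   -- splits over GF(5)
print(factor(p, modulus=2))    # (x + 1)**2        -- a square over GF(2)
```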


Primitive part–content factorization

In this section, we show that factoring over Q (the rational numbers) and over Z (the integers) is essentially the same problem.

The ''content'' of a polynomial ''p'' ∈ Z[''X''], denoted "cont(''p'')", is, up to its sign, the greatest common divisor of its coefficients. The ''primitive part'' of ''p'' is primpart(''p'') = ''p''/cont(''p''), which is a primitive polynomial with integer coefficients. This defines a factorization of ''p'' into the product of an integer and a primitive polynomial. This factorization is unique up to the sign of the content. It is a usual convention to choose the sign of the content such that the leading coefficient of the primitive part is positive. For example,
: -10x^2 + 5x + 5 = (-5)(2x^2 - x - 1)
is a factorization into content and primitive part.

Every polynomial ''q'' with rational coefficients may be written
:q = \frac{p}{c},
where ''p'' ∈ Z[''X''] and ''c'' ∈ Z: it suffices to take for ''c'' a multiple of all denominators of the coefficients of ''q'' (for example their product) and ''p'' = ''cq''. The ''content'' of ''q'' is defined as
:\operatorname{cont}(q) = \frac{\operatorname{cont}(p)}{c},
and the ''primitive part'' of ''q'' is that of ''p''. As for the polynomials with integer coefficients, this defines a factorization into a rational number and a primitive polynomial with integer coefficients. This factorization is also unique up to the choice of a sign. For example,
: \frac{x^5}{3} + \frac{7x^2}{2} + 2x + 1 = \frac{2x^5 + 21x^2 + 12x + 6}{6}
is a factorization into content and primitive part.
Gauss proved that the product of two primitive polynomials is also primitive (Gauss's lemma). This implies that a primitive polynomial is irreducible over the rationals if and only if it is irreducible over the integers. This implies also that the factorization over the rationals of a polynomial with rational coefficients is the same as the factorization over the integers of its primitive part. Similarly, the factorization over the integers of a polynomial with integer coefficients is the product of the factorization of its primitive part by the factorization of its content.

In other words, an integer GCD computation reduces the factorization of a polynomial over the rationals to the factorization of a primitive polynomial with integer coefficients, and the factorization over the integers to the factorization of an integer and a primitive polynomial.

Everything that precedes remains true if Z is replaced by a polynomial ring over a field ''F'' and Q is replaced by a field of rational functions over ''F'' in the same variables, with the only difference that "up to a sign" must be replaced by "up to the multiplication by an invertible constant in ''F''". This reduces the factorization over a purely transcendental field extension of ''F'' to the factorization of multivariate polynomials over ''F''.
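Returning to integer coefficients, here is a minimal Python sketch of the content/primitive-part split; the helper name content_primitive and the coefficient-list representation are just for this illustration.

```python
from functools import reduce
from math import gcd

def content_primitive(coeffs):
    """Split a nonzero integer polynomial, given as a list of coefficients with
    the leading coefficient first, into its content and its primitive part.
    The sign of the content is chosen so that the primitive part has a
    positive leading coefficient."""
    c = reduce(gcd, (abs(a) for a in coeffs))
    if coeffs[0] < 0:
        c = -c
    return c, [a // c for a in coeffs]

# -10x^2 + 5x + 5 = (-5)(2x^2 - x - 1)
print(content_primitive([-10, 5, 5]))   # (-5, [2, -1, -1])
```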


Square-free factorization

If two or more factors of a polynomial are identical, then the polynomial is a multiple of the square of this factor. The multiple factor is also a factor of the polynomial's derivative (with respect to any of the variables, if several).

For univariate polynomials, multiple factors are equivalent to multiple roots (over a suitable extension field). For univariate polynomials over the rationals (or more generally over a field of characteristic zero), Yun's algorithm exploits this to efficiently factorize the polynomial into square-free factors, that is, factors that are not a multiple of a square, performing a sequence of GCD computations starting with gcd(''f''(''x''), ''f''′(''x'')). To factorize the initial polynomial, it suffices to factorize each square-free factor. Square-free factorization is therefore the first step in most polynomial factorization algorithms. Yun's algorithm extends this to the multivariate case by considering a multivariate polynomial as a univariate polynomial over a polynomial ring.

In the case of a polynomial over a finite field, Yun's algorithm applies only if the degree is smaller than the characteristic, because, otherwise, the derivative of a non-zero polynomial may be zero (over the field with ''p'' elements, the derivative of a polynomial in ''x''^''p'' is always zero). Nevertheless, a succession of GCD computations, starting from the polynomial and its derivative, allows one to compute the square-free decomposition; see Polynomial factorization over finite fields#Square-free factorization.
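A bare-bones sketch of Yun's algorithm in Python using SymPy (Poly, gcd, and div are SymPy routines); the function name yun_squarefree and the normalization choices are only for this example, not a production routine.

```python
from sympy import symbols, Poly, gcd, div

x = symbols('x')

def yun_squarefree(f):
    """Yun's algorithm over a field of characteristic zero: return
    [a1, a2, ...] with f = a1 * a2**2 * a3**3 * ..., each ai square-free
    (constant normalization is ignored in this sketch)."""
    f = Poly(f, x, domain='QQ')
    u = gcd(f, f.diff(x))
    v, _ = div(f, u)               # product of the distinct irreducible factors
    w, _ = div(f.diff(x), u)
    parts = []
    while v.degree() > 0:
        z = w - v.diff(x)
        a = gcd(v, z)
        parts.append(a.as_expr())
        v, _ = div(v, a)
        w, _ = div(z, a)
    return parts

# (x - 1) * (x + 2)**2  ->  [x - 1, x + 2]
print(yun_squarefree((x - 1) * (x + 2)**2))
```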


Classical methods

This section describes textbook methods that can be convenient when computing by hand. These methods are not used for computer computations because they use integer factorization, which is currently slower than polynomial factorization.

The two methods that follow start from a univariate polynomial with integer coefficients for finding factors that are also polynomials with integer coefficients.


Obtaining linear factors

All linear factors with rational coefficients can be found using the rational root test. If the polynomial to be factored is a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0, then all possible linear factors are of the form b_1x - b_0, where b_1 is an integer factor of a_n and b_0 is an integer factor of a_0. All possible combinations of integer factors can be tested for validity, and each valid one can be factored out using polynomial long division. If the original polynomial is the product of factors at least two of which are of degree 2 or higher, this technique only provides a partial factorization; otherwise the factorization is complete. In particular, if there is exactly one non-linear factor, it will be the polynomial left after all linear factors have been factorized out. In the case of a cubic polynomial, if the cubic is factorizable at all, the rational root test gives a complete factorization, either into a linear factor and an irreducible quadratic factor, or into three linear factors.
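A minimal plain-Python sketch of the rational root test; the function name rational_roots and the coefficient-list convention are only for this example, and the constant term is assumed nonzero.

```python
from fractions import Fraction

def rational_roots(coeffs):
    """Rational root test: coeffs = [a_n, ..., a_1, a_0] with integer entries
    and a_0 != 0.  Returns the rational roots p/q with q | a_n and p | a_0."""
    def divisors(n):
        n = abs(n)
        return [d for d in range(1, n + 1) if n % d == 0]

    def value(r):
        acc = 0
        for a in coeffs:            # Horner evaluation, leading coefficient first
            acc = acc * r + a
        return acc

    roots = set()
    for q in divisors(coeffs[0]):
        for p in divisors(coeffs[-1]):
            for s in (1, -1):
                r = Fraction(s * p, q)
                if value(r) == 0:
                    roots.add(r)
    return sorted(roots)

# 2x^3 + x^2 - 7x - 6 = (x + 1)(x - 2)(2x + 3); roots -1, 2, -3/2
print([str(r) for r in rational_roots([2, 1, -7, -6])])   # ['-3/2', '-1', '2']
```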


Kronecker's method

Kronecker's method aims to factor univariate polynomials with integer coefficients into polynomials with integer coefficients. The method uses the fact that evaluating integer polynomials at integer values must produce integers. That is, if f(x) is a polynomial with integer coefficients, then f(a) is an integer as soon as a is an integer. There are only a finite number of possible integer values for a factor of f(a). So, if g(x) is a factor of f(x), the value of g(a) must be one of the divisors of f(a).

If one searches for all factors of a given degree d, one can consider d+1 values a_0, \ldots, a_d for a, which give a finite number of possibilities for the tuple (f(a_0), \ldots, f(a_d)). Each f(a_i) has a finite number of divisors b_{i,1}, \ldots, b_{i,k_i}, and each (d+1)-tuple whose i-th entry is a divisor of f(a_i), that is, a tuple of the form (b_{0,j_0}, \ldots, b_{d,j_d}), produces a unique polynomial of degree at most d, which can be computed by polynomial interpolation. Each of these polynomials can be tested for being a factor by polynomial division. Since there are finitely many a_i and each f(a_i) has finitely many divisors, there are finitely many such tuples. So, an exhaustive search allows finding all factors of degree at most d.

For example, consider
:f(x) = x^5 + x^4 + x^2 + x + 2.
If this polynomial factors over Z, then at least one of its factors p(x) must be of degree two or less, so p(x) is uniquely determined by three values. Thus, we compute three values f(0) = 2, f(1) = 6 and f(-1) = 2. If one of these values were 0, we would have a linear factor. Since the values are nonzero, we can list the possible factorizations for each. Now, 2 can only factor as
:1×2, 2×1, (−1)×(−2), or (−2)×(−1).
Therefore, if a second degree integer polynomial factor exists, it must take one of the values
:''p''(0) = 1, 2, −1, or −2
and likewise for ''p''(−1). There are eight ways to factor 6 (four each for 1×6 and 2×3), making a total of 4×4×8 = 128 possible triples (''p''(0), ''p''(1), ''p''(−1)), of which half can be discarded as the negatives of the other half. Thus, we must check 64 explicit integer polynomials p(x) = ax^2 + bx + c as possible factors of f(x). Testing them exhaustively reveals that
:p(x) = x^2 + x + 1,
constructed from (''p''(0), ''p''(1), ''p''(−1)) = (1, 3, 1), divides f(x). Dividing ''f''(''x'') by ''p''(''x'') gives the other factor q(x) = x^3 - x + 2, so that f(x) = p(x)q(x). Now one can test recursively to find factors of ''p''(''x'') and ''q''(''x''), in this case using the rational root test. It turns out they are both irreducible, so the irreducible factorization of ''f''(''x'') is:
:f(x) = p(x)q(x) = (x^2 + x + 1)(x^3 - x + 2).
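The exhaustive search for the degree-2 factor in this example can be reproduced with the following SymPy sketch (interpolate, Poly, and div are SymPy routines); the lists of signed divisors are hard-coded to mirror the worked example above.

```python
from itertools import product
from sympy import symbols, Poly, div, interpolate

x = symbols('x')
f = Poly(x**5 + x**4 + x**2 + x + 2, x, domain='QQ')

# Candidate values for a degree-2 factor at the points 0, 1, -1:
# the signed divisors of f(0) = 2, f(1) = 6, f(-1) = 2.
points = [0, 1, -1]
divisor_lists = [[1, 2, -1, -2],
                 [1, 2, 3, 6, -1, -2, -3, -6],
                 [1, 2, -1, -2]]

for values in product(*divisor_lists):
    # The unique polynomial of degree <= 2 taking these values at 0, 1, -1.
    g = Poly(interpolate(list(zip(points, values)), x), x, domain='QQ')
    if g.degree() < 1 or not all(c.is_integer for c in g.all_coeffs()):
        continue
    q, r = div(f, g)
    if r.is_zero and all(c.is_integer for c in q.all_coeffs()):
        # Prints x**2 + x + 1 with cofactor x**3 - x + 2 (and their negatives).
        print(g.as_expr(), "times", q.as_expr())
```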


Modern methods


Factoring over finite fields


Factoring univariate polynomials over the integers

If f(x) is a univariate polynomial over the integers, assumed to be content-free and square-free, one starts by computing a bound B such that any factor g(x) has coefficients of absolute value bounded by B. This way, if m is an integer larger than 2B, and if g(x) is known modulo m, then g(x) can be reconstructed from its image mod m.

The Zassenhaus algorithm proceeds as follows. First, choose a prime number p such that the image of f(x) \bmod p remains square-free, and of the same degree as f(x). A random choice will almost always satisfy these constraints, since only a finite number of primes fail them, namely the prime divisors of the product of the discriminant and the leading coefficient of the polynomial. Then factor f(x) \bmod p. This produces integer polynomials f_1(x), \ldots, f_r(x) whose product matches f(x) \bmod p. Next, apply Hensel lifting; this updates the f_i(x) in such a way that their product matches f(x) \bmod p^a, where a is large enough that p^a exceeds 2B: thus each f_i(x) corresponds to a well-defined integer polynomial. Modulo p^a, the polynomial f(x) has 2^r factors (up to units): the products of all subsets of \{f_1(x), \ldots, f_r(x)\} \bmod p^a. These factors modulo p^a need not correspond to "true" factors of f(x) in \mathbb{Z}[x], but we can easily test them by division in \mathbb{Z}[x]. This way, all irreducible true factors can be found by checking at most 2^r cases, reduced to 2^{r-1} cases by skipping complements. If f(x) is reducible, the number of cases is reduced further by removing those f_i(x) that appear in an already found true factor. The Zassenhaus algorithm processes each case (each subset) quickly; however, in the worst case, it considers an exponential number of cases.

The first polynomial time algorithm for factoring rational polynomials was discovered by Lenstra, Lenstra and Lovász and is an application of the Lenstra–Lenstra–Lovász lattice basis reduction (LLL) algorithm. A simplified version of the LLL factorization algorithm is as follows: calculate a complex (or ''p''-adic) root α of the polynomial f(x) to high precision, then use the Lenstra–Lenstra–Lovász lattice basis reduction algorithm to find an approximate linear relation between 1, α, α^2, α^3, ... with integer coefficients, which might be an exact linear relation and a polynomial factor of f(x). One can determine a bound for the precision that guarantees that this method produces either a factor or an irreducibility proof. Although this method finishes in polynomial time, it is not used in practice because the lattice has high dimension and huge entries, which makes the computation slow.

The exponential complexity in the Zassenhaus algorithm comes from a combinatorial problem: how to select the right subsets of f_1(x), \ldots, f_r(x). State-of-the-art factoring implementations work in a manner similar to Zassenhaus, except that the combinatorial problem is translated to a lattice problem that is then solved by LLL (M. van Hoeij, ''Factoring polynomials and the knapsack problem'', Journal of Number Theory, 95, 167–189, 2002). In this approach, LLL is not used to compute coefficients of factors, but rather to compute vectors with r entries in \{0, 1\} that encode the subsets of f_1(x), \ldots, f_r(x) corresponding to the irreducible true factors.
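The origin of the combinatorial problem can be seen in a small SymPy experiment (factor_list with the modulus option is a SymPy routine): a polynomial that is irreducible over the integers may split into several factors modulo a prime, so the true factors must be searched among products of subsets of the modular factors.

```python
from sympy import symbols, factor_list

x = symbols('x')
f = x**4 + 1

print(factor_list(f))             # (1, [(x**4 + 1, 1)])  -- irreducible over Z
print(factor_list(f, modulus=5))  # two quadratic factors mod 5, e.g. (x**2 - 2)*(x**2 + 2)
```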


Factoring over algebraic extensions (Trager's method)

We can factor a polynomial p(x) \in K[x], where the field K is a finite extension of \mathbb{Q}. First, using square-free factorization, we may suppose that the polynomial is square-free. Next we define the quotient ring L = K[x]/(p(x)) of degree n = [L : \mathbb{Q}] = \deg p(x)\,[K : \mathbb{Q}]; this is not a field unless p(x) is irreducible, but it is a reduced ring since p(x) is square-free. Indeed, if
:p(x) = \prod_{i=1}^m p_i(x)
is the desired factorization of ''p''(''x''), the ring decomposes uniquely into fields as:
:L = K[x]/(p(x)) \cong \prod_{i=1}^m K[x]/(p_i(x)).
We will find this decomposition without knowing the factorization. First, we write ''L'' explicitly as an algebra over \mathbb{Q}: we pick a random element \alpha \in L, which generates L over \mathbb{Q} with high probability by the primitive element theorem. If this is the case, we can compute the minimal polynomial q(y) \in \mathbb{Q}[y] of \alpha over \mathbb{Q}, by finding a \mathbb{Q}-linear relation among 1, α, ..., α^n. Using a factoring algorithm for rational polynomials, we factor q(y) into irreducibles in \mathbb{Q}[y]:
:q(y) = \prod_{i=1}^m q_i(y).
Thus we have:
:L \cong \mathbb{Q}[y]/(q(y)) \cong \prod_{i=1}^m \mathbb{Q}[y]/(q_i(y)),
where \alpha corresponds to (y, y, \ldots, y). This must be isomorphic to the previous decomposition of L, so in particular the number of irreducible factors of q is also m. The generators of ''L'' are ''x'' along with the generators of K over \mathbb{Q}; writing these as polynomials in \alpha, we can determine the embeddings of x and K into each component \mathbb{Q}[y]/(q_i(y)) = K[x]/(p_i(x)). By finding the minimal polynomial of x in \mathbb{Q}[y]/(q_i(y)), we compute p_i(x), and thus factor p(x) over K.
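In practice, a computer algebra system performs this kind of factorization directly; for instance, SymPy's factor accepts an extension option (whether it uses Trager's method internally is not asserted here).

```python
from sympy import symbols, factor, sqrt

x = symbols('x')
p = x**4 - 4

print(factor(p))                     # (x**2 - 2)*(x**2 + 2)                   -- over Q
print(factor(p, extension=sqrt(2)))  # (x - sqrt(2))*(x + sqrt(2))*(x**2 + 2)  -- over Q(sqrt(2))
```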


Numerical factorization

"Numerical factorization" refers commonly to the factorization of polynomials with real or complex coefficients, whose coefficients are only approximately known, generally because they are represented as floating point numbers. For univariate polynomials with complex coefficients, factorization can easily be reduced to numerical computation of polynomial roots and multiplicities. In the multivariate case, a random infinitesimal perturbation of the coefficients produces with probability one an
irreducible polynomial In mathematics, an irreducible polynomial is, roughly speaking, a polynomial that cannot be factored into the product of two non-constant polynomials. The property of irreducibility depends on the nature of the coefficients that are accepted f ...
, even when starting from a polynomial with many factors. So, the very meaning of numerical factorization needs to be clarified precisely. Let p be a polynomial with complex coefficients with an irreducible factorization :p = \alpha p_1^ \cdots p_k^ where \alpha\in C and the factors p_1,\ldots,p_k are
irreducible polynomial In mathematics, an irreducible polynomial is, roughly speaking, a polynomial that cannot be factored into the product of two non-constant polynomials. The property of irreducibility depends on the nature of the coefficients that are accepted f ...
s with complex coefficients. Assume that p is approximated through a polynomial \tilde whose coefficients are close to those of p. The exact factorization of \tilde is pointless, since it is generally irreducible. There are several possible definitions of what can be called a ''numerical factorization'' of \tilde. If k and m_i's are known, an approximate factorization consists of finding a polynomial close to \tilde that factors as above. If one does not know the factorization scheme, identifying m_1,\ldots,m_k becomes necessary. For example, the number of irreducible factors of a polynomial is the nullity of its Ruppert matrix.

Thus the multiplicities m_1, \ldots, m_k can be identified by square-free factorization via numerical GCD computation and rank-revealing on Ruppert matrices. Several algorithms have been developed and implemented for numerical factorization, which remains an ongoing subject of research.
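For the univariate case mentioned above, the reduction to numerical root finding can be sketched with NumPy (np.roots is a standard routine); grouping nearby roots into multiplicities is left implicit in this sketch.

```python
import numpy as np

# x^3 - 4x^2 + 5x - 2 = (x - 1)^2 (x - 2): coefficients, highest degree first
coeffs = [1.0, -4.0, 5.0, -2.0]
roots = np.roots(coeffs)

# The approximate roots cluster near 1 (with multiplicity 2) and near 2.
print(np.sort(roots.real))   # approximately [1, 1, 2]
```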


See also

* Factorization, for elementary heuristic methods and explicit formulas
* Swinnerton-Dyer polynomials, a family of polynomials on which the Zassenhaus method exhibits its worst-case running time


Bibliography

* Van der Waerden, ''Algebra'' (1970), trans. Blum and Schulenberger, Frederick Ungar.

