A system of polynomial equations (sometimes simply a polynomial system) is a set of
simultaneous equations f_1 = 0, ..., f_h = 0, where the f_i are
polynomials in several variables, say x_1, ..., x_n, over some
field k.
A ''solution'' of a polynomial system is a set of values for the x_i which belong to some
algebraically closed field extension K of k, and make all equations true. When k is the field of
rational numbers, K is generally assumed to be the field of
complex numbers, because each solution belongs to a
field extension of k, which is isomorphic to a subfield of the complex numbers.
This article is about the methods for solving, that is, finding all solutions or describing them. As these methods are designed for being implemented in a computer, emphasis is given to fields in which computation (including equality testing) is easy and efficient, that is, the field of
rational numbers and
finite fields.
Searching for solutions that belong to a specific set is a problem which is generally much more difficult, and is outside the scope of this article, except for the case of the solutions in a given finite field. For the case of solutions of which all components are integers or rational numbers, see
Diophantine equation
.
Definition

A simple example of a system of polynomial equations is
:\begin{cases} x^2 + y^2 - 5 = 0 \\ xy - 2 = 0 \end{cases}
Its solutions are the four pairs (x, y) = (1, 2), (2, 1), (-1, -2), (-2, -1). These solutions can easily be checked by substitution, but more work is needed for proving that there are no other solutions.
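The following sketch uses the SymPy library (an illustrative tool choice, not part of the subject matter) to recover the four solutions of the system above.
```python
# Illustrative check with SymPy: solving x^2 + y^2 - 5 = 0, x*y - 2 = 0.
from sympy import symbols, solve

x, y = symbols('x y')
solutions = solve([x**2 + y**2 - 5, x*y - 2], [x, y])
print(solutions)   # the four pairs (1, 2), (2, 1), (-1, -2), (-2, -1), in some order
```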
The subject of this article is the study of generalizations of such examples, and the description of the methods that are used for computing the solutions.
A ''system of polynomial equations'', or ''polynomial system'', is a collection of equations
:\begin{cases} f_1(x_1, \ldots, x_n) = 0 \\ \quad\vdots \\ f_h(x_1, \ldots, x_n) = 0, \end{cases}
where each f_i is a
polynomial in the
indeterminates x_1, ..., x_n, with integer coefficients, or coefficients in some fixed
field, often the field of
rational numbers or a
finite field.
Other fields of coefficients, such as the
real numbers, are less often used, as their elements cannot be represented in a computer (only approximations of real numbers can be used in computations, and these approximations are always rational numbers).
A ''solution'' of a polynomial system is a
tuple of values of x_1, ..., x_n that satisfies all equations of the polynomial system. The solutions are sought in the
complex numbers, or more generally in an
algebraically closed field
containing the coefficients. In particular, in
characteristic zero, all
complex solutions are sought. Searching for the
real or
rational solutions is a much more difficult problem that is not considered in this article.
The set of solutions is not always finite; for example, the solutions of the system
: x(x - 1) = 0, \quad x(y - 1) = 0
are a point, (x, y) = (1, 1), and a line, x = 0. Even when the solution set is finite, there is, in general, no
closed-form expression of the solutions (in the case of a single equation, this is
the Abel–Ruffini theorem).
The
Barth surface, shown in the figure, is the geometric representation of the solutions of a polynomial system reduced to a single equation of degree 6 in 3 variables. Some of its numerous
singular points are visible on the image. They are the solutions of a system of 4 equations of degree 5 in 3 variables. Such an
overdetermined system has no solution in general (that is, if the coefficients are not specific). If it has a finite number of solutions, this number is at most 5^3 = 125, by
Bézout's theorem. However, it has been shown that, for the case of the singular points of a surface of degree 6, the maximum number of solutions is 65, and is reached by the Barth surface.
Basic properties and definitions
A system is
overdetermined if the number of equations is higher than the number of variables. A system is
inconsistent if it has no
complex solution (or, if the coefficients are not complex numbers, no solution in an
algebraically closed field
containing the coefficients). By
Hilbert's Nullstellensatz this means that 1 is a
linear combination (with polynomials as coefficients) of the left-hand sides of the equations. Most but not all overdetermined systems, when constructed with random coefficients, are inconsistent. For example, the system x^3 - 1 = 0, x^2 - 1 = 0 is overdetermined (having two equations but only one unknown), but it is not inconsistent since it has the solution x = 1.
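As a quick illustration of the example above, one can check with SymPy (an illustrative tool choice) that this overdetermined system is consistent.
```python
# Illustrative check with SymPy: the overdetermined system x^3 - 1 = 0,
# x^2 - 1 = 0 is consistent; its only common solution is x = 1.
from sympy import symbols, solve

x = symbols('x')
print(solve([x**3 - 1, x**2 - 1], x))   # the single common solution x = 1
```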
A system is
underdetermined if the number of equations is lower than the number of variables. An underdetermined system is either inconsistent or has infinitely many complex solutions (or solutions in an
algebraically closed field
that contains the coefficients of the equations). This is a non-trivial result of
commutative algebra that involves, in particular,
Hilbert's Nullstellensatz and
Krull's principal ideal theorem.
A system is
zero-dimensional if it has a finite number of complex solutions (or solutions in an algebraically closed field). This terminology comes from the fact that the
algebraic variety of the solutions has
dimension zero. A system with infinitely many solutions is said to be ''positive-dimensional''.
A zero-dimensional system with as many equations as variables is sometimes said to be ''well-behaved''.
Bézout's theorem asserts that a well-behaved system whose equations have degrees d_1, ..., d_n has at most d_1 d_2 \cdots d_n solutions. This bound is sharp. If all the degrees are equal to d, this bound becomes d^n and is exponential in the number of variables. (The
fundamental theorem of algebra is the special case n = 1.)
This exponential behavior makes solving polynomial systems difficult and explains why there are few solvers that are able to automatically solve systems with Bézout's bound higher than, say, 25 (three equations of degree 3 or five equations of degree 2 are beyond this bound).
What is solving?
The first thing to do for solving a polynomial system is to decide whether it is inconsistent, zero-dimensional or positive dimensional. This may be done by the computation of a
Gröbner basis of the left-hand sides of the equations. The system is ''inconsistent'' if this Gröbner basis is reduced to 1. The system is ''zero-dimensional'' if, for every variable there is a
leading monomial
of some element of the Gröbner basis which is a pure power of this variable. For this test, the best
monomial order
(that is, the one that generally leads to the fastest computation) is usually the
graded reverse lexicographic one (grevlex).
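The following sketch illustrates this test with SymPy's groebner function on the small system of the introduction; the library and the example are illustrative choices, not part of the method itself.
```python
# Illustrative sketch: deciding consistency and zero-dimensionality from a
# Gröbner basis computed for the grevlex order.
from sympy import groebner, symbols

x, y = symbols('x y')
G = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='grevlex')

print(G.exprs)                 # the reduced basis; it would be [1] for an inconsistent system
print(G.is_zero_dimensional)   # True: some leading monomial is a pure power of each variable
```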
If the system is ''positive-dimensional'', it has infinitely many solutions. It is thus not possible to enumerate them. It follows that, in this case, solving may only mean "finding a description of the solutions from which the relevant properties of the solutions are easy to extract". There is no commonly accepted such description. In fact there are many different "relevant properties", which involve almost every subfield of
algebraic geometry
.
A natural example of such a question concerning positive-dimensional systems is the following: ''decide if a polynomial system over the
rational numbers has a finite number of real solutions and compute them''. A generalization of this question is ''find at least one solution in each
connected component of the set of real solutions of a polynomial system''. The classical algorithm for solving these questions is
cylindrical algebraic decomposition, which has a
doubly exponential computational complexity
and therefore cannot be used in practice, except for very small examples.
For zero-dimensional systems, solving consists of computing all the solutions. There are two different ways of outputting the solutions. The most common way is possible only for real or complex solutions, and consists of outputting numeric approximations of the solutions. Such a solution is called ''numeric''. A solution is ''certified'' if it is provided with a bound on the error of the approximations, and if this bound separates the different solutions.
The other way of representing the solutions is said to be ''algebraic''. It uses the fact that, for a zero-dimensional system, the solutions belong to the
algebraic closure of the field ''k'' of the coefficients of the system. There are several ways to represent the solution in an algebraic closure, which are discussed below. All of them allow one to compute a numerical approximation of the solutions by solving one or several univariate equations. For this computation, it is preferable to use a representation that involves solving only one univariate polynomial per solution, because computing the roots of a polynomial which has approximate coefficients is a highly unstable problem.
Extensions
Trigonometric equations
A trigonometric equation is an equation g(x) = 0, where g is a
trigonometric polynomial. Such an equation may be converted into a polynomial system by expanding the sines and cosines in it (using
sum and difference formulas), replacing \sin x and \cos x by two new variables s and c, and adding the new equation s^2 + c^2 = 1.
For example, because of the identity
:\cos^2 x + \sin^2 x = 1,
solving the equation
:\sin^3 x + \cos^3 x = 1
is equivalent to solving the polynomial system
:\begin{cases} s^3 + c^3 = 1 \\ s^2 + c^2 = 1. \end{cases}
For each real solution (c_0, s_0) of this system, there is a unique solution x of the equation such that 0 \le x < 2\pi.
In the case of this simple example, it may be unclear whether the system is, or is not, easier to solve than the equation. On more complicated examples, one lacks systematic methods for solving the equation directly, while software is available for automatically solving the corresponding system.
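A minimal sketch of this conversion with SymPy, using the illustrative equation above, is the following; only the real solutions of the polynomial system correspond to values of x.
```python
# Illustrative sketch: the polynomial system obtained from
# sin(x)^3 + cos(x)^3 = 1 by setting s = sin(x), c = cos(x).
from sympy import symbols, solve

s, c = symbols('s c')
system = [s**3 + c**3 - 1, s**2 + c**2 - 1]

# Among the returned solutions, the real ones are (s, c) = (0, 1) and (1, 0),
# which correspond to x = 0 and x = pi/2 in the interval [0, 2*pi).
print(solve(system, [s, c]))
```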
Solutions in a finite field
When solving a system over a finite field k with q elements, one is primarily interested in the solutions in k. As the elements of k are exactly the solutions of the equation x^q - x = 0, it suffices, for restricting the solutions to k, to add the equation x_i^q - x_i = 0 for each variable x_i.
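As a sketch (the system and the prime are illustrative choices), one can add the field equations and compute a Gröbner basis over GF(p) with SymPy, then check the result by exhaustive search.
```python
# Illustrative sketch: restricting the solutions of a system to GF(7) by
# adding the field equations x^7 - x = 0 and y^7 - y = 0.
from itertools import product
from sympy import groebner, symbols

x, y = symbols('x y')
p = 7
polys = [x**2 - y, y**2 - x]              # hypothetical system over GF(7)
polys += [x**p - x, y**p - y]             # field equations: solutions lie in GF(7)

G = groebner(polys, x, y, order='lex', modulus=p)
print(G.exprs)                            # a lexicographic basis over GF(7)

# Exhaustive check: the GF(7) solutions are (0, 0), (1, 1), (2, 4), (4, 2).
print([(a, b) for a, b in product(range(p), repeat=2)
       if (a*a - b) % p == 0 and (b*b - a) % p == 0])
```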
Coefficients in a number field or in a finite field with non-prime order
The elements of an
algebraic number field
are usually represented as polynomials in a generator of the field which satisfies some univariate polynomial equation. To work with a polynomial system whose coefficients belong to a number field, it suffices to consider this generator as a new variable and to add the equation of the generator to the equations of the system. Thus solving a polynomial system over a number field is reduced to solving another system over the rational numbers.
For example, if a system contains \sqrt{2}, a system over the rational numbers is obtained by adding the equation r^2 - 2 = 0 and replacing \sqrt{2} by r in the other equations.
In the case of a finite field, the same transformation always allows one to suppose that the field has a prime order.
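A small sketch of this substitution with SymPy; the system containing \sqrt{2} is an illustrative choice, not taken from the article.
```python
# Illustrative sketch: replacing sqrt(2) by a new variable r and adding the
# defining equation r^2 - 2 = 0 of the generator.
from sympy import symbols, solve, sqrt

x, y, r = symbols('x y r')

original     = [x**2 - sqrt(2)*y, y**2 - 1]      # system over Q(sqrt(2))
rationalized = [x**2 - r*y, y**2 - 1, r**2 - 2]  # system over Q

# The solutions of the rationalized system with r = sqrt(2) are exactly the
# solutions of the original system; those with r = -sqrt(2) come from the
# conjugate embedding of the number field.
print(solve(rationalized, [x, y, r]))
```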
Algebraic representation of the solutions
Regular chains
The usual way of representing the solutions is through zero-dimensional regular chains. Such a chain consists of a sequence of polynomials f_1(x_1), f_2(x_1, x_2), ..., f_n(x_1, ..., x_n) such that, for every i such that 1 \le i \le n:
* f_i is a polynomial in x_1, ..., x_i only, which has a positive degree d_i in x_i;
* the coefficient of x_i^{d_i} in f_i is a polynomial in x_1, ..., x_{i-1} which does not have any common zero with f_1, ..., f_{i-1}.
To such a
regular chain is associated a ''triangular system of equations''
:\begin{cases} f_1(x_1) = 0 \\ f_2(x_1, x_2) = 0 \\ \quad\vdots \\ f_n(x_1, x_2, \ldots, x_n) = 0. \end{cases}
The solutions of this system are obtained by solving the first univariate equation, substituting the solutions in the other equations, then solving the second equation which is now univariate, and so on. The definition of regular chains implies that the univariate equation obtained from f_i has degree d_i and thus that the system has d_1 \cdots d_n solutions, provided that there is no multiple root in this resolution process (
fundamental theorem of algebra).
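The back substitution described above can be sketched numerically as follows; the triangular system f_1 = x^2 - 2, f_2 = y^2 - xy - 1 is an illustrative regular chain, not taken from the article.
```python
# Illustrative numeric sketch: solving a triangular system by back
# substitution, one univariate polynomial at a time.
import numpy as np

# f1(x) = x^2 - 2,  f2(x, y) = y^2 - x*y - 1 (its leading coefficient in y
# is 1, so this is a regular chain).
solutions = []
for x0 in np.roots([1, 0, -2]):            # roots of f1
    for y0 in np.roots([1, -x0, -1]):      # roots of f2(x0, y) in y
        solutions.append((x0, y0))

print(solutions)    # d1 * d2 = 2 * 2 = 4 solutions, as predicted
```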
Every zero-dimensional system of polynomial equations is equivalent (i.e. has the same solutions) to a finite number of regular chains. Several regular chains may be needed, as is the case for the following system, which has three solutions.
:\begin{cases} x^2 - 1 = 0 \\ (x - 1)(y - 1) = 0 \\ y^2 - 1 = 0 \end{cases}
There are several algorithms for computing a
triangular decomposition of an arbitrary polynomial system (not necessarily zero-dimensional) into
regular chains (or
regular semi-algebraic systems).
There is also an algorithm which is specific to the zero-dimensional case and is competitive, in this case, with the direct algorithms. It consists in computing first the
Gröbner basis for the
graded reverse lexicographic order (grevlex), then deducing the lexicographical Gröbner basis by the FGLM algorithm and finally applying the Lextriangular algorithm.
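With SymPy one can at least compare the two Gröbner bases of the three-solution example above (SymPy recomputes the lexicographic basis directly; dedicated solvers would obtain it from the grevlex basis by FGLM and then apply Lextriangular).
```python
# Illustrative sketch: grevlex and lex Gröbner bases of the three-solution
# system above.  The lex basis contains a univariate polynomial in the last
# variable, which starts the triangular resolution.
from sympy import groebner, symbols

x, y = symbols('x y')
polys = [x**2 - 1, (x - 1)*(y - 1), y**2 - 1]

print(groebner(polys, x, y, order='grevlex').exprs)
print(groebner(polys, x, y, order='lex').exprs)
```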
This representation of the solutions is fully convenient for coefficients in a finite field. However, for rational coefficients, two aspects have to be taken care of:
* The output may involve huge integers which may make the computation and the use of the result problematic.
* To deduce the numeric values of the solutions from the output, one has to solve univariate polynomials with approximate coefficients, which is a highly unstable problem.
The first issue has been solved by Dahan and Schost: Among the sets of regular chains that represent a given set of solutions, there is a set for which the coefficients are explicitly bounded in terms of the size of the input system, with a nearly optimal bound. This set, called ''equiprojectable decomposition'', depends only on the choice of the coordinates. This allows the use of
modular methods for computing efficiently the equiprojectable decomposition.
The second issue is generally solved by outputting regular chains of a special form, sometimes called ''shape lemma'', for which all the degrees d_i but the first one are equal to 1. For getting such regular chains, one may have to add a further variable, called the ''separating variable'', which is given the index 0. The ''rational univariate representation'', described below, allows computing such a special regular chain, satisfying the Dahan–Schost bound, by starting from either a regular chain or a Gröbner basis.
Rational univariate representation
The ''rational univariate representation'' or RUR is a representation of the solutions of a zero-dimensional polynomial system over the rational numbers which has been introduced by F. Rouillier.
A RUR of a zero-dimensional system consists of a linear combination x_0 of the variables, called the ''separating variable'', and a system of equations
: h(x_0) = 0, \quad x_1 = \frac{g_1(x_0)}{g_0(x_0)}, \quad \ldots, \quad x_n = \frac{g_n(x_0)}{g_0(x_0)},
where h is a univariate polynomial in x_0 of degree D and g_0, ..., g_n are univariate polynomials in x_0 of degree less than D.
Given a zero-dimensional polynomial system over the rational numbers, the RUR has the following properties.
* All but a finite number of linear combinations of the variables are separating variables.
* When the separating variable is chosen, the RUR exists and is unique. In particular, h and the g_i are defined independently of any algorithm to compute them.
* The solutions of the system are in one-to-one correspondence with the roots of h, and the
multiplicity of each root of h equals the multiplicity of the corresponding solution.
* The solutions of the system are obtained by substituting the roots of h in the other equations.
* If h does not have any multiple root then g_0 is the
derivative of h.
For example, for the system in the previous section, every linear combination of the variables, except the multiples of x, of y and of x + y, is a separating variable. If one chooses t = x - y as a separating variable, then the RUR is
: t^3 - 4t = 0, \quad x = \frac{t^2 + 4t - 4}{3t^2 - 4}, \quad y = \frac{t^2 - 4t - 4}{3t^2 - 4}.
The RUR is uniquely defined for a given separating variable, independently of any algorithm, and it preserves the multiplicities of the roots. This is a notable difference with triangular decompositions (even the equiprojectable decomposition), which, in general, do not preserve multiplicities. The RUR shares with equiprojectable decomposition the property of producing an output with coefficients of relatively small size.
For zero-dimensional systems, the RUR allows retrieval of the numeric values of the solutions by solving a single univariate polynomial and substituting them in rational functions. This allows production of certified approximations of the solutions to any given precision.
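A numeric sketch of this process, using the RUR written above for the three-solution example; NumPy is an illustrative tool choice.
```python
# Illustrative sketch: numeric solutions from the RUR t^3 - 4t = 0,
# x = (t^2 + 4t - 4)/(3t^2 - 4),  y = (t^2 - 4t - 4)/(3t^2 - 4).
import numpy as np

h  = [1, 0, -4, 0]     # h(t)  = t^3 - 4t
g0 = [3, 0, -4]        # g0(t) = h'(t) = 3t^2 - 4
gx = [1, 4, -4]        # numerator of x
gy = [1, -4, -4]       # numerator of y

for t0 in np.roots(h):                      # the single univariate polynomial to solve
    x0 = np.polyval(gx, t0) / np.polyval(g0, t0)
    y0 = np.polyval(gy, t0) / np.polyval(g0, t0)
    print(np.round(x0, 6), np.round(y0, 6)) # the three solutions (1, 1), (1, -1), (-1, 1)
```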
Moreover, the univariate polynomial of the RUR may be factorized, and this gives a RUR for every irreducible factor. This provides the ''prime decomposition'' of the given ideal (that is the
primary decomposition of the
radical
of the ideal). In practice, this provides an output with much smaller coefficients, especially in the case of systems with high multiplicities.
Unlike triangular decompositions and equiprojectable decompositions, the RUR is not defined in positive dimension.
Solving numerically
General solving algorithms
The general numerical algorithms which are designed for any
system of nonlinear equations work also for polynomial systems. However, methods specific to polynomial systems are generally preferred, as the general methods usually do not allow one to find ''all'' solutions. In particular, when a general method does not find any solution, this is usually not an indication that there is no solution.
Nevertheless, two methods deserve to be mentioned here.
*
Newton's method
may be used if the number of equations is equal to the number of variables. It does not allow one to find all the solutions nor to prove that there is no solution. But it is very fast when starting from a point which is close to a solution. Therefore, it is a basic tool for the homotopy continuation method described below (a minimal sketch of a Newton iteration is given after this list).
*
Optimization is rarely used for solving polynomial systems, but it succeeded, circa 1970, in showing that a system of 81 quadratic equations in 56 variables is not inconsistent. With the other known methods, this remains beyond the possibilities of modern technology. This method consists simply in minimizing the sum of the squares of the equations. If zero is found as a local minimum, then it is attained at a solution. This method works for overdetermined systems, but outputs no information if all the local minima that are found are positive.
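A minimal sketch of a Newton iteration for a square polynomial system, using the introductory example and an arbitrary starting point; NumPy is an illustrative choice.
```python
# Illustrative sketch: Newton's method for the square system
# x^2 + y^2 - 5 = 0, x*y - 2 = 0, starting near the solution (1, 2).
import numpy as np

def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 5, x*y - 2])

def J(v):                         # Jacobian matrix of F
    x, y = v
    return np.array([[2*x, 2*y], [y, x]])

v = np.array([1.2, 1.8])          # starting point chosen close to (1, 2)
for _ in range(10):
    v = v - np.linalg.solve(J(v), F(v))   # one Newton step

print(v)    # approximately (1, 2); other starting points may give other solutions or diverge
```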
Homotopy continuation method
This is a semi-numeric method which supposes that the number of equations is equal to the number of variables. This method is relatively old but it has been dramatically improved in the last decades.
This method divides into three steps. First an upper bound on the number of solutions is computed. This bound has to be as sharp as possible. Therefore, it is computed by, at least, four different methods and the best value, say N, is kept.
In the second step, a system g_1 = \cdots = g_n = 0 of polynomial equations is generated which has exactly N solutions that are easy to compute. This new system has the same number n of variables and the same number n of equations and the same general structure as the system to solve, f_1 = \cdots = f_n = 0.
Then a
homotopy between the two systems is considered. It consists, for example, of the straight line between the two systems, but other paths may be considered, in particular to avoid some singularities, in the system
: (1 - t)g_1 + tf_1 = 0, \quad \ldots, \quad (1 - t)g_n + tf_n = 0.
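A minimal path-tracking sketch for such a straight-line homotopy (illustrative only: the target is the introductory system, the start system x^2 - 1 = y^2 - 1 = 0 has the same degrees and N = 4 known solutions, and a random complex factor gamma is used, as is customary, to keep the paths away from singularities).
```python
# Illustrative sketch of homotopy continuation: track the 4 solutions of an
# easy start system g to the target system f along (1 - t)*gamma*g + t*f.
import numpy as np
from itertools import product

def f(v):                                   # target system: x^2 + y^2 - 5, x*y - 2
    x, y = v
    return np.array([x**2 + y**2 - 5, x*y - 2])

def g(v):                                   # start system with known solutions (+-1, +-1)
    x, y = v
    return np.array([x**2 - 1, y**2 - 1])

def Jf(v):
    x, y = v
    return np.array([[2*x, 2*y], [y, x]])

def Jg(v):
    x, y = v
    return np.array([[2*x, 0], [0, 2*y]], dtype=complex)

gamma = 0.6 + 0.8j                          # generic complex constant ("gamma trick")

def H(v, t):  return (1 - t) * gamma * g(v) + t * f(v)
def JH(v, t): return (1 - t) * gamma * Jg(v) + t * Jf(v)

solutions = []
for start in product([1.0, -1.0], repeat=2):          # the 4 solutions of g = 0
    v = np.array(start, dtype=complex)
    for t in np.linspace(0.0, 1.0, 201)[1:]:          # deform t from 0 to 1
        for _ in range(5):                            # Newton corrections at each step
            v = v - np.linalg.solve(JH(v, t), H(v, t))
    solutions.append(np.round(v, 6))

print(solutions)    # numeric approximations of the 4 target solutions (for a generic gamma)
```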
The homotopy continuation consists in deforming the parameter t from 0 to 1 and ''following'' the N solutions during this deformation. This gives the desired solutions for t = 1. ''Following'' means that, if