Polynomial evaluation

In mathematics and computer science, polynomial evaluation refers to computation of the value of a polynomial when its indeterminates are substituted for some values. In other words, evaluating the polynomial P(x_1, x_2) = 2x_1x_2 + x_1^3 + 4 at x_1=2, x_2=3 consists of computing P(2,3)= 2\cdot 2\cdot 3 + 2^3+4=24.

For evaluating the univariate polynomial a_nx^n+a_{n-1}x^{n-1}+\cdots +a_0, the most naive method would use n multiplications to compute a_n x^n, use n-1 multiplications to compute a_{n-1} x^{n-1} and so on for a total of \tfrac{n(n+1)}{2} multiplications and n additions. Using better methods, such as Horner's rule, this can be reduced to n multiplications and n additions. If some preprocessing is allowed, even more savings are possible.


Background

This problem arises frequently in practice. In computational geometry, polynomials are used to compute function approximations using Taylor polynomials. In cryptography and hash tables, polynomials are used to compute ''k''-independent hashing. In the former case, polynomials are evaluated using floating-point arithmetic, which is not exact. Thus different schemes for the evaluation will, in general, give slightly different answers. In the latter case, the polynomials are usually evaluated in a finite field, in which case the answers are always exact.


General methods


Horner's rule

Horner's method evaluates a polynomial using repeated bracketing:
:\begin{align} a_0 + {}&a_1x + a_2x^2 + a_3x^3 + \cdots + a_nx^n \\ &= a_0 + x \bigg(a_1 + x \Big(a_2 + x \big(a_3 + \cdots + x(a_{n-1} + x\,a_n) \cdots \big) \Big) \bigg). \end{align}
This method reduces the number of multiplications and additions to just n each. Horner's method is so common that a computer instruction "multiply–accumulate" has been added to many processors, allowing the addition and multiplication to be performed in one combined step.
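
A minimal Python sketch of Horner's rule follows; the function name and the convention that `coeffs` lists a_0 first are choices of this sketch, not a standard API:

```python
def horner(coeffs, x):
    """Evaluate a_0 + a_1*x + ... + a_n*x**n for coeffs = [a_0, ..., a_n],
    using exactly n multiplications and n additions."""
    result = coeffs[-1]
    for a in reversed(coeffs[:-1]):
        result = result * x + a
    return result

# Example: P(x) = 4 + 0*x + 3*x^2 at x = 2 gives 4 + 12 = 16.
print(horner([4, 0, 3], 2))  # 16
```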


Multivariate

If the polynomial is multivariate, Horner's rule can be applied recursively over some ordering of the variables. For example,
:P(x, y) = 4 + x + 2 x y + 2 x^2 y + x^2 y^2
can be written as
:\begin{align} P(x, y) &= 4 + x \big(1 + y(2) + x \,y(2 + y)\big) \quad\text{or}\\ P(x, y) &= 4 + x + y\big(x(2 + x(2)) + y(x^2)\big). \end{align}
An efficient version of this approach was described by Carnicer and Gasca.
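
The recursive idea can be sketched as follows; the nested-list representation of the coefficients is an ad hoc choice for illustration:

```python
def horner_multi(coeffs, point):
    """Apply Horner's rule recursively over an ordering of the variables.
    For zero variables, coeffs is a constant; otherwise it is a list whose
    i-th entry is the coefficient (a polynomial in the remaining variables)
    of the i-th power of the first variable in `point`."""
    if not point:
        return coeffs
    x, rest = point[0], point[1:]
    result = horner_multi(coeffs[-1], rest)
    for c in reversed(coeffs[:-1]):
        result = result * x + horner_multi(c, rest)
    return result

# P(x, y) = 4 + x + 2xy + 2x^2 y + x^2 y^2, grouped by powers of x:
# (4) + (1 + 2y) x + (2y + y^2) x^2
P = [[4], [1, 2], [0, 2, 1]]
print(horner_multi(P, (2, 3)))  # 4 + 2 + 12 + 24 + 36 = 78
```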


Estrin's scheme

While it is not possible to do less computation than Horner's rule (without preprocessing), on modern computers the order of evaluation can matter a lot for the computational efficiency. A method known as Estrin's scheme computes a (single variate) polynomial in a tree-like pattern:
:P(x) = (a_0 + a_1 x) + (a_2 + a_3 x) x^2 + \big((a_4 + a_5 x) + (a_6 + a_7 x) x^2\big)x^4.
Combined with exponentiation by squaring for the powers x^{2^i}, this allows parallelizing the computation.
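
A sequential Python rendering of Estrin's scheme is sketched below; a real parallel implementation would evaluate the independent pairs at each level concurrently, which plain Python does not express:

```python
def estrin(coeffs, x):
    """Evaluate a polynomial with Estrin's scheme: pair up the coefficients,
    then repeatedly combine adjacent pairs using successive squares of x.
    coeffs = [a_0, ..., a_n]; all pair evaluations at one level are
    independent of each other, which exposes the parallelism."""
    level = list(coeffs)
    power = x                      # x^(2^0), squared at every level
    while len(level) > 1:
        if len(level) % 2:         # pad to an even length
            level.append(0)
        level = [level[i] + level[i + 1] * power
                 for i in range(0, len(level), 2)]
        power = power * power
    return level[0]

# P(x) = 1 + 2x + 3x^2 + 4x^3 at x = 2: 1 + 4 + 12 + 32 = 49
print(estrin([1, 2, 3, 4], 2))  # 49
```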


Evaluation with preprocessing

Arbitrary polynomials can be evaluated with fewer operations than Horner's rule requires if we first "preprocess" the coefficients a_n, \dots, a_0. An example was first given by Motzkin, who noted that
:P(x)=x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0
can be written as
:y = (x+\beta_0)x+\beta_1,\quad P(x)=(y+x+\beta_2)y+\beta_3,
where the values \beta_0, \dots, \beta_3 are computed in advance, based on a_0, \dots, a_3. Motzkin's method uses just 3 multiplications compared to Horner's 4. The values for each \beta_i can be easily computed by expanding P(x) and equating the coefficients:
:\begin{align} \beta_0&=\tfrac12(a_3-1), &\quad z&=a_2-\beta_0(\beta_0+1), &\quad \beta_1&=a_1-\beta_0 z,\\ \beta_2&=z-2\beta_1, &\quad \beta_3&=a_0-\beta_1(\beta_1+\beta_2).\end{align}
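
A small Python sketch of Motzkin's method for a monic quartic; the helper names are invented for illustration:

```python
def motzkin_preprocess(a3, a2, a1, a0):
    """Precompute beta_0..beta_3 for x^4 + a3*x^3 + a2*x^2 + a1*x + a0,
    by the equations obtained from equating coefficients."""
    b0 = (a3 - 1) / 2
    z = a2 - b0 * (b0 + 1)
    b1 = a1 - b0 * z
    b2 = z - 2 * b1
    b3 = a0 - b1 * (b1 + b2)
    return b0, b1, b2, b3

def motzkin_eval(x, betas):
    """Evaluate the quartic with just 3 multiplications."""
    b0, b1, b2, b3 = betas
    y = (x + b0) * x + b1
    return (y + x + b2) * y + b3

# x^4 + 4x^3 + 12x^2 + 24x + 24 (the 24-scaled Taylor polynomial of e^x)
betas = motzkin_preprocess(4, 12, 24, 24)
print(betas)                   # (1.5, 11.625, -15.0, 63.234375)
print(motzkin_eval(2, betas))  # 16 + 32 + 48 + 48 + 24 = 168
```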


Example

To compute the Taylor expansion \exp(x) \approx 1+x+x^2/2+x^3/6+x^4/24, we can upscale by a factor 24, apply the above steps, and scale back down. That gives us the three-multiplication computation
:y = (x+1.5)x+11.625,\quad P(x)=(y+x-15)y/24+2.63477,
improving over the equivalent Horner form P(x) = 1 + x (1 + x (1/2 + x(1/6 + x/24))) by 1 multiplication. Some general methods include the Knuth–Eve algorithm and the Rabin–Winograd algorithm.
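
A quick numeric check of this example (the constant 2.634765625 is the exact value of \beta_3/24, rounded above to 2.63477):

```python
import math

for x in (0.1, 0.5, 1.0):
    y = (x + 1.5) * x + 11.625
    pre = (y + x - 15.0) * y / 24.0 + 2.634765625   # preprocessed form
    horner = 1.0 + x * (1.0 + x * (0.5 + x * (1.0 / 6.0 + x / 24.0)))
    print(x, pre, horner, math.exp(x))  # pre == horner up to rounding
```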


Multipoint evaluation

Evaluation of a degree-n polynomial P(x) at multiple points x_1, \dots, x_m can be done with mn multiplications by using Horner's method m times. Using the above preprocessing approach, this can be reduced by a factor of two; that is, to mn/2 multiplications. However, it is possible to do better and reduce the time requirement to just O\big((n + m) \log^2(n + m)\big). The idea is to define two polynomials that are zero in respectively the first and second half of the points: m_0(x)=(x-x_1)\cdots(x-x_{n/2}) and m_1(x)=(x-x_{n/2+1})\cdots(x-x_n). We then compute R_0 = P \bmod m_0 and R_1 = P \bmod m_1 using the polynomial remainder theorem, which can be done in O(n\log n) time using a fast Fourier transform. This means P(x) = Q_0(x)m_0(x) + R_0(x) and P(x) = Q_1(x)m_1(x) + R_1(x) by construction, where R_0 and R_1 are polynomials of degree at most n/2. Because of how m_0 and m_1 were defined, we have
:\begin{align} R_0(x_i) &= P(x_i) \quad\text{for } i \le n/2 \quad\text{and}\\ R_1(x_i) &= P(x_i) \quad\text{for } i > n/2. \end{align}
Thus to compute P on all n of the x_i, it suffices to compute the smaller polynomials R_0 and R_1 on each half of the points. This gives us a divide-and-conquer algorithm with T(n) = 2T(n/2) + n\log n, which implies T(n)=O(n(\log n)^2) by the master theorem. In the case where the points in which we wish to evaluate the polynomials have some structure, simpler methods exist. For example, Knuth section 4.6.4 gives a method for tabulating polynomial values of the type
:P(x_0 + h), P(x_0 + 2h), \dots.
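
The following Python sketch mirrors the divide-and-conquer structure; for clarity it uses naive O(n^2) polynomial division where the fast algorithm would use FFT-based remaindering, so it does not achieve the stated running time:

```python
def poly_from_roots(roots):
    """Coefficients of (x - r_1)...(x - r_k), lowest order first."""
    poly = [1]
    for r in roots:
        poly = [0] + poly                  # multiply by x
        for i in range(len(poly) - 1):
            poly[i] -= r * poly[i + 1]     # subtract r * (old poly)
    return poly

def poly_mod(p, m):
    """Remainder of p modulo m by naive long division (m is monic here).
    The fast algorithm would use FFT-based division instead."""
    r = list(p)
    dm = len(m) - 1
    while len(r) - 1 >= dm:
        c = r[-1] / m[-1]
        shift = len(r) - 1 - dm
        for i, mi in enumerate(m):
            r[shift + i] -= c * mi
        r.pop()                            # leading term is now zero
    return r

def multipoint_eval(p, points):
    """Evaluate p at all points via the remainder tree."""
    if len(points) == 1:
        x = points[0]
        return [sum(a * x**i for i, a in enumerate(p))]  # p has low degree
    half = len(points) // 2
    left, right = points[:half], points[half:]
    return (multipoint_eval(poly_mod(p, poly_from_roots(left)), left)
            + multipoint_eval(poly_mod(p, poly_from_roots(right)), right))

# P(x) = 1 + 2x + x^3 at the points 0, 1, 2, 3
print(multipoint_eval([1, 2, 0, 1], [0, 1, 2, 3]))  # [1.0, 4.0, 13.0, 34.0]
```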


Dynamic evaluation

In the case where x_1, \dots, x_m are not known in advance, Kedlaya and Umans gave a data structure for evaluating polynomials over a finite field \mathbb F_q in time (\log n)^{O(1)}(\log_2 q)^{1+o(1)} per evaluation after some initial preprocessing. This was shown by Larsen to be essentially optimal. The idea is to transform P(x) of degree n into a multivariate polynomial f(x_1, x_2, \dots, x_m), such that P(x) = f(x, x^d, x^{d^2}, \dots, x^{d^{m-1}}) and the individual degrees of f are at most d. Since we are working \bmod q, the largest value f can take (over \mathbb Z) is M = d^m (q-1)^{dm}. Using the Chinese remainder theorem, it suffices to evaluate f modulo different primes p_1, \dots, p_\ell with a product at least M. Each prime can be taken to be roughly \log M = O(dm\log q), and the number of primes needed, \ell, is roughly the same. Doing this process recursively, we can get the primes as small as \log\log q. That means we can compute and store f on all the possible values in T = (\log\log q)^m time and space. If we take d = \log q, we get m = \tfrac{\log n}{\log\log q}, so the time/space requirement is just n^{\frac{\log\log\log q}{\log\log q}} = n^{o(1)}. Kedlaya and Umans further show how to combine this preprocessing with fast (FFT) multipoint evaluation. This allows optimal algorithms for many important algebraic problems, such as polynomial modular composition.
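
The first step of this construction, rewriting P as a multivariate polynomial with small individual degrees, can be illustrated as follows; the names are ad hoc, and only the transform is shown, not the CRT and tabulation machinery:

```python
def base_d_digits(e, d, m):
    """The m base-d digits of e, least significant first."""
    digits = []
    for _ in range(m):
        digits.append(e % d)
        e //= d
    return tuple(digits)

def to_multivariate(coeffs, d):
    """Rewrite P(x) = sum_e a_e x^e as a dict mapping exponent tuples to
    a_e, so that P(x) = f(x, x^d, x^(d^2), ...): digit j of each exponent
    becomes the power of variable y_j = x^(d^j), so individual degrees
    stay below d."""
    n = len(coeffs) - 1
    m = 1
    while d ** m <= n:
        m += 1
    return {base_d_digits(e, d, m): a
            for e, a in enumerate(coeffs) if a != 0}, m

def eval_via_f(f, m, d, x, q):
    """Evaluate f at (x, x^d, ..., x^(d^(m-1))) modulo q."""
    ys = [pow(x, d ** j, q) for j in range(m)]
    total = 0
    for exps, a in f.items():
        term = a
        for y, e in zip(ys, exps):
            term = term * pow(y, e, q) % q
        total += term
    return total % q

# P(x) = 3x^7 + x^4 + 5 over GF(11), with d = 2 (so m = 3 variables)
coeffs = [5, 0, 0, 0, 1, 0, 0, 3]
f, m = to_multivariate(coeffs, 2)
x, q = 6, 11
direct = sum(a * pow(x, e, q) for e, a in enumerate(coeffs)) % q
print(direct, eval_via_f(f, m, 2, x, q))  # the two values agree
```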


Specific polynomials

While general polynomials require \Omega(n) operations to evaluate, some polynomials can be computed much faster. For example, the polynomial P(x)=x^2+2x+1 can be computed using just one multiplication and one addition, since P(x)=(x+1)^2.


Evaluation of powers

A particularly interesting type of polynomial is powers like x^n. Such polynomials can always be computed in O(\log n) operations. Suppose, for example, that we need to compute x^{16}; we could simply start with x and multiply by x to get x^2. We can then multiply that by itself to get x^4 and so on to get x^8 and x^{16} in just four multiplications. Other powers like x^5 can similarly be computed efficiently by first computing x^4 by 2 multiplications and then multiplying by x. The most efficient way to compute a given power x^n is provided by addition-chain exponentiation. However, this requires designing a specific algorithm for each exponent, and the computation needed for designing these algorithms is difficult (NP-complete), so exponentiation by squaring is generally preferred for practical computations.
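
A minimal sketch of exponentiation by squaring, which processes the binary digits of the exponent:

```python
def power(x, n):
    """Compute x**n with O(log n) multiplications: square x once per
    binary digit of n, multiplying it into the result on the 1-digits."""
    result = 1
    while n > 0:
        if n & 1:                # odd exponent: multiply the result in
            result = result * x
        x = x * x                # square
        n >>= 1
    return result

print(power(3, 16))  # 43046721, using far fewer than 15 multiplications
```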


Polynomial families

Often polynomials show up in a different form than the well-known a_n x^n + \dots + a_1 x + a_0. For polynomials in Chebyshev form we can use the Clenshaw algorithm. For polynomials in Bézier form we can use De Casteljau's algorithm, and for B-splines there is De Boor's algorithm.
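
As an illustration of one family, here is a minimal sketch of the Clenshaw algorithm for a series in Chebyshev polynomials T_k:

```python
def clenshaw_chebyshev(a, x):
    """Evaluate sum_k a[k] * T_k(x) using the backward recurrence
    b_k = a_k + 2x*b_{k+1} - b_{k+2}, then S(x) = a_0 + x*b_1 - b_2."""
    b1 = b2 = 0.0
    for ak in reversed(a[1:]):
        b1, b2 = ak + 2 * x * b1 - b2, b1
    return a[0] + x * b1 - b2

# T_0 = 1, T_1 = x, T_2 = 2x^2 - 1: check 3*T_0 + T_1 + 2*T_2 at x = 0.5
x = 0.5
print(clenshaw_chebyshev([3.0, 1.0, 2.0], x))  # 2.5
print(3 + x + 2 * (2 * x * x - 1))             # same value directly
```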


Hard polynomials

The fact that some polynomials can be computed significantly faster than "general polynomials" suggests the question: Can we give an example of a simple polynomial that cannot be computed in time much smaller than its degree? Volker Strassen has shown that the polynomial
:P(x)=\sum_{k=0}^n 2^{2^{k^3}} x^k
cannot be evaluated with less than \tfrac12 n - 2 multiplications and n - 4 additions. At least this bound holds if only operations of those types are allowed, giving rise to a so-called "polynomial chain". The polynomial given by Strassen has very large coefficients, but by probabilistic methods, one can show that there must exist even polynomials with coefficients of just 0's and 1's whose evaluation requires at least \Omega(n/\log n) multiplications. For other simple polynomials, the complexity is unknown. The polynomial (x+1)(x+2)\cdots(x+n) is conjectured to not be computable in time (\log n)^c for any c. This is supported by the fact that, if it can be computed fast, then integer factorization can be computed in polynomial time, breaking the RSA cryptosystem.


Matrix polynomials

Sometimes the computational cost of scalar multiplications (like ax) is less than the computational cost of "non-scalar" multiplications (like x^2). The typical example of this is matrices. If M is an m\times m matrix, a scalar multiplication aM takes about m^2 arithmetic operations, while computing M^2 takes about m^3 (or m^\alpha with \alpha\approx 2.373 using fast matrix multiplication). Matrix polynomials are used, for example, for computing matrix exponentials. Paterson and Stockmeyer showed how to compute a degree-n polynomial using only O(\sqrt n) non-scalar multiplications and O(n) scalar multiplications. Thus a matrix polynomial of degree n can be evaluated in O(m^\alpha\sqrt n + m^2 n) time, where O(m^\alpha) is the time needed for multiplying two matrices. If m=n this is O(n^{3.5}) with usual matrix multiplication, or O(n^3) with fast matrix multiplication (where the m^2 n term dominates), compared to O(n^4) or roughly O(n^{3.37}) respectively for the usual Horner method. This method works as follows: For a polynomial
:P(M)=a_n M^n + \dots + a_1 M + a_0 I,
let k be the least integer not smaller than \sqrt n. The powers M, M^2, \dots, M^k are computed with k matrix multiplications, and M^{2k}, M^{3k}, \dots, M^{k^2-k} are then computed by repeated multiplication by M^k. Now,
:\begin{align}P(M) = {}&\,(a_0 I + a_1 M + \dots + a_{k-1}M^{k-1}) \\ +{}&\,(a_k I + a_{k+1} M + \dots + a_{2k-1}M^{k-1})M^k \\ +{}&\,\cdots \\ +{}&\,(a_{k^2-k} I + a_{k^2-k+1} M + \dots + a_{k^2-1}M^{k-1})M^{k^2-k}, \end{align}
where a_i=0 for i > n. This requires just k more non-scalar multiplications. The direct application of this method uses 2\sqrt n non-scalar multiplications, but combining it with the evaluation-with-preprocessing idea above, Paterson and Stockmeyer show this can be reduced to \sqrt{2n}. Methods based on matrix polynomial multiplications and additions have been proposed that save further non-scalar matrix multiplications relative to the Paterson–Stockmeyer method.
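
A compact sketch of the Paterson–Stockmeyer scheme using NumPy; the function name and the a_0-first coefficient convention are choices of this sketch:

```python
import numpy as np

def paterson_stockmeyer(coeffs, M):
    """Evaluate P(M) = sum_i coeffs[i] * M^i for a square matrix M with
    about 2*sqrt(n) non-scalar (matrix-matrix) multiplications.
    coeffs = [a_0, ..., a_n]."""
    n = len(coeffs) - 1
    k = int(np.ceil(np.sqrt(n))) or 1
    m = M.shape[0]
    # Powers I, M, ..., M^k: k matrix multiplications.
    powers = [np.eye(m)]
    for _ in range(k):
        powers.append(powers[-1] @ M)
    Mk = powers[k]
    # Split the coefficients into "digit" polynomials of degree < k and
    # combine them by Horner's rule in M^k: about k more multiplications.
    blocks = [coeffs[i:i + k] for i in range(0, n + 1, k)]
    result = np.zeros_like(powers[0])
    for block in reversed(blocks):
        digit = sum(a * powers[j] for j, a in enumerate(block))
        result = result @ Mk + digit
    return result

M = np.array([[1.0, 1.0], [0.0, 1.0]])
# P(t) = t^3 + 2t + 3: compare against the direct computation.
print(paterson_stockmeyer([3.0, 2.0, 0.0, 1.0], M))
print(M @ M @ M + 2 * M + 3 * np.eye(2))
```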


See also

* Estrin's scheme, to facilitate parallelization on modern computer architectures
* Arithmetic circuit complexity, the theory studying the computational complexity of evaluating different polynomials

