Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize (minimize or maximize) a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming.
"Programming" in this context refers to a formal procedure for solving mathematical problems. This usage dates to the 1940s and is not specifically tied to the more recent notion of "computer programming." To avoid confusion, some practitioners prefer the term "optimization" — e.g., "quadratic optimization."
Problem formulation
The quadratic programming problem with ''n'' variables and ''m'' constraints can be formulated as follows.
Given:
* a real-valued, ''n''-dimensional vector ''c'',
* an ''n''×''n''-dimensional real symmetric matrix ''Q'',
* an ''m''×''n''-dimensional real matrix ''A'', and
* an ''m''-dimensional real vector ''b'',
the objective of quadratic programming is to find an ''n''-dimensional vector ''x'', that will
:\text{minimize}_{x} \quad \tfrac{1}{2} x^\mathsf{T} Q x + c^\mathsf{T} x
:\text{subject to} \quad Ax \preceq b,
where x^\mathsf{T} denotes the vector transpose of ''x'', and the notation Ax \preceq b means that every entry of the vector ''Ax'' is less than or equal to the corresponding entry of the vector ''b'' (component-wise inequality).
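As a concrete illustration (a minimal sketch, assuming NumPy and SciPy are available and using made-up data for ''Q'', ''c'', ''A'' and ''b''), a small instance of this formulation can be handed to SciPy's general-purpose constrained minimizer; a dedicated QP solver would normally be preferable:
<syntaxhighlight lang="python">
# Minimize 0.5*x^T Q x + c^T x subject to A x <= b on a tiny made-up instance.
import numpy as np
from scipy.optimize import LinearConstraint, minimize

Q = np.array([[4.0, 1.0],
              [1.0, 2.0]])            # symmetric (here also positive definite)
c = np.array([1.0, 1.0])
A = np.array([[ 1.0,  1.0],
              [-1.0,  0.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 0.0])         # encodes x1 + x2 <= 1 and x >= 0

objective = lambda x: 0.5 * x @ Q @ x + c @ x
gradient  = lambda x: Q @ x + c       # analytic gradient of the objective

res = minimize(objective, x0=np.zeros(2), jac=gradient, hess=lambda x: Q,
               constraints=[LinearConstraint(A, -np.inf, b)],
               method="trust-constr")
print(res.x, objective(res.x))
</syntaxhighlight>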
Constrained least squares
As a special case when ''Q'' is symmetric positive-definite, the cost function reduces to least squares:
:\text{minimize} \quad \tfrac{1}{2} \|Rx - d\|^2 \quad \text{subject to} \quad Ax \preceq b,
where Q = R^\mathsf{T} R follows from the Cholesky decomposition of ''Q'' and c = -R^\mathsf{T} d. Conversely, any such constrained least squares program can be equivalently framed as a quadratic programming problem, even for a generic non-square matrix ''R''.
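The following sketch (made-up ''R'' and ''d'', assuming NumPy) checks this reduction numerically: with Q = R^\mathsf{T} R and c = -R^\mathsf{T} d, the quadratic cost equals the least-squares cost up to the constant \tfrac{1}{2}\|d\|^2.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((5, 3))       # generic, non-square matrix
d = rng.standard_normal(5)

Q = R.T @ R                           # the Cholesky relation Q = R^T R
c = -R.T @ d

x = rng.standard_normal(3)            # an arbitrary test point
qp_cost = 0.5 * x @ Q @ x + c @ x
ls_cost = 0.5 * np.linalg.norm(R @ x - d) ** 2
assert np.isclose(qp_cost + 0.5 * d @ d, ls_cost)
</syntaxhighlight>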
Generalizations
When minimizing a function ''f'' in the neighborhood of some reference point x_0, ''Q'' is set to its Hessian matrix H(f(x_0)) and ''c'' is set to its gradient \nabla f(x_0). A related programming problem,
quadratically constrained quadratic programming, can be posed by adding quadratic constraints on the variables.
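As a rough sketch of this local quadratic model (the function ''f'', the reference point and the finite-difference helpers below are illustrative assumptions, not part of the article), ''Q'' and ''c'' can be taken as a numerical Hessian and gradient, and the resulting unconstrained model minimized directly:
<syntaxhighlight lang="python">
import numpy as np

def f(x):
    # an illustrative smooth convex function (not from the article)
    return np.exp(x[0] + x[1]) + x[0]**2 + 2.0 * x[1]**2

def grad(f, x, h=1e-5):
    # central-difference estimate of the gradient of f at x
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def hess(f, x, h=1e-4):
    # central-difference estimate of the Hessian of f at x, symmetrized
    n = len(x); H = np.zeros((n, n))
    for i in range(n):
        e = np.zeros_like(x); e[i] = h
        H[:, i] = (grad(f, x + e, h) - grad(f, x - e, h)) / (2.0 * h)
    return 0.5 * (H + H.T)

x0 = np.array([0.5, -0.5])            # reference point
Q, c = hess(f, x0), grad(f, x0)       # local quadratic model of f around x0
step = np.linalg.solve(Q, -c)         # minimizer of the model (Q is PD here)
print(x0 + step)
</syntaxhighlight>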
Solution methods
For general problems a variety of methods are commonly used, including
:* interior point,
:* active set,
:* augmented Lagrangian,
:* conjugate gradient,
:* gradient projection,
:* extensions of the simplex algorithm.
In the case in which ''Q'' is positive definite, the problem is a special case of the more general field of convex optimization.
Equality constraints
Quadratic programming is particularly simple when ''Q'' is positive definite and there are only equality constraints; specifically, the solution process is linear. By using Lagrange multipliers and seeking the extremum of the Lagrangian, it may be readily shown that the solution to the equality constrained problem
:\text{minimize} \quad \tfrac{1}{2} x^\mathsf{T} Q x + c^\mathsf{T} x
:\text{subject to} \quad Ex = d
is given by the linear system
:\begin{bmatrix} Q & E^\mathsf{T} \\ E & 0 \end{bmatrix} \begin{bmatrix} x \\ \lambda \end{bmatrix} = \begin{bmatrix} -c \\ d \end{bmatrix}
where \lambda is a set of Lagrange multipliers which come out of the solution alongside ''x''.
The easiest means of approaching this system is direct solution (for example, LU factorization), which for small problems is very practical. For large problems, the system poses some unusual difficulties, most notably that the problem is never positive definite (even if ''Q'' is), making it potentially very difficult to find a good numerical approach, and there are many approaches to choose from depending on the problem.
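For a small instance this direct approach is only a few lines (a sketch with made-up data, assuming NumPy; the block matrix is factored and solved in one call):
<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])            # positive definite
c = np.array([-1.0, 4.0])
E = np.array([[1.0, 1.0]])            # one equality constraint: x1 + x2 = d
d = np.array([1.0])

n, m = Q.shape[0], E.shape[0]
KKT = np.block([[Q, E.T],
                [E, np.zeros((m, m))]])
rhs = np.concatenate([-c, d])

sol = np.linalg.solve(KKT, rhs)       # direct (LU-based) solve of the system
x, lam = sol[:n], sol[n:]
print(x, lam)
</syntaxhighlight>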
If the constraints don't couple the variables too tightly, a relatively simple attack is to change the variables so that constraints are unconditionally satisfied. For example, suppose ''d'' = 0 (generalizing to nonzero ''d'' is straightforward). Looking at the constraint equations:
:Ex = 0,
introduce a new variable ''y'' defined by
:Zy = x,
where ''y'' has dimension of ''x'' minus the number of constraints. Then
:x = Zy
and if ''Z'' is chosen so that EZ = 0 the constraint equation will be always satisfied. Finding such ''Z'' entails finding the null space of ''E'', which is more or less simple depending on the structure of ''E''. Substituting into the quadratic form gives an unconstrained minimization problem:
:\tfrac{1}{2} x^\mathsf{T} Q x + c^\mathsf{T} x \quad \Longrightarrow \quad \tfrac{1}{2} y^\mathsf{T} Z^\mathsf{T} Q Z y + (Z^\mathsf{T} c)^\mathsf{T} y,
the solution of which is given by:
:Z^\mathsf{T} Q Z y = -Z^\mathsf{T} c.
Under certain conditions on ''Z'', the reduced matrix Z^\mathsf{T} Q Z will be positive definite. It is possible to write a variation on the conjugate gradient method which avoids the explicit calculation of ''Z''.
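A sketch of this variable elimination on a small made-up instance (assuming NumPy and SciPy; scipy.linalg.null_space supplies an orthonormal ''Z'' with EZ = 0):
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import null_space

Q = np.array([[3.0, 1.0],
              [1.0, 2.0]])
c = np.array([-1.0, 4.0])
E = np.array([[1.0, 1.0]])            # constraint E x = 0 (the d = 0 case)

Z = null_space(E)                     # columns span the null space of E
Qr = Z.T @ Q @ Z                      # reduced matrix Z^T Q Z
cr = Z.T @ c
y = np.linalg.solve(Qr, -cr)          # unconstrained minimization in y
x = Z @ y                             # feasible by construction: E x = 0
print(x, E @ x)
</syntaxhighlight>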
Lagrangian duality
The Lagrangian dual of a quadratic programming problem is also a quadratic programming problem. To see this let us focus on the case where c = 0 and ''Q'' is positive definite. We write the Lagrangian function as
:L(x, \lambda) = \tfrac{1}{2} x^\mathsf{T} Q x + \lambda^\mathsf{T}(Ax - b).
Defining the (Lagrangian) dual function g(\lambda) as g(\lambda) = \inf_{x} L(x, \lambda), we find an infimum of ''L'', using \nabla_{x} L(x, \lambda) = 0 and positive-definiteness of ''Q'':
:x^* = -Q^{-1} A^\mathsf{T} \lambda.
Hence the dual function is
:g(\lambda) = -\tfrac{1}{2} \lambda^\mathsf{T} A Q^{-1} A^\mathsf{T} \lambda - \lambda^\mathsf{T} b,
and so the Lagrangian dual of the quadratic programming problem is
:\text{maximize}_{\lambda \ge 0} \quad -\tfrac{1}{2} \lambda^\mathsf{T} A Q^{-1} A^\mathsf{T} \lambda - \lambda^\mathsf{T} b.
Besides the Lagrangian duality theory, there are other duality pairings (e.g. Wolfe, etc.).
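As a rough numerical illustration of this duality (a sketch under the stated c = 0 assumption, with made-up data and SciPy's bound-constrained L-BFGS-B used in place of a dedicated solver), the dual can be maximized over \lambda \ge 0 and the primal point recovered from x = -Q^{-1} A^\mathsf{T} \lambda:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import minimize

Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])            # positive definite, c = 0 as in the text
A = np.array([[-1.0, -1.0]])          # single constraint: -(x1 + x2) <= -1
b = np.array([-1.0])

Qinv = np.linalg.inv(Q)
M = A @ Qinv @ A.T                    # A Q^{-1} A^T

neg_dual = lambda lam: 0.5 * lam @ M @ lam + lam @ b   # negative of g(lambda)
neg_grad = lambda lam: M @ lam + b

res = minimize(neg_dual, x0=np.zeros(1), jac=neg_grad,
               bounds=[(0.0, None)], method="L-BFGS-B")
lam = res.x
x = -Qinv @ A.T @ lam                 # recover the primal minimizer
print("dual value  :", -res.fun)
print("primal value:", 0.5 * x @ Q @ x)   # equal to the dual value here
</syntaxhighlight>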
Run-time complexity
Convex quadratic programming
For positive definite ''Q'', when the problem is convex, the ellipsoid method solves the problem in (weakly) polynomial time.
Ye and Tse present a polynomial-time algorithm, which extends Karmarkar's algorithm from linear programming to convex quadratic programming. On a system with ''n'' variables and ''L'' input bits, their algorithm requires O(''L'' ''n'') iterations, each of which can be done using O(''L'' ''n''^3) arithmetic operations, for a total runtime complexity of O(''L''^2 ''n''^4).
Kapoor and Vaidya present another algorithm, which requires O(''L'' · log ''L'' · ''n''^3.67 · log ''n'') arithmetic operations.
Non-convex quadratic programming
If ''Q'' is indefinite (so the problem is non-convex), then the problem is NP-hard. A simple way to see this is to consider the non-convex quadratic constraint x_i^2 = x_i. This constraint is equivalent to requiring that x_i is in {0, 1}, that is, x_i is a binary integer variable. Therefore, such constraints can be used to model any integer program with binary variables, which is known to be NP-hard.
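Written out, the equivalence used here is
:x_i \in \{0, 1\} \;\Longleftrightarrow\; x_i^2 = x_i \;\Longleftrightarrow\; x_i(x_i - 1) = 0,
so the integrality requirement of any 0–1 integer program can be replaced by such non-convex quadratic constraints.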
Moreover, these non-convex problems might have several stationary points and local minima. In fact, even if ''Q'' has only one negative eigenvalue, the problem is (strongly) NP-hard.
Moreover, finding a KKT point of a non-convex quadratic program is CLS-hard.
Mixed-integer quadratic programming
There are some situations where one or more elements of the vector ''x'' will need to take on integer values. This leads to the formulation of a mixed-integer quadratic programming (MIQP) problem. Applications of MIQP include water resources and the construction of index funds.
Solvers and scripting (programming) languages
Extensions
Polynomial optimization is a more general framework, in which the constraints can be polynomial functions of any degree, not only 2.
See also
* Sequential quadratic programming
* Linear programming
* Critical line method
External links
* A page about quadratic programming
* NEOS Optimization Guide: Quadratic Programming
* Quadratic Programming
* Cubic programming and beyond, on Operations Research Stack Exchange