A sum-of-squares optimization program is an optimization problem with a linear cost function and a particular type of constraint on the decision variables. These constraints are of the form that when the decision variables are used as coefficients in certain polynomials, those polynomials should have the polynomial SOS property. When fixing the maximum degree of the polynomials involved, sum-of-squares optimization is also known as the Lasserre hierarchy of relaxations in semidefinite programming. Sum-of-squares optimization techniques have been applied across a variety of areas, including control theory (in particular, for searching for polynomial Lyapunov functions for dynamical systems described by polynomial vector fields), statistics, finance and machine learning.


Optimization problem

The problem can be expressed as

\max_{u \in \R^n} c^T u

subject to

a_{k,0}(x) + a_{k,1}(x)u_1 + \cdots + a_{k,n}(x)u_n \in \text{SOS} \quad (k=1,\ldots, N_s).

Here "SOS" represents the class of sum-of-squares (SOS) polynomials. The vector c \in \R^n and the polynomials \{a_{k,j}\} are given as part of the data for the optimization problem. The quantities u \in \R^n are the decision variables. SOS programs can be converted to semidefinite programs (SDPs) using the duality of the SOS polynomial program and a relaxation for constrained polynomial optimization using positive-semidefinite matrices; see the following section.
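As a concrete illustration (not part of the original article), consider the simplest SOS program of this form: a single decision variable u_1 = \gamma with cost c = 1 and the single constraint p(x) - \gamma \in \text{SOS}, which computes a lower bound on a polynomial p. The sketch below encodes this for the example polynomial p(x) = x^4 - 2x^2 + 1 via the Gram matrix form described under "Sum-of-squares background" below, assuming the Python package cvxpy is available; the polynomial and all names are chosen for illustration only.

    import cvxpy as cp

    # Lower-bound p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2 by the largest gamma
    # such that p(x) - gamma is a sum of squares.  With the monomial vector
    # z(x) = [1, x, x^2], this asks for a symmetric PSD Gram matrix Q with
    # p(x) - gamma = z(x)^T Q z(x); matching coefficients gives linear
    # constraints, so the whole problem is a small SDP.
    Q = cp.Variable((3, 3), symmetric=True)
    gamma = cp.Variable()

    constraints = [
        Q >> 0,                       # Q positive semidefinite
        Q[0, 0] == 1 - gamma,         # constant term
        2 * Q[0, 1] == 0,             # coefficient of x
        2 * Q[0, 2] + Q[1, 1] == -2,  # coefficient of x^2
        2 * Q[1, 2] == 0,             # coefficient of x^3
        Q[2, 2] == 1,                 # coefficient of x^4
    ]

    cp.Problem(cp.Maximize(gamma), constraints).solve()
    print(gamma.value)  # approximately 0, the true minimum of (x^2 - 1)^2

Software packages such as those listed under "Software tools" below automate exactly this kind of translation from an SOS constraint into a semidefinite program.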


Dual problem: constrained polynomial optimization

Suppose we have an n-variate polynomial p(x): \R^n \to \R, and suppose that we would like to minimize this polynomial over a subset A \subseteq \R^n. Suppose furthermore that the constraints on the subset A can be encoded using m polynomial equalities of degree at most 2d, each of the form a_i(x) = 0, where a_i: \R^n \to \R is a polynomial of degree at most 2d. A natural, though generally non-convex, program for this optimization problem is the following:

\min_{x \in \R^n} \langle C, x^{\le d} (x^{\le d})^\top \rangle

subject to:

\langle A_i, x^{\le d} (x^{\le d})^\top \rangle = 0 \qquad \forall\ i \in [m], \quad (1)

x_\varnothing = 1,

where x^{\le d} is the n^{O(d)}-dimensional vector with one entry for every monomial in x of degree at most d, so that for each multiset S \subset [n], |S| \le d, we have x_S = \prod_{i \in S} x_i; C is a matrix of coefficients of the polynomial p(x) that we want to minimize; and A_i is a matrix of coefficients of the polynomial a_i(x) encoding the i-th constraint on the subset A \subset \R^n. The additional, fixed constant index in our search space, x_\varnothing = 1, is added for the convenience of writing the polynomials p(x) and a_i(x) in a matrix representation.

This program is generally non-convex, because the constraints (1) are not convex. One possible convex relaxation for this minimization problem uses semidefinite programming to replace the rank-one matrix of variables x^{\le d} (x^{\le d})^\top with a positive-semidefinite matrix X: we index each monomial of size at most 2d by a multiset S of at most 2d indices, S \subset [n], |S| \le 2d. For each such monomial, we create a variable X_S in the program, and we arrange the variables X_S to form the matrix X \in \R^{[n]^{\le d} \times [n]^{\le d}}, where \R^{[n]^{\le d} \times [n]^{\le d}} is the set of real matrices whose rows and columns are identified with multisets of elements from [n] of size at most d. We then write the following semidefinite program in the variables X_S:

\min_{X} \langle C, X \rangle

subject to:

\langle A_i, X \rangle = 0 \qquad \forall\ i \in [m],

X_{\varnothing,\varnothing} = 1,

X_{U,V} = X_{S,T} \qquad \forall\ U,V,S,T \subseteq [n],\ |U|,|V|,|S|,|T| \le d,\ \text{and}\ U \cup V = S \cup T,

X \succeq 0,

where again C is the matrix of coefficients of the polynomial p(x) that we want to minimize, and A_i is the matrix of coefficients of the polynomial a_i(x) encoding the i-th constraint on the subset A \subset \R^n. The third constraint ensures that the value of a monomial that appears several times within the matrix is equal throughout the matrix, and is added to make X respect the symmetries present in the quadratic form x^{\le d}(x^{\le d})^\top.
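As a small illustrative instance (not part of the original text), take n = 1 and d = 1, so the relevant multisets are \varnothing and \{1\}. The rank-one matrix being relaxed is

x^{\le 1} (x^{\le 1})^\top = \begin{pmatrix} 1 \\ x_1 \end{pmatrix} \begin{pmatrix} 1 & x_1 \end{pmatrix} = \begin{pmatrix} 1 & x_1 \\ x_1 & x_1^2 \end{pmatrix},

and the relaxation replaces it by

X = \begin{pmatrix} X_{\varnothing,\varnothing} & X_{\varnothing,\{1\}} \\ X_{\{1\},\varnothing} & X_{\{1\},\{1\}} \end{pmatrix} \succeq 0, \qquad X_{\varnothing,\varnothing} = 1.

At this level the symmetry constraints only force X_{\varnothing,\{1\}} = X_{\{1\},\varnothing}; at higher levels they also identify entries such as X_{\{1\},\{1\}} and X_{\varnothing,\{1,1\}}, which both stand for the monomial x_1^2.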


Duality

One can take the dual of the above semidefinite program and obtain the following program:

\max_{y} y_0,

subject to:

C - y_0 e_{\varnothing,\varnothing} - \sum_{i \in [m]} y_i A_i - \sum_{S \cup T = U \cup V} y_{(S,T),(U,V)} (e_{S,T} - e_{U,V}) \succeq 0.

We have a variable y_0 corresponding to the constraint \langle e_{\varnothing,\varnothing}, X \rangle = 1 (where e_{\varnothing,\varnothing} is the matrix with all entries zero save for the entry indexed by (\varnothing,\varnothing)), a real variable y_i for each polynomial constraint \langle X, A_i \rangle = 0 with i \in [m], and for each group of multisets S,T,U,V \subset [n], |S|,|T|,|U|,|V| \le d, S \cup T = U \cup V, we have a dual variable y_{(S,T),(U,V)} for the symmetry constraint \langle X, e_{S,T} - e_{U,V} \rangle = 0. The positive-semidefiniteness constraint ensures that p(x) - y_0 is a sum of squares of polynomials over A \subset \R^n: by a characterization of positive-semidefinite matrices, for any positive-semidefinite matrix Q \in \R^{m \times m}, we can write Q = \sum_{i \in [m]} f_i f_i^\top for vectors f_i \in \R^m. Thus for any x \in A \subset \R^n,

\begin{align}
p(x) - y_0 &= p(x) - y_0 - \sum_{i \in [m]} y_i a_i(x) && (\text{since } x \in A)\\
&= (x^{\le d})^\top \left( C - y_0 e_{\varnothing,\varnothing} - \sum_{i \in [m]} y_i A_i - \sum_{S \cup T = U \cup V} y_{(S,T),(U,V)} (e_{S,T} - e_{U,V}) \right) x^{\le d} && (\text{by symmetry})\\
&= (x^{\le d})^\top \left( \sum_{i} f_i f_i^\top \right) x^{\le d} \\
&= \sum_{i} \langle x^{\le d}, f_i \rangle^2 \\
&= \sum_{i} f_i(x)^2,
\end{align}

where we have identified the vectors f_i with the coefficients of a polynomial of degree at most d. This gives a sum-of-squares proof that p(x) \ge y_0 over A \subset \R^n. The above can also be extended to regions A \subset \R^n defined by polynomial inequalities.
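As a tiny illustration (not in the original text): to minimize p(x) = x^2 + 2x + 1 over A = \R (so m = 0 and there are no constraint terms), a feasible dual solution has y_0 = 0, with the single certificate polynomial f_1(x) = x + 1, since p(x) - 0 = (x+1)^2. This is a sum-of-squares proof that p(x) \ge 0 everywhere, and 0 is indeed the minimum, attained at x = -1.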


Sum-of-squares hierarchy

The sum-of-squares hierarchy (SOS hierarchy), also known as the Lasserre hierarchy, is a hierarchy of convex relaxations of increasing power and increasing computational cost. For each natural number d \in \mathbb{N}, the corresponding convex relaxation is known as the ''d-th level'' or ''d-th round'' of the SOS hierarchy. The 1st round, when d=1, corresponds to a basic semidefinite program, or to sum-of-squares optimization over polynomials of degree at most 2. To augment the basic convex program at the 1st level of the hierarchy to the d-th level, additional variables and constraints are added to the program to have the program consider polynomials of degree at most 2d.

The SOS hierarchy derives its name from the fact that the value of the objective function at the d-th level is bounded with a sum-of-squares proof using polynomials of degree at most 2d via the dual (see "Duality" above). Consequently, any sum-of-squares proof that uses polynomials of degree at most 2d can be used to bound the objective value, allowing one to prove guarantees on the tightness of the relaxation.

In conjunction with a theorem of Berg, this further implies that given sufficiently many rounds, the relaxation becomes arbitrarily tight on any fixed interval. Berg's result states that every non-negative real polynomial within a bounded interval can be approximated within accuracy \varepsilon on that interval with a sum of squares of real polynomials of sufficiently high degree. Thus, if OBJ(x) is the polynomial objective value as a function of the point x, and if the inequality c + \varepsilon - OBJ(x) \ge 0 holds for all x in the region of interest, then there must be a sum-of-squares proof of this fact. Choosing c to be the optimum of the objective function over the feasible region, we have the result.
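In symbols (an illustrative restatement of Berg's claim above, writing the bounded interval as [a,b]): for every polynomial f with f(x) \ge 0 for all x \in [a,b] and every \varepsilon > 0, there exist polynomials f_1, \ldots, f_m such that \sup_{x \in [a,b]} \left| f(x) - \sum_{i=1}^m f_i(x)^2 \right| \le \varepsilon, where the degrees of the f_i may grow as \varepsilon shrinks.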


Computational cost

When optimizing over a function in n variables, the d-th level of the hierarchy can be written as a semidefinite program over n^{O(d)} variables, and can be solved in time n^{O(d)} using the ellipsoid method.
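To make the n^{O(d)} scaling concrete: the number of monomials of degree at most d in n variables is \binom{n+d}{d}, so the level-d moment matrix has \binom{n+d}{d} rows and columns. A quick illustrative calculation (the helper name is ours, not from the original article):

    from math import comb

    def moment_matrix_side(n: int, d: int) -> int:
        """Number of monomials of degree <= d in n variables, i.e. the
        side length of the level-d moment matrix."""
        return comb(n + d, d)

    for d in (1, 2, 3):
        side = moment_matrix_side(10, d)  # n = 10 variables
        print(f"level {d}: {side} x {side} moment matrix")
    # level 1: 11 x 11, level 2: 66 x 66, level 3: 286 x 286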


Sum-of-squares background

A polynomial p is a ''sum of squares'' (''SOS'') if there exist polynomials \{f_i\}_{i=1}^m such that p = \sum_{i=1}^m f_i^2. For example, p = x^2 - 4xy + 7y^2 is a sum of squares since p = f_1^2 + f_2^2, where f_1 = x - 2y and f_2 = \sqrt{3}\,y. Note that if p is a sum of squares then p(x) \ge 0 for all x \in \R^n. Detailed descriptions of polynomial SOS are available in Lasserre (2001).
Quadratic forms can be expressed as p(x) = x^\mathsf{T} Q x, where Q is a symmetric matrix. Similarly, polynomials of degree at most 2d can be expressed as p(x) = z(x)^\mathsf{T} Q z(x), where the vector z(x) contains all monomials of degree at most d. This is known as the Gram matrix form. An important fact is that p is SOS if and only if there exists a symmetric and positive-semidefinite matrix Q such that p(x) = z(x)^\mathsf{T} Q z(x). This provides a connection between SOS polynomials and positive-semidefinite matrices.
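This fact turns the SOS check into a semidefinite feasibility problem. A minimal sketch for the example above (assuming the Python package cvxpy; names chosen for illustration):

    import cvxpy as cp

    # Is p(x, y) = x^2 - 4xy + 7y^2 a sum of squares?  With z = [x, y],
    # ask for a symmetric PSD matrix Q with p = z^T Q z; matching
    # coefficients pins down every entry of Q.
    Q = cp.Variable((2, 2), symmetric=True)
    constraints = [
        Q >> 0,             # positive semidefinite
        Q[0, 0] == 1,       # coefficient of x^2
        2 * Q[0, 1] == -4,  # coefficient of xy
        Q[1, 1] == 7,       # coefficient of y^2
    ]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    print(prob.status)  # 'optimal': a PSD Gram matrix exists, so p is SOS

Here the constraints force Q = [[1, -2], [-2, 7]], which is positive semidefinite; factoring it as Q = L L^\mathsf{T} with L^\mathsf{T} z = (x - 2y,\ \sqrt{3}\,y) recovers exactly the squares f_1 and f_2 given above.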


Software tools


* SOSTOOLS, licensed under the GNU GPL. The reference guide is available at arXiv:1310.4716 [math.OC], and a presentation about its internals is available online.
* CDCS-sos, a package from CDCS, an augmented Lagrangian method solver, to deal with large-scale SOS programs.
* The SumOfSquares extension of JuMP for Julia.
* TSSOS for Julia, a polynomial optimization tool based on the sparsity-adapted moment-SOS hierarchies.
* For the dual problem of constrained polynomial optimization: GloptiPoly for MATLAB/Octave, Ncpol2sdpa for Python and MomentOpt for Julia.


References

Lasserre, J. (2001). "Global optimization with polynomials and the problem of moments". ''SIAM Journal on Optimization'', 11(3), 796–817.