
In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.


Formulation

Given a real matrix ''M'' and vector ''q'', the linear complementarity problem LCP(''q'', ''M'') seeks vectors ''z'' and ''w'' which satisfy the following constraints:
* w, z \geqslant 0 (that is, each component of these two vectors is non-negative)
* z^T w = 0 or equivalently \sum\nolimits_i w_i z_i = 0. This is the complementarity condition, since it implies that, for all i, at most one of w_i and z_i can be positive.
* w = Mz + q

A sufficient condition for existence and uniqueness of a solution to this problem is that ''M'' be symmetric positive-definite. If ''M'' is such that LCP(''q'', ''M'') has a solution for every ''q'', then ''M'' is a Q-matrix. If ''M'' is such that LCP(''q'', ''M'') has a unique solution for every ''q'', then ''M'' is a P-matrix. Both of these characterizations are sufficient and necessary.

The vector ''w'' is a slack variable, and so is generally discarded after ''z'' is found. As such, the problem can also be formulated as:
* Mz + q \geqslant 0
* z \geqslant 0
* z^T(Mz + q) = 0 (the complementarity condition)
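To make the definition concrete, here is a minimal Python sketch (illustrative only, not from the original article) that solves a tiny LCP by enumerating complementary supports: for each index set ''S'' it fixes z_i = 0 outside ''S'' and w_i = 0 inside ''S'', solves the resulting linear system, and keeps the first non-negative solution found. The function name and example data are assumptions made for this illustration; practical algorithms are discussed in the next section.

 # Minimal brute-force sketch (not an efficient algorithm such as Lemke's method):
 # enumerate the 2^n complementary index sets and check, for each, whether the
 # implied linear system yields a feasible (z, w) pair.  Only sensible for tiny n.
 import itertools
 import numpy as np
 
 def solve_lcp_bruteforce(M, q, tol=1e-8):
     """Return (z, w) with w = M z + q, z >= 0, w >= 0, z.w = 0, or None."""
     M = np.asarray(M, dtype=float)
     q = np.asarray(q, dtype=float)
     n = q.size
     for support in itertools.chain.from_iterable(
             itertools.combinations(range(n), k) for k in range(n + 1)):
         S = list(support)
         z = np.zeros(n)
         if S:
             try:
                 # On S we require w_S = 0, i.e. M_SS z_S = -q_S.
                 z[S] = np.linalg.solve(M[np.ix_(S, S)], -q[S])
             except np.linalg.LinAlgError:
                 continue  # singular principal submatrix: skip this support
         w = M @ z + q
         if (z >= -tol).all() and (w >= -tol).all() and abs(z @ w) <= tol:
             return z, w
     return None  # no solution found among complementary supports
 
 # Example: M is symmetric positive-definite, so a unique solution exists.
 M = np.array([[2.0, 1.0], [1.0, 2.0]])
 q = np.array([-5.0, -6.0])
 z, w = solve_lcp_bruteforce(M, q)
 print("z =", z, " w =", w, " z.w =", z @ w)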


Convex quadratic-minimization: Minimum conditions

Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function

: f(z) = z^T(Mz + q)

subject to the constraints

: Mz + q \geqslant 0
: z \geqslant 0

These constraints ensure that ''f'' is always non-negative. The minimum of ''f'' is 0 at ''z'' if and only if ''z'' solves the linear complementarity problem.

If ''M'' is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice.

Also, a quadratic-programming problem stated as minimize f(x) = c^T x + \tfrac{1}{2} x^T Q x subject to Ax \geqslant b as well as x \geqslant 0 with ''Q'' symmetric is the same as solving the LCP with

: q = \begin{bmatrix} c \\ -b \end{bmatrix}, \qquad M = \begin{bmatrix} Q & -A^T \\ A & 0 \end{bmatrix}

This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as:

: \begin{align} v &= Q x - A^T \lambda + c \\ s &= A x - b \\ x, \lambda, v, s &\geqslant 0 \\ x^T v + \lambda^T s &= 0 \end{align}

with ''v'' the Lagrange multipliers on the non-negativity constraints, ''λ'' the multipliers on the inequality constraints, and ''s'' the slack variables for the inequality constraints. The fourth condition derives from the complementarity of each group of variables (''x'', ''s'') with its set of KKT vectors (optimal Lagrange multipliers) (''v'', ''λ''). In that case,

: z = \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad w = \begin{bmatrix} v \\ s \end{bmatrix}

If the non-negativity constraint on the ''x'' is relaxed, the dimensionality of the LCP problem can be reduced to the number of the inequalities, as long as ''Q'' is non-singular (which is guaranteed if it is positive definite). The multipliers ''v'' are no longer present, and the first KKT condition can be rewritten as:

: Q x = A^T \lambda - c

or:

: x = Q^{-1}(A^T \lambda - c)

Pre-multiplying the two sides by ''A'' and subtracting ''b'' we obtain:

: A x - b = A Q^{-1}(A^T \lambda - c) - b

The left side, due to the second KKT condition, is ''s''. Substituting and reordering:

: s = (A Q^{-1} A^T)\lambda + (-A Q^{-1} c - b)

Calling now

: \begin{align} M &:= A Q^{-1} A^T \\ q &:= -A Q^{-1} c - b \end{align}

we have an LCP, due to the relation of complementarity between the slack variables ''s'' and their Lagrange multipliers ''λ''. Once we solve it, we may obtain the value of ''x'' from ''λ'' through the first KKT condition.

Finally, it is also possible to handle additional equality constraints:

: A_{eq} x = b_{eq}

This introduces a vector of Lagrange multipliers ''μ'', with the same dimension as b_{eq}. It is easy to verify that the ''M'' and ''q'' for the LCP system s = M\lambda + q are now expressed as:

: \begin{align} M &:= \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{eq}^T \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T \\ 0 \end{bmatrix} \\ q &:= -\begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{eq}^T \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} c \\ b_{eq} \end{bmatrix} - b \end{align}

From ''λ'' we can now recover the values of both ''x'' and the Lagrange multiplier of equalities ''μ'':

: \begin{bmatrix} x \\ \mu \end{bmatrix} = \begin{bmatrix} Q & A_{eq}^T \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T \lambda - c \\ -b_{eq} \end{bmatrix}

In fact, most QP solvers work on the LCP formulation, including the interior point method, principal / complementarity pivoting, and active set methods. LCP problems can also be solved by the criss-cross algorithm; conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix. A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive. Such LCPs can be solved when they are formulated abstractly using oriented-matroid theory.
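The reduced QP-to-LCP conversion described above can be sketched in a few lines of Python. The example assumes Q is positive definite (so Q^{-1} exists), that the non-negativity constraint on x has been relaxed, and that the illustrative solve_lcp_bruteforce helper from the Formulation section is defined in the same file; the function names and example data are assumptions, not part of the original text.

 # Sketch of the reduced conversion: M := A Q^{-1} A^T, q := -A Q^{-1} c - b,
 # then recover x from the multipliers lambda via x = Q^{-1}(A^T lambda - c).
 # Assumes solve_lcp_bruteforce (from the Formulation sketch) is available.
 import numpy as np
 
 def qp_to_lcp_reduced(Q, c, A, b):
     """LCP data in the multipliers lambda for
     minimize c^T x + 0.5 x^T Q x  subject to  A x >= b  (x free)."""
     Q_inv = np.linalg.inv(Q)      # acceptable for a small, well-conditioned Q
     M = A @ Q_inv @ A.T           # M := A Q^{-1} A^T
     q = -A @ Q_inv @ c - b        # q := -A Q^{-1} c - b
     return M, q
 
 def recover_x(Q, c, A, lam):
     """First KKT condition: x = Q^{-1}(A^T lambda - c)."""
     return np.linalg.solve(Q, A.T @ lam - c)
 
 # Tiny example: minimize 0.5*(x1^2 + x2^2) - x1 - x2  subject to  x1 + x2 >= 3.
 Q = np.eye(2)
 c = np.array([-1.0, -1.0])
 A = np.array([[1.0, 1.0]])
 b = np.array([3.0])
 
 M, q = qp_to_lcp_reduced(Q, c, A, b)
 lam, s = solve_lcp_bruteforce(M, q)   # complementarity between s and lambda
 x = recover_x(Q, c, A, lam)
 print("lambda =", lam, " x =", x)     # constraint is active: x = [1.5, 1.5], lambda = [0.5]

Here the constraint is active at the optimum, so the multiplier λ is positive and the slack s is zero, which is exactly the complementarity relation the reduced LCP encodes.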


See also

* Complementarity theory
* Physics engine: impulse/constraint-type physics engines for games use this approach.
* Contact dynamics: contact dynamics with the nonsmooth approach.
* Bimatrix games can be reduced to an LCP.






External links


* LCPSolve: a simple procedure in GAUSS to solve a linear complementarity problem
* Siconos/Numerics: open-source GPL implementation in C of Lemke's algorithm and other methods to solve LCPs and MLCPs