In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968.


Formulation

Given a real matrix ''M'' and vector ''q'', the linear complementarity problem LCP(''q'', ''M'') seeks vectors ''z'' and ''w'' which satisfy the following constraints:
* w, z \geqslant 0 (that is, each component of these two vectors is non-negative)
* z^T w = 0, or equivalently \sum\nolimits_i w_i z_i = 0. This is the complementarity condition, since it implies that, for all i, at most one of w_i and z_i can be positive.
* w = Mz + q
A sufficient condition for existence and uniqueness of a solution to this problem is that ''M'' be symmetric positive-definite. If ''M'' is such that LCP(''q'', ''M'') has a solution for every ''q'', then ''M'' is a Q-matrix. If ''M'' is such that LCP(''q'', ''M'') has a unique solution for every ''q'', then ''M'' is a P-matrix. Both of these characterizations are sufficient and necessary.

The vector ''w'' is a slack variable, and so is generally discarded after ''z'' is found. As such, the problem can also be formulated as:
* Mz + q \geqslant 0
* z \geqslant 0
* z^T(Mz + q) = 0 (the complementarity condition)
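As a concrete illustration of these conditions, the following Python sketch checks whether a candidate vector ''z'' solves LCP(''q'', ''M''); the function name and tolerance are illustrative choices made here, not part of any standard library.

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check the LCP(q, M) conditions: w = Mz + q >= 0, z >= 0, z^T w = 0."""
    M = np.asarray(M, dtype=float)
    q = np.asarray(q, dtype=float)
    z = np.asarray(z, dtype=float)
    w = M @ z + q                       # slack vector w
    nonnegative = np.all(w >= -tol) and np.all(z >= -tol)
    complementary = abs(z @ w) <= tol   # z^T w = 0 up to tolerance
    return bool(nonnegative and complementary)

# Example with a symmetric positive-definite M (so the solution is unique):
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-4.0, -5.0])
z = np.array([1.0, 2.0])                # here Mz + q = [0, 0]
print(is_lcp_solution(M, q, z))         # True
```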


Convex quadratic-minimization: Minimum conditions

Finding a solution to the linear complementarity problem is associated with minimizing the quadratic function
: f(z) = z^T(Mz + q)
subject to the constraints
: Mz + q \geqslant 0
: z \geqslant 0
These constraints ensure that ''f'' is always non-negative. The minimum of ''f'' is 0 at ''z'' if and only if ''z'' solves the linear complementarity problem.

If ''M'' is positive definite, any algorithm for solving (strictly) convex QPs can solve the LCP. Specially designed basis-exchange pivoting algorithms, such as Lemke's algorithm and a variant of the simplex algorithm of Dantzig, have been used for decades. Besides having polynomial time complexity, interior-point methods are also effective in practice.

Also, a quadratic-programming problem stated as minimize f(x) = c^T x + \tfrac{1}{2} x^T Q x subject to Ax \geqslant b as well as x \geqslant 0, with ''Q'' symmetric, is the same as solving the LCP with
:q = \begin{bmatrix} c \\ -b \end{bmatrix}, \qquad M = \begin{bmatrix} Q & -A^T \\ A & 0 \end{bmatrix}
This is because the Karush–Kuhn–Tucker conditions of the QP problem can be written as:
:\begin{cases} v = Q x - A^T \lambda + c \\ s = A x - b \\ x, \lambda, v, s \geqslant 0 \\ x^T v + \lambda^T s = 0 \end{cases}
with ''v'' the Lagrange multipliers on the non-negativity constraints, ''λ'' the multipliers on the inequality constraints, and ''s'' the slack variables for the inequality constraints. The fourth condition derives from the complementarity of each group of variables (''x'', ''s'') with its set of KKT vectors (optimal Lagrange multipliers), namely (''v'', ''λ''). In that case,
: z = \begin{bmatrix} x \\ \lambda \end{bmatrix}, \qquad w = \begin{bmatrix} v \\ s \end{bmatrix}
If the non-negativity constraint on ''x'' is relaxed, the dimensionality of the LCP problem can be reduced to the number of the inequalities, as long as ''Q'' is non-singular (which is guaranteed if it is positive definite). The multipliers ''v'' are no longer present, and the first KKT condition can be rewritten as:
: Q x = A^T \lambda - c
or:
: x = Q^{-1}(A^T \lambda - c)
Pre-multiplying the two sides by ''A'' and subtracting ''b'', we obtain:
: A x - b = A Q^{-1}(A^T \lambda - c) - b \,
The left side, due to the second KKT condition, is ''s''. Substituting and reordering:
: s = (A Q^{-1} A^T) \lambda + (- A Q^{-1} c - b) \,
Calling now
:\begin{align} M &:= (A Q^{-1} A^T) \\ q &:= (- A Q^{-1} c - b) \end{align}
we have an LCP, due to the relation of complementarity between the slack variables ''s'' and their Lagrange multipliers ''λ''. Once we solve it, we may obtain the value of ''x'' from ''λ'' through the first KKT condition.

Finally, it is also possible to handle additional equality constraints:
: A_{eq} x = b_{eq}
This introduces a vector of Lagrange multipliers ''μ'', with the same dimension as b_{eq}. It is easy to verify that the ''M'' and ''q'' for the LCP system s = M \lambda + q are now expressed as:
:\begin{align} M &:= \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{eq}^T \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T \\ 0 \end{bmatrix} \\ q &:= - \begin{bmatrix} A & 0 \end{bmatrix} \begin{bmatrix} Q & A_{eq}^T \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} c \\ b_{eq} \end{bmatrix} - b \end{align}
From ''λ'' we can now recover the values of both ''x'' and the Lagrange multiplier of the equalities ''μ'':
:\begin{bmatrix} x \\ \mu \end{bmatrix} = \begin{bmatrix} Q & A_{eq}^T \\ -A_{eq} & 0 \end{bmatrix}^{-1} \begin{bmatrix} A^T \lambda - c \\ -b_{eq} \end{bmatrix}
In fact, most QP solvers work on the LCP formulation, including the interior-point method, principal / complementarity pivoting, and active-set methods.

LCP problems can also be solved by the criss-cross algorithm; conversely, for linear complementarity problems, the criss-cross algorithm terminates finitely only if the matrix is a sufficient matrix. A sufficient matrix is a generalization both of a positive-definite matrix and of a P-matrix, whose principal minors are each positive.
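The quadratic-minimization reformulation at the start of this section can be tried directly with a general-purpose solver. The sketch below assumes SciPy is available and uses an illustrative function name; it minimizes f(z) = z^T(Mz + q) over the feasible set, and a final objective value of (numerically) zero indicates an LCP solution. It is a sketch under those assumptions, not a substitute for a dedicated pivoting or interior-point method.

```python
import numpy as np
from scipy.optimize import minimize

def solve_lcp_via_minimization(M, q):
    """Minimize f(z) = z^T (Mz + q) subject to Mz + q >= 0 and z >= 0.

    If the minimum value is 0, the minimizer z solves LCP(q, M).
    """
    M = np.asarray(M, dtype=float)
    q = np.asarray(q, dtype=float)
    n = q.size
    objective = lambda z: z @ (M @ z + q)
    result = minimize(
        objective,
        x0=np.zeros(n),
        method="SLSQP",
        bounds=[(0.0, None)] * n,                                   # z >= 0
        constraints=[{"type": "ineq", "fun": lambda z: M @ z + q}],  # Mz + q >= 0
    )
    return result.x, result.fun

M = np.array([[2.0, 1.0], [1.0, 2.0]])    # symmetric positive definite
q = np.array([-4.0, -5.0])
z, fmin = solve_lcp_via_minimization(M, q)
print(np.round(z, 6), round(fmin, 9))     # approximately [1, 2] and 0
```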
LCPs with sufficient matrices can also be solved when they are formulated abstractly using oriented-matroid theory.
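As a sketch of the QP-to-LCP construction described above (the function name and the small example are choices made here for illustration), the block data q = [c; -b] and M = [[Q, -A^T], [A, 0]] can be assembled with NumPy and then handed to any LCP solver; the first ''n'' components of the resulting ''z'' give ''x'' and the remaining components give ''λ''.

```python
import numpy as np

def qp_to_lcp(Q, c, A, b):
    """Cast  min c^T x + 0.5 x^T Q x  s.t.  Ax >= b, x >= 0  as LCP(q_lcp, M_lcp).

    Following the KKT derivation above, z = [x; lambda] and w = [v; s].
    """
    Q = np.asarray(Q, dtype=float)
    c = np.asarray(c, dtype=float)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m = A.shape[0]
    M_lcp = np.block([[Q, -A.T],
                      [A, np.zeros((m, m))]])
    q_lcp = np.concatenate([c, -b])
    return M_lcp, q_lcp

# Tiny example: minimize -x1 - x2 + 0.5*(x1^2 + x2^2)  s.t.  x1 + x2 >= 1, x >= 0
Q = np.eye(2)
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
M_lcp, q_lcp = qp_to_lcp(Q, c, A, b)
print(M_lcp)    # 3x3 block matrix [[Q, -A^T], [A, 0]]
print(q_lcp)    # [c; -b] = [-1. -1. -1.]
```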


See also

* Complementarity theory
* Physics engine – impulse/constraint type physics engines for games use this approach.
* Contact dynamics – contact dynamics with the nonsmooth approach.
* Bimatrix games can be reduced to an LCP.



External links


* LCPSolve – a simple procedure in GAUSS to solve a linear complementarity problem
* Siconos/Numerics – open-source GPL implementation in C of Lemke's algorithm and other methods to solve LCPs and MLCPs