
In constrained optimization, a field of mathematics, a barrier function is a continuous function whose value increases to infinity as its argument approaches the boundary of the feasible region of an optimization problem. Such functions are used to replace inequality constraints by a penalizing term in the objective function that is easier to handle. The two most common types of barrier functions are inverse barrier functions and logarithmic barrier functions. Resumption of interest in logarithmic barrier functions was motivated by their connection with primal-dual interior point methods.


Motivation

Consider the following constrained optimization problem:
:minimize f(x)
:subject to x \le b
where b is some constant. If one wishes to remove the inequality constraint, the problem can be re-formulated as
:minimize f(x) + c(x),
:where c(x) = \infty if x > b, and zero otherwise.
This problem is equivalent to the first. It gets rid of the inequality, but introduces the issue that the penalty function c, and therefore the objective function f(x) + c(x), is discontinuous, preventing the use of calculus to solve it. A barrier function, now, is a continuous approximation g to c that tends to infinity as x approaches b from below. Using such a function, a new optimization problem is formulated, viz.
:minimize f(x) + \mu g(x, b)
where \mu > 0 is a free parameter. This problem is not equivalent to the original, but as \mu approaches zero, it becomes an ever-better approximation.
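The effect of shrinking \mu can be sketched numerically. The following is a minimal illustration, assuming a toy objective f(x) = (x - 2)^2 with the constraint x \le 1 (both made up for this sketch) and a plain golden-section search as the unconstrained minimizer:

```python
import math

def f(x):
    # Hypothetical objective: unconstrained minimum at x = 2,
    # which lies outside the feasible set x <= 1.
    return (x - 2.0) ** 2

def g(x, b):
    # Logarithmic barrier: finite for x < b, infinite at and past the boundary.
    return -math.log(b - x) if x < b else float("inf")

def minimize_1d(obj, lo, hi, iters=200):
    # Golden-section search on [lo, hi]; assumes obj is unimodal there
    # (true here, since f and g are both convex on the interval).
    phi = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    for _ in range(iters):
        c = b - phi * (b - a)
        d = a + phi * (b - a)
        if obj(c) < obj(d):
            b = d
        else:
            a = c
    return (a + b) / 2.0

b = 1.0
for mu in (1.0, 0.1, 0.01, 0.001):
    x_star = minimize_1d(lambda x: f(x) + mu * g(x, b), -3.0, b - 1e-9)
    print(f"mu={mu:7.3f}  x*={x_star:.4f}")
# As mu shrinks, x* approaches the constrained optimum at the boundary x = 1.
```

Setting the first-order condition 2(x - 2) + \mu/(1 - x) = 0 shows the minimizer sits roughly \mu/2 inside the boundary for small \mu, matching the printed values.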


Logarithmic barrier function

For logarithmic barrier functions, g(x, b) is defined as -\log(b-x) when x < b and \infty otherwise (in one dimension; see below for a definition in higher dimensions). This essentially relies on the fact that \log(t) tends to negative infinity as t tends to 0. This introduces a gradient to the function being optimized which favors less extreme values of x (in this case values lower than b), while having relatively low impact on the function away from these extremes. Logarithmic barrier functions may be favored over the less computationally expensive inverse barrier functions depending on the function being optimized.
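The "low impact away from the extremes" behaviour can be read off the barrier's derivative, d/dx [-\log(b-x)] = 1/(b-x), which is small far from the boundary and blows up near it. A small sketch (the sample points are arbitrary):

```python
import math

def log_barrier(x, b):
    # -log(b - x) for x < b; infinite at and beyond the boundary x = b.
    return -math.log(b - x) if x < b else float("inf")

def log_barrier_grad(x, b):
    # d/dx [-log(b - x)] = 1/(b - x): small far from b, huge near it.
    return 1.0 / (b - x)

b = 1.0
for x in (-9.0, 0.0, 0.9, 0.99, 0.999):
    print(f"x={x:7.3f}  barrier={log_barrier(x, b):8.3f}  "
          f"grad={log_barrier_grad(x, b):10.3f}")
```

Far from the boundary (x = -9) the gradient is 0.1; at x = 0.999 it is 1000, so the barrier only dominates the objective near the constraint.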


Higher dimensions

Extending to higher dimensions is simple, provided each dimension is independent. For each variable x_i which should be limited to be strictly lower than b_i, add -\log(b_i-x_i).
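A minimal sketch of this per-coordinate sum (the function name is made up for illustration):

```python
import math

def multi_log_barrier(x, b):
    # Sum of independent one-dimensional barriers -log(b_i - x_i);
    # infinite if any coordinate is at or past its bound.
    total = 0.0
    for xi, bi in zip(x, b):
        if xi >= bi:
            return float("inf")  # infeasible in this coordinate
        total += -math.log(bi - xi)
    return total

print(multi_log_barrier([0.0, 0.5], [1.0, 1.0]))  # finite: strictly feasible
print(multi_log_barrier([0.5, 2.0], [1.0, 1.0]))  # inf: second bound violated
```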


Formal definition

Minimize \mathbf{c}^T x subject to \mathbf{a}_i^T x \le b_i, i = 1, \ldots, m. Assume strict feasibility: \{x \mid Ax < b\} \ne \emptyset. Define the logarithmic barrier
:\Phi(x) = \begin{cases} \sum_{i=1}^m -\log(b_i - \mathbf{a}_i^T x) & \text{if } Ax < b \\ +\infty & \text{otherwise.} \end{cases}
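A small sketch of \Phi for a system Ax \le b; the box constraints 0 < x_1 < 1, 0 < x_2 < 1 below are a made-up example, written as four rows of A:

```python
import math

def lp_log_barrier(A, b, x):
    # Phi(x) = sum_i -log(b_i - a_i^T x) when Ax < b, +inf otherwise.
    total = 0.0
    for a_i, b_i in zip(A, b):
        slack = b_i - sum(aij * xj for aij, xj in zip(a_i, x))
        if slack <= 0:
            return float("inf")  # x is not strictly feasible
        total += -math.log(slack)
    return total

# Box 0 < x1 < 1, 0 < x2 < 1 as Ax <= b:
#   x1 <= 1, -x1 <= 0, x2 <= 1, -x2 <= 0
A = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
b = [1.0, 0.0, 1.0, 0.0]
print(lp_log_barrier(A, b, [0.5, 0.5]))  # all four slacks are 0.5
print(lp_log_barrier(A, b, [1.5, 0.5]))  # infeasible: inf
```

At the center of the box every slack is 0.5, so \Phi = 4 \log 2; \Phi grows without bound as x approaches any face.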


See also

* Penalty method
* Augmented Lagrangian method


External links


Lecture 14: Barrier method
from Professor Lieven Vandenberghe of UCLA