
In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints or the objective function are nonlinear. An optimization problem is one of calculation of the extrema (maxima, minima or stationary points) of an objective function over a set of unknown real variables, subject to the satisfaction of a system of equalities and inequalities, collectively termed constraints. It is the sub-field of mathematical optimization that deals with problems that are not linear.


Applicability

A typical non-convex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivities and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker, river barge, or coastal tankship. Owing to economic batch size, the cost functions may have discontinuities in addition to smooth changes.

In experimental science, some simple data analysis (such as fitting a spectrum with a sum of peaks of known location and shape but unknown magnitude) can be done with linear methods, but in general these problems are nonlinear. Typically, one has a theoretical model of the system under study with variable parameters in it and a model of the experiment or experiments, which may also have unknown parameters. One tries to find a best fit numerically. In this case one often wants a measure of the precision of the result, as well as the best fit itself.
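As a sketch of this distinction, the following Python fragment fits a single synthetic Gaussian peak: with the peak shape known, the unknown amplitude enters linearly and has a closed-form least-squares solution, while an unknown width makes the fit nonlinear (handled here by a crude one-dimensional scan). All names and numerical values are illustrative, not from the text.

```python
import math

# Synthetic "spectrum": one Gaussian peak with known centre c and width w
# and unknown amplitude a_true (all values illustrative).
c, w, a_true = 5.0, 1.0, 3.0
xs = [i * 0.1 for i in range(101)]
ys = [a_true * math.exp(-((x - c) ** 2) / (2 * w ** 2)) for x in xs]

# Known shape -> the amplitude enters linearly, so ordinary least squares
# gives it in closed form: a = sum(phi*y) / sum(phi*phi).
phi = [math.exp(-((x - c) ** 2) / (2 * w ** 2)) for x in xs]
a_fit = sum(p * y for p, y in zip(phi, ys)) / sum(p * p for p in phi)

# Unknown width -> the problem is nonlinear in the parameters; here we
# resort to a crude 1-D scan over candidate widths to minimise the
# sum of squared residuals.
def sse(w_try):
    model = [a_true * math.exp(-((x - c) ** 2) / (2 * w_try ** 2)) for x in xs]
    return sum((m - y) ** 2 for m, y in zip(model, ys))

w_fit = min((0.5 + 0.01 * k for k in range(101)), key=sse)
print(round(a_fit, 3), round(w_fit, 2))  # prints: 3.0 1.0
```

On noise-free data both approaches recover the true parameters exactly; with noisy data the nonlinear scan would be replaced by a proper iterative method such as Gauss–Newton.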


Definition

Let ''n'', ''m'', and ''p'' be positive integers. Let ''X'' be a subset of ''R''^''n'', and let ''f'', ''g_i'', and ''h_j'' be real-valued functions on ''X'' for each ''i'' in {1, …, ''m''} and each ''j'' in {1, …, ''p''}, with at least one of ''f'', ''g_i'', and ''h_j'' being nonlinear. A nonlinear minimization problem is an optimization problem of the form

:\begin{align}
\text{minimize } \quad & f(x) \\
\text{subject to } \quad & g_i(x) \leq 0 \text{ for each } i \in \{1, \dotsc, m\} \\
& h_j(x) = 0 \text{ for each } j \in \{1, \dotsc, p\} \\
& x \in X.
\end{align}

A nonlinear maximization problem is defined in a similar way.
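As a sketch of how such a program might be represented in practice, the following Python fragment encodes one small concrete instance of the general form above, with ''n'' = 2, ''m'' = 1, ''p'' = 1, and ''X'' = ''R''^2. The particular ''f'', ''g''_1, and ''h''_1 are hypothetical, chosen only to illustrate the representation.

```python
# A concrete instance of the general nonlinear program: one nonlinear
# objective, one inequality constraint, one equality constraint.
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2   # nonlinear objective
g = [lambda x: x[0] ** 2 + x[1] ** 2 - 9]          # inequality: g_1(x) <= 0
h = [lambda x: x[0] + x[1] - 3]                    # equality:   h_1(x) = 0

def feasible(x, tol=1e-8):
    # A point is feasible iff every g_i(x) <= 0 and every h_j(x) = 0
    # (to within a numerical tolerance).
    return all(gi(x) <= tol for gi in g) and all(abs(hj(x)) <= tol for hj in h)

print(feasible((1.0, 2.0)), feasible((4.0, -1.0)))  # prints: True False
```

A solver would then search ''X'' for a feasible point minimizing ''f''; this fragment only fixes the data of the problem.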


Possible types of constraint set

There are several possibilities for the nature of the constraint set, also known as the feasible set or feasible region. An ''infeasible'' problem is one for which no set of values for the choice variables satisfies all the constraints. That is, the constraints are mutually contradictory, and no solution exists; the feasible set is the empty set. A ''feasible'' problem is one for which there exists at least one set of values for the choice variables satisfying all the constraints. An ''unbounded'' problem is a feasible problem for which the objective function can be made better than any given finite value. Thus there is no optimal solution, because there is always a feasible solution that gives a better objective function value than does any given proposed solution.
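In one variable, minimal (linear) examples of the three cases are:

:\begin{align}
&\min_{x \in \mathbb{R}} x \quad \text{subject to } x \ge 1,\; x \le 0
  && \text{(infeasible: the constraints contradict each other)} \\
&\min_{x \in \mathbb{R}} x \quad \text{subject to } x \ge 1
  && \text{(feasible and bounded: the minimum is attained at } x = 1\text{)} \\
&\min_{x \in \mathbb{R}} x \quad \text{subject to } x \le 0
  && \text{(unbounded: } f(x) = x \text{ can be made arbitrarily small)}
\end{align}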


Methods for solving the problem

If the objective function is concave (maximization problem) or convex (minimization problem) and the constraint set is convex, then the program is called convex, and general methods from convex optimization can be used in most cases. If the objective function is quadratic and the constraints are linear, quadratic programming techniques are used. If the objective function is a ratio of a concave and a convex function (in the maximization case) and the constraints are convex, then the problem can be transformed into a convex optimization problem using fractional programming techniques.

Several methods are available for solving nonconvex problems. One approach is to use special formulations of linear programming problems. Another method involves the use of branch and bound techniques, where the program is divided into subclasses to be solved with convex (minimization problem) or linear approximations that form a lower bound on the overall cost within the subdivision. With subsequent divisions, at some point an actual solution will be obtained whose cost is equal to the best lower bound obtained for any of the approximate solutions. This solution is optimal, although possibly not unique. The algorithm may also be stopped early, with the assurance that the best possible solution is within a tolerance of the best point found; such points are called ε-optimal. Terminating at ε-optimal points is typically necessary to ensure finite termination. This is especially useful for large, difficult problems and problems with uncertain costs or values where the uncertainty can be estimated with an appropriate reliability estimate.

Under differentiability and constraint qualifications, the Karush–Kuhn–Tucker (KKT) conditions provide necessary conditions for a solution to be optimal. Under convexity, these conditions are also sufficient. If some of the functions are non-differentiable, subdifferential versions of the KKT conditions are available.
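For the minimization problem stated in the Definition section, with multipliers μ_''i'' for the inequality constraints and λ_''j'' for the equality constraints, the KKT conditions at a candidate point ''x''* read:

:\begin{align}
&\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*)
  + \sum_{j=1}^{p} \lambda_j \nabla h_j(x^*) = 0
  && \text{(stationarity)} \\
&g_i(x^*) \le 0, \quad h_j(x^*) = 0
  && \text{(primal feasibility)} \\
&\mu_i \ge 0
  && \text{(dual feasibility)} \\
&\mu_i \, g_i(x^*) = 0
  && \text{(complementary slackness)}
\end{align}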


Numerical examples


2-dimensional example

A simple problem (shown in the diagram) can be defined by the constraints

:''x''1 ≥ 0
:''x''2 ≥ 0
:''x''1^2 + ''x''2^2 ≥ 1
:''x''1^2 + ''x''2^2 ≤ 2

with an objective function to be maximized

:''f''(x) = ''x''1 + ''x''2

where x = (''x''1, ''x''2).
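The feasible region here is a quarter annulus, and the maximum ''f'' = 2 is attained at ''x'' = (1, 1) on the outer circle. One way to see this numerically is a brute-force grid scan, sketched below; this is for illustration only, since grid search scales badly and practical NLP solvers use gradient-based methods instead.

```python
import math

# Scan a uniform grid over [0, 1.5] x [0, 1.5]; the grid itself enforces
# x1 >= 0 and x2 >= 0, and the ring constraints are checked explicitly.
best, best_x = -math.inf, None
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        x1, x2 = 1.5 * i / steps, 1.5 * j / steps
        if 1.0 <= x1 ** 2 + x2 ** 2 <= 2.0:   # ring constraints
            val = x1 + x2                      # objective f(x) = x1 + x2
            if val > best:
                best, best_x = val, (x1, x2)

print(best, best_x)   # close to the true optimum f = 2 at x = (1, 1)
```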


3-dimensional example

Another simple problem (see diagram) can be defined by the constraints

:''x''1^2 − ''x''2^2 + ''x''3^2 ≤ 2
:''x''1^2 + ''x''2^2 + ''x''3^2 ≤ 10

with an objective function to be maximized

:''f''(x) = ''x''1''x''2 + ''x''2''x''3

where x = (''x''1, ''x''2, ''x''3).
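The objective and both constraints are unchanged when ''x''1 and ''x''3 are swapped, which suggests looking for a symmetric maximizer ''x''1 = ''x''3 = ''u'', ''x''2 = ''v''. Under that symmetry assumption the problem reduces to two variables and can be scanned on a grid; a sketch:

```python
import math

# With x1 = x3 = u and x2 = v the objective becomes f = 2*u*v and the
# constraints become 2u^2 - v^2 <= 2 and 2u^2 + v^2 <= 10.
best, best_uv = -math.inf, None
N = 800
for i in range(N + 1):
    for j in range(N + 1):
        u, v = 3.2 * i / N, 3.2 * j / N
        if 2 * u * u - v * v <= 2.0 and 2 * u * u + v * v <= 10.0:
            val = 2 * u * v               # f = x1*x2 + x2*x3 = 2uv
            if val > best:
                best, best_uv = val, (u, v)

# Under this symmetry the analytic maximum is 5*sqrt(2) ~ 7.071,
# attained at u = sqrt(5/2), v = sqrt(5).
print(best, best_uv)
```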


See also

* Curve fitting
* Least squares minimization
* Linear programming
* nl (format)
* Nonlinear least squares
* List of optimization software
* Quadratically constrained quadratic programming
* Werner Fenchel, who created the foundation for nonlinear programming




Further reading

* Avriel, Mordecai (2003). ''Nonlinear Programming: Analysis and Methods.'' Dover Publishing.
* Bazaraa, Mokhtar S. and Shetty, C. M. (1979). ''Nonlinear Programming: Theory and Algorithms.'' John Wiley & Sons.
* Nocedal, Jorge and Wright, Stephen J. (1999). ''Numerical Optimization.'' Springer.
* Brinkhuis, Jan and Tikhomirov, Vladimir (2005). ''Optimization: Insights and Applications.'' Princeton University Press.


External links


Mathematical Programming Glossary