Active Set
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem. An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints g_1(x) \ge 0, \dots, g_k(x) \ge 0 that define the feasible region, that is, the set of all ''x'' over which to search for the optimal solution. Given a point x_0 in the feasible region, a constraint g_i(x) \ge 0 is called active at x_0 if g_i(x_0) = 0, and inactive at x_0 if g_i(x_0) > 0. Equality constraints are always active. The active set at x_0 is made up of those constraints g_i that are active at the current point. The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of ...
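As a minimal illustration of the definition (the constraint functions and the test point below are my own, not from the article), the active set at a point can be found by checking which g_i vanish there:

```python
def active_set(constraints, x, tol=1e-8):
    """Indices i with g_i(x) = 0 (to within tol), under the convention g_i(x) >= 0."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

# Hypothetical example: g_1(x) = x[0] >= 0 and g_2(x) = 1 - x[0] - x[1] >= 0.
gs = [lambda x: x[0], lambda x: 1.0 - x[0] - x[1]]
print(active_set(gs, (0.0, 0.5)))   # [0]: g_1 is active, g_2 is inactive (value 0.5 > 0)
```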
Optimization (mathematics)
Mathematical optimization (alternatively spelled ''optimisation'') or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries. In the more general approach, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function. The generalization of optimization theory and techniques to other formulations constitutes a large area of applied mathematics. Optimization problems ...
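As a small hedged sketch of this idea (the function and interval are illustrative choices of mine), a numerical routine can systematically probe input values from an allowed set and keep the best function value found:

```python
from scipy.optimize import minimize_scalar

# Minimize f(x) = (x - 3)^2 + 1 over the allowed interval [0, 10].
res = minimize_scalar(lambda x: (x - 3.0)**2 + 1.0, bounds=(0.0, 10.0), method='bounded')
print(res.x, res.fun)   # approximately x = 3 with f(x) = 1
```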
Constraint (mathematics)
In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints: primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set. Example The following is a simple optimization problem: \min f(\mathbf x) = x_1^2 + x_2^4 subject to x_1 \ge 1 and x_2 = 1, where \mathbf x denotes the vector (''x''_1, ''x''_2). In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions. Without the constraints, the solution would be (0,0), ...
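For completeness, here is one way the example above could be solved numerically; this is only a sketch using SciPy's general constrained minimizer, not a method prescribed by the article:

```python
import numpy as np
from scipy.optimize import minimize

# minimize x1^2 + x2^4  subject to  x1 >= 1 (inequality) and x2 = 1 (equality)
f = lambda x: x[0]**2 + x[1]**4
cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 1.0},
        {'type': 'eq',   'fun': lambda x: x[1] - 1.0}]

res = minimize(f, x0=np.array([2.0, 2.0]), method='SLSQP', constraints=cons)
print(res.x, res.fun)   # approximately (1, 1) with objective value 2
```

With both hard constraints enforced, the minimizer moves from the unconstrained solution (0, 0) to (1, 1).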
Inequality (mathematics)
In mathematics, an inequality is a relation which makes a non-equal comparison between two numbers or other mathematical expressions. It is used most often to compare two numbers on the number line by their size. The main types of inequality are less than and greater than (denoted by < and >, respectively the less-than and greater-than signs). Notation There are several different notations used to represent different kinds of inequalities:
* The notation ''a'' < ''b'' means that ''a'' is less than ''b''.
* The notation ''a'' > ''b'' means that ''a'' is greater than ''b''.
In either case, ''a'' is not equal to ''b''. These relations are known as strict inequalities, meaning that ''a'' is strictly less than or strictly greater than ''b''. Equality is excluded. In contrast to strict inequalities, there are two types of inequality relations that are not strict:
* The notation ''a'' ≤ ''b'' or ''a'' ⩽ ''b'' or ''a'' ≦ ''b'' means that ''a'' is less than or equal to ''b'' (or, equivalently, at most ''b'', or no ...
Feasible Region
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, potentially including inequalities, equalities, and integer constraints. This is the initial set of candidate solutions to the problem, before the set of candidates has been narrowed down. For example, consider the problem of minimizing the function x^2 + y^4 with respect to the variables x and y, subject to 1 \le x \le 10 and 5 \le y \le 12. Here the feasible set is the set of pairs (''x'', ''y'') in which the value of ''x'' is at least 1 and at most 10 and the value of ''y'' is at least 5 and at most 12. The feasible set of the problem is separate from the objective function, which states the criterion to be optimized and which in the above example is x^2 + y^4. In many problems, the feasible set reflects a constraint that one ...
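A feasibility test for this example is just a pair of interval checks (the helper names below are mine):

```python
def is_feasible(x, y):
    """Membership test for the feasible set {(x, y) : 1 <= x <= 10, 5 <= y <= 12}."""
    return 1 <= x <= 10 and 5 <= y <= 12

def objective(x, y):
    return x**2 + y**4

print(is_feasible(2, 6), objective(2, 6))   # True, 1300
print(is_feasible(0, 6))                    # False: violates 1 <= x
```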
Linear Programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear relationships. Linear programming is a special case of mathematical programming (also known as mathematical optimization). More formally, linear programming is a technique for the optimization of a linear objective function, subject to linear equality and linear inequality constraints. Its feasible region is a convex polytope, which is a set defined as the intersection of finitely many half-spaces, each of which is defined by a linear inequality. Its objective function is a real-valued affine (linear) function defined on this polytope. A linear programming algorithm finds a point in the po ...
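As a hedged illustration (the data are made up for the demo), a small LP can be passed to SciPy's linprog, which expects a minimization with upper-bound inequalities:

```python
from scipy.optimize import linprog

# maximize 3 x0 + 2 x1  subject to  x0 + x1 <= 4,  x0 + 3 x1 <= 6,  x0, x1 >= 0
# linprog minimizes, so the objective coefficients are negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal vertex (4, 0) with maximized value 12
```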
Hyperplane
In geometry, a hyperplane is a generalization of a two-dimensional plane in three-dimensional space to mathematical spaces of arbitrary dimension. Like a plane in space, a hyperplane is a flat hypersurface, a subspace whose dimension is one less than that of the ambient space. Two lower-dimensional examples of hyperplanes are one-dimensional lines in a plane and zero-dimensional points on a line. Most commonly, the ambient space is ''n''-dimensional Euclidean space, in which case the hyperplanes are the (''n'' − 1)-dimensional "flats", each of which separates the space into two half-spaces. A reflection across a hyperplane is a kind of motion (a geometric transformation preserving distance between points), and the group of all motions is generated by the reflections. A convex polytope is the intersection of half-spaces. In non-Euclidean geometry, the ambient space might be the ''n''-dimensional sphere or hyperbolic space, or more generally a pseudo-Riemannian space form, and ...
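A brief numeric sketch of the separation property (the coefficients are chosen arbitrarily): a hyperplane {x : a·x = b} splits space into the half-spaces a·x <= b and a·x >= b, so deciding which side a point lies on is a single inner product:

```python
import numpy as np

a, b = np.array([1.0, -2.0, 0.5]), 1.0   # hyperplane a.x = b in R^3 (illustrative)

def side(x):
    """Return -1, 0 or +1 according to which half-space (or the hyperplane itself) contains x."""
    return int(np.sign(a @ x - b))

print(side(np.array([3.0, 1.0, 0.0])))   # 0: the point satisfies a.x = 1 = b
print(side(np.array([0.0, 0.0, 0.0])))   # -1: a.x = 0 < b
```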
Quadratic Programming
Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize (minimize or maximize) a multivariate quadratic function subject to linear constraints on the variables. Quadratic programming is a type of nonlinear programming. "Programming" in this context refers to a formal procedure for solving mathematical problems. This usage dates to the 1940s and is not specifically tied to the more recent notion of "computer programming." To avoid confusion, some practitioners prefer the term "optimization" (e.g., "quadratic optimization"). Problem formulation The quadratic programming problem with ''n'' variables and ''m'' constraints can be formulated as follows. Given:
* a real-valued, ''n''-dimensional vector ''c'',
* an ''n''×''n''-dimensional real symmetric matrix ''Q'',
* an ''m''×''n''-dimensional real matrix ''A'', and
* an ''m''-dimensional real vector ''b'',
the objective of quadratic programming is to find an ''n''-dimensional vector ''x'', that ...
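A small hedged instance of this formulation (the matrices are invented for the demo, and CVXPY is just one of several libraries that accept QPs directly):

```python
import numpy as np
import cvxpy as cp

# minimize (1/2) x^T Q x + c^T x  subject to  A x <= b
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # symmetric positive definite
c = np.array([-2.0, -5.0])
A = np.array([[1.0, 1.0]])
b = np.array([2.0])

x = cp.Variable(2)
prob = cp.Problem(cp.Minimize(0.5 * cp.quad_form(x, Q) + c @ x), [A @ x <= b])
prob.solve()
print(x.value)   # about (0.25, 1.75): the unconstrained minimizer (1, 2.5) pushed onto A x <= b
```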
Lagrange Multipliers
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints (i.e., subject to the condition that one or more equations have to be satisfied exactly by the chosen values of the variables). It is named after the mathematician Joseph-Louis Lagrange. Summary and rationale The basic idea is to convert a constrained problem into a form such that the derivative test of an unconstrained problem can still be applied. The relationship between the gradient of the function and gradients of the constraints rather naturally leads to a reformulation of the original problem, known as the Lagrangian function or Lagrangian. In the general case, the Lagrangian is defined as \mathcal{L}(x, \lambda) \equiv f(x) + \langle \lambda, g(x)\rangle for functions f, g; the notation \langle \cdot, \cdot \rangle denotes an inner product. The value \lambda is called the Lagrange multiplier. In simple ca ...
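A short worked sketch with a computer algebra system (the objective and constraint are my own illustrative choice): stationary points of the Lagrangian recover the constrained extrema of f(x, y) = x + y on the unit circle g(x, y) = x^2 + y^2 - 1 = 0.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x + y
g = x**2 + y**2 - 1
L = f + lam * g                     # Lagrangian L(x, y, lambda) = f + <lambda, g>

# Setting the gradient of L with respect to (x, y, lambda) to zero gives the candidates.
candidates = sp.solve([sp.diff(L, v) for v in (x, y, lam)], [x, y, lam], dict=True)
print(candidates)                   # (±1/sqrt(2), ±1/sqrt(2)); the + branch is the maximum
```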
Successive Linear Programming
Successive Linear Programming (SLP), also known as Sequential Linear Programming, is an optimization technique for approximately solving nonlinear optimization problems. It is related to, but distinct from, quasi-Newton methods. Starting at some estimate of the optimal solution, the method is based on solving a sequence of first-order approximations (i.e. linearizations) of the model. The linearizations are linear programming problems, which can be solved efficiently. As the linearizations need not be bounded, trust regions or similar techniques are needed to ensure convergence in theory. SLP has been used widely in the petrochemical industry since the 1970s. Since then, however, it has largely been superseded by sequential quadratic programming methods: while solving a QP subproblem takes more time than solving an LP one, the overall decrease in the number of iterations, due to improved convergence, results in significantly lower running times and fewer function evaluations. ...
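The following is only a rough sketch of SLP-style iterations under assumed problem data; the model functions, the box trust region, and the simple radius shrinking are illustrative stand-ins for a proper trust-region update:

```python
import numpy as np
from scipy.optimize import linprog

# Model problem: minimize f(x) subject to c(x) <= 0.
def f(x):      return x[0]**2 + x[1]**2
def grad_f(x): return np.array([2.0 * x[0], 2.0 * x[1]])
def c(x):      return np.array([1.0 - x[0] - x[1]])        # c(x) <= 0
def jac_c(x):  return np.array([[-1.0, -1.0]])

def slp_step(x_k, radius):
    # Linearized subproblem:  min grad_f(x_k)^T d
    #                         s.t. c(x_k) + jac_c(x_k) d <= 0,  |d_i| <= radius
    lp = linprog(c=grad_f(x_k),
                 A_ub=jac_c(x_k), b_ub=-c(x_k),
                 bounds=[(-radius, radius)] * len(x_k))
    return x_k + lp.x

x = np.array([1.5, 1.5])
for k in range(25):
    x = slp_step(x, radius=0.5 * 0.7**k)   # shrinking box keeps the LPs bounded
print(x, f(x))                             # approaches the solution (0.5, 0.5)
```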
Sequential Quadratic Programming
Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization, also known as the Lagrange–Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable, but not necessarily convex. SQP methods solve a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. If the problem is unconstrained, then the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions, or Karush–Kuhn–Tucker conditions, of the problem. Algorithm basics Consider a nonlinear programming problem of the form
:\begin{align} \min_{x} \quad & f(x) \\ \text{s.t.} \quad & h(x) \ge 0 \\ & g(x) = 0, \end{align}
where x \in \mathbb{R}^n, f: \mathbb{R} ...
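Since SciPy's SLSQP routine is itself an SQP-type method, a problem of exactly this form can be handed to it directly; the specific functions below are illustrative choices of mine:

```python
import numpy as np
from scipy.optimize import minimize

# minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
# subject to h(x) = x0 - x1 >= 0  and  g(x) = x0^2 + x1^2 - 2 = 0
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
cons = [{'type': 'ineq', 'fun': lambda x: x[0] - x[1]},
        {'type': 'eq',   'fun': lambda x: x[0]**2 + x[1]**2 - 2.0}]

res = minimize(f, x0=np.array([1.0, 0.0]), method='SLSQP', constraints=cons)
print(res.x, res.fun)   # point on the circle of radius sqrt(2) closest to (2, 1)
```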
Sequential Linear-quadratic Programming
Sequential linear-quadratic programming (SLQP) is an iterative method for nonlinear optimization problems where the objective function and constraints are twice continuously differentiable. Similarly to sequential quadratic programming (SQP), SLQP proceeds by solving a sequence of optimization subproblems. The difference between the two approaches is that:
* in SQP, each subproblem is a quadratic program, with a quadratic model of the objective subject to a linearization of the constraints;
* in SLQP, two subproblems are solved at each step: a linear program (LP) used to determine an active set, followed by an equality-constrained quadratic program (EQP) used to compute the total step (see the sketch of the EQP step below).
This decomposition makes SLQP suitable for large-scale optimization problems, for which efficient LP and EQP solvers are available, these problems being easier to scale than full-fledged quadratic programs. It may be considered related to, but distinct from, quasi-Newton methods. Algorithm basics Con ...
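The EQP phase is the piece that differs most from a generic QP solve, so here is a minimal sketch of just that step under assumed data: once the LP phase has guessed the active set, those constraints are imposed as equalities and the step comes from one symmetric KKT system.

```python
import numpy as np

def eqp_step(H, g, A, b):
    """Solve  min (1/2) d^T H d + g^T d  s.t.  A d = b  via its KKT system."""
    n, m = H.shape[0], A.shape[0]
    K   = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, b])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]            # step d and Lagrange multipliers

# Illustrative data: quadratic model of the objective plus one active, linearized constraint.
H = np.eye(2)
g = np.array([1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([0.0])
d, lam = eqp_step(H, g, A, b)
print(d, lam)                          # d = (-1, 1), multiplier 0
```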
Frank–Wolfe Algorithm
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was originally proposed by Marguerite Frank and Philip Wolfe in 1956. In each iteration, the Frank–Wolfe algorithm considers a linear approximation of the objective function, and moves towards a minimizer of this linear function (taken over the same domain). Problem statement Suppose \mathcal{D} is a compact convex set in a vector space and f \colon \mathcal{D} \to \mathbb{R} is a convex, differentiable real-valued function. The Frank–Wolfe algorithm solves the optimization problem
:Minimize f(\mathbf{x})
:subject to \mathbf{x} \in \mathcal{D}.
Algorithm
:''Initialization:'' Let k \leftarrow 0, and let \mathbf{x}_0 be any point in \mathcal{D}.
:Step 1. ''Direction-finding subproblem:'' Find \mathbf{s}_k solving
::Minimize \mathbf{s}^T \nabla f(\mathbf{x}_k) ...
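A compact sketch of the iteration on a domain where the direction-finding subproblem has a closed form; the target point, the simplex domain, and the step-size rule 2/(k+2) are standard illustrative choices, not taken from this excerpt:

```python
import numpy as np

# minimize f(x) = ||x - p||^2 over the probability simplex D = {x >= 0, sum(x) = 1};
# over D, the linear subproblem min_s s.grad is solved by the vertex e_i with i = argmin(grad).
p = np.array([0.1, 0.5, 0.2])
f = lambda x: float(np.sum((x - p)**2))
grad_f = lambda x: 2.0 * (x - p)

x = np.ones(3) / 3.0                    # x_0: any point of D
for k in range(200):
    s = np.zeros_like(x)
    s[np.argmin(grad_f(x))] = 1.0       # direction-finding subproblem
    gamma = 2.0 / (k + 2.0)
    x = x + gamma * (s - x)             # convex combination stays in D

print(x)                                # close to the Euclidean projection of p onto D
```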