Value Function
The value function of an optimization problem gives the value attained by the objective function at a solution, while depending only on the parameters of the problem. In a controlled dynamical system, the value function represents the optimal payoff of the system over the interval [t, t_1] when started at the time-t state variable x(t)=x. If the objective function represents some cost that is to be minimized, the value function can be interpreted as the cost to finish the optimal program, and is thus referred to as the "cost-to-go function." In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function. In a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. Given (t_0, x_0) \in [0, t_1] \times \mathbb{R}^n, a typical optimal control problem is to
: \text{maximize} \quad J(t_0, x_0; u) = \int_{t_0}^{t_1} I(t, x(t), u(t)) \, \mathrm{d}t ...
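The formula above is cut off; as a sketch in standard notation (the admissible control set U and the dynamics f are notational assumptions here, not quoted from the excerpt), the value function is then
: V(t, x) = \sup_{u \in U} \int_{t}^{t_1} I(\tau, x(\tau), u(\tau)) \, \mathrm{d}\tau \quad \text{subject to} \quad \dot{x}(\tau) = f(\tau, x(\tau), u(\tau)), \; x(t) = x,
so that V(t_0, x_0) is the best payoff attainable from the initial data of the problem above.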
Optimization Problem
In mathematics, engineering, computer science and economics, an optimization problem is the problem of finding the ''best'' solution from all feasible solutions. Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
* An optimization problem with discrete variables is known as a ''discrete optimization'' problem, in which an object such as an integer, permutation or graph must be found from a countable set.
* A problem with continuous variables is known as a ''continuous optimization'' problem, in which an optimal value from a continuous function must be found. They can include constrained problems and multimodal problems (both categories are illustrated in the sketch following this entry).
Search space
In the context of an optimization ...
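A minimal sketch of the two categories above (illustrative Python, not from the excerpt; all names and data are hypothetical): the discrete case searches a countable set by brute force, while the continuous case approximates the optimum of a continuous function over the reals.

    # Discrete optimization: the best object is found from a countable set,
    # here by exhaustive search over permutations of three city indices.
    from itertools import permutations

    distances = {(0, 1): 4, (1, 0): 4, (0, 2): 7,
                 (2, 0): 7, (1, 2): 3, (2, 1): 3}

    def tour_length(tour):
        return sum(distances[(tour[i], tour[i + 1])]
                   for i in range(len(tour) - 1))

    best_tour = min(permutations(range(3)), key=tour_length)

    # Continuous optimization: an optimal value of a continuous function,
    # here approximated by evaluating f on a fine grid of real numbers.
    def f(x):
        return (x - 1.5) ** 2 + 2.0

    grid = [i / 1000.0 for i in range(-5000, 5000)]
    best_x = min(grid, key=f)

    print(best_tour, round(best_x, 3))  # e.g. (0, 1, 2) and 1.5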
Envelope Theorem
In mathematics and economics, the envelope theorem is a major result about the differentiability properties of the value function of a parameterized optimization problem. As we change parameters of the objective, the envelope theorem shows that, in a certain sense, changes in the optimizer of the objective do not contribute to the change in the objective function. The envelope theorem is an important tool for comparative statics of optimization models. The term envelope derives from describing the graph of the value function as the "upper envelope" of the graphs of the parameterized family of functions \left\{ f(x, \cdot) \right\}_{x \in X} that are optimized.
Statement
Let f(x, \alpha) and g_j(x, \alpha), j = 1, 2, \ldots, m, be real-valued continuously differentiable functions on \mathbb{R}^{n+l}, where x \in \mathbb{R}^n are choice variables and \alpha \in \mathbb{R}^l are parameters, and consider the problem of choosing x, for a given \alpha, so as to:
: \max_x f(x, \alpha) subject to g_j(x, \alpha) \geq 0, \; j = 1, 2, \ldots, m, and ...
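The statement is truncated above; its standard conclusion (supplied here as a sketch, with \mathcal{L} the Lagrangian of the constrained problem and x^*(\alpha), \lambda^*(\alpha) an optimal solution and multiplier vector, notation assumed rather than quoted) is that the value function V(\alpha) = f(x^*(\alpha), \alpha) satisfies
: \frac{\partial V}{\partial \alpha_k}(\alpha) = \frac{\partial \mathcal{L}}{\partial \alpha_k}\big(x^*(\alpha), \lambda^*(\alpha), \alpha\big), \qquad \mathcal{L}(x, \lambda, \alpha) = f(x, \alpha) + \sum_{j=1}^{m} \lambda_j \, g_j(x, \alpha),
i.e., only the direct effect of \alpha matters at the optimum, not its indirect effect through the optimizer x^*(\alpha).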
Lyapunov Function
In the theory of ordinary differential equations (ODEs), Lyapunov functions, named after Aleksandr Lyapunov, are scalar functions that may be used to prove the stability of an equilibrium of an ODE. The method of Lyapunov functions (also called Lyapunov's second method for stability) is important to stability theory of dynamical systems and control theory. A similar concept appears in the theory of general state-space Markov chains, usually under the name Foster–Lyapunov functions. For certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability. Whereas there is no general technique for constructing Lyapunov functions for ODEs, in many specific cases their construction is known. For instance, quadratic functions suffice for systems with one state, the solution of a particular linear matrix inequality provides Lyapunov functions for linear systems, and conservation laws can often be used to construct Lyapunov functions ...
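A simple illustration (not part of the excerpt): for the one-state system \dot{x} = -x, the quadratic function V(x) = x^2 is a Lyapunov function, since V(x) > 0 for x \neq 0 and, along trajectories,
: \dot{V}(x) = 2 x \dot{x} = -2 x^2 < 0 \quad \text{for } x \neq 0,
which proves that the equilibrium x = 0 is asymptotically stable.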
Online Algorithm
In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. In operations research, the area in which online algorithms are developed is called online optimization. As an example, consider the sorting algorithms selection sort and insertion sort: selection sort repeatedly selects the minimum element from the unsorted remainder and places it at the front, which requires access to the entire input; it is thus an offline algorithm ...
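A minimal sketch of the contrast (illustrative code, not from the excerpt; the function name is hypothetical): insertion sort can maintain a sorted prefix while elements arrive one at a time, which is why it is considered an online algorithm.

    def insertion_sort_online(stream):
        """Maintain a sorted list while elements arrive one by one."""
        sorted_so_far = []
        for x in stream:                      # elements arrive serially
            i = len(sorted_so_far)
            sorted_so_far.append(x)
            # shift larger elements right to insert x in its place
            while i > 0 and sorted_so_far[i - 1] > x:
                sorted_so_far[i] = sorted_so_far[i - 1]
                i -= 1
            sorted_so_far[i] = x
            # invariant: the elements seen so far are fully sorted
        return sorted_so_far

    print(insertion_sort_online(iter([5, 2, 4, 1, 3])))  # [1, 2, 3, 4, 5]

Selection sort cannot be written this way: choosing the global minimum at each step requires the entire input up front.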
Viscosity Solution
In mathematics, the viscosity solution concept was introduced in the early 1980s by Pierre-Louis Lions and Michael G. Crandall as a generalization of the classical concept of what is meant by a 'solution' to a partial differential equation (PDE). It has been found that the viscosity solution is the natural solution concept to use in many applications of PDEs, including, for example, first-order equations arising in dynamic programming (the Hamilton–Jacobi–Bellman equation), differential games (the Hamilton–Jacobi–Isaacs equation) or front evolution problems, as well as second-order equations such as the ones arising in stochastic optimal control or stochastic differential games. The classical concept was that a PDE
: F(x, u, Du, D^2 u) = 0
over a domain x \in \Omega has a solution if we can find a function ''u''(''x'') continuous and sufficiently differentiable over the entire domain such that x, u, Du, D^2 u satisfy the above equation at every point. If a scalar equation is degenerate elliptic ...
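For orientation, the generalized concept can be sketched as follows (a standard definition under the usual sign convention, not quoted in the excerpt): an upper semicontinuous function u is a viscosity subsolution if, for every smooth test function \varphi and every local maximum point x_0 of u - \varphi,
: F\big(x_0, u(x_0), D\varphi(x_0), D^2\varphi(x_0)\big) \leq 0,
a lower semicontinuous u is a viscosity supersolution if the reverse inequality holds at local minima of u - \varphi, and a viscosity solution is both; derivatives of u itself are thus never required to exist.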
Newton Notation
In differential calculus, there is no single standard notation for differentiation. Instead, several notations for the derivative of a function or a dependent variable have been proposed by various mathematicians, including Leibniz, Newton, Lagrange, and Arbogast. The usefulness of each notation depends on the context in which it is used, and it is sometimes advantageous to use more than one notation in a given context. For more specialized settings, such as partial derivatives in multivariable calculus, tensor analysis, or vector calculus, other notations, such as subscript notation or the ∇ operator, are common. The most common notations for differentiation (and its opposite operation, antidifferentiation or indefinite integration) are listed below.
Leibniz's notation
The original notation employed by Gottfried Leibniz is used throughout mathematics. It is particularly common when the equation is regarded as a functional relationship between dependent and independent variables ...
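As a quick side-by-side (standard usage, condensed here rather than quoted from the excerpt), the derivative of a dependent variable y appears as
: \frac{\mathrm{d}y}{\mathrm{d}x} \;\; \text{(Leibniz)}, \qquad y' \;\; \text{(Lagrange)}, \qquad \dot{y} \;\; \text{(Newton, conventionally for derivatives with respect to time)}, \qquad D_x y \;\; \text{(Euler/Arbogast)}.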
Costate Equation
The costate equation is related to the state equation used in optimal control. It is also referred to as the auxiliary, adjoint, influence, or multiplier equation. It is stated as a vector of first-order differential equations
: \dot{\lambda}^{\mathsf{T}}(t) = -\frac{\partial H}{\partial x(t)}
where the right-hand side is the vector of partial derivatives of the negative of the Hamiltonian with respect to the state variables.
Interpretation
The costate variables \lambda(t) can be interpreted as Lagrange multipliers associated with the state equations. The state equations represent constraints of the minimization problem, and the costate variables represent the marginal cost of violating those constraints; in economic terms the costate variables are the shadow prices.
Solution
The state equation is subject to an initial condition and is solved forwards in time. The costate equation must satisfy a transversality condition and is solved backwards in time, from the final time towards the beginning. For more details see Pontryagin's maximum principle ...
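Together with the state equation this forms a two-point boundary value problem; a sketch in common notation (terminal time t_1 and terminal payoff \phi are notational assumptions here, not quoted from the excerpt):
: \dot{x}(t) = \frac{\partial H}{\partial \lambda}, \quad x(t_0) = x_0; \qquad \dot{\lambda}^{\mathsf{T}}(t) = -\frac{\partial H}{\partial x}, \quad \lambda^{\mathsf{T}}(t_1) = \frac{\partial \phi}{\partial x}\bigg|_{x(t_1)},
where the terminal condition on \lambda is the transversality condition referred to above.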
Hamiltonian (control Theory)
The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Inspired by, but distinct from, the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.
Problem statement and definition of the Hamiltonian
Consider a dynamical system of n first-order differential equations
: \dot{\mathbf{x}}(t) = \mathbf{a}(\mathbf{x}(t), \mathbf{u}(t), t)
where \mathbf{x}(t) = \left[ x_1(t), x_2(t), \ldots, x_n(t) \right]^{\mathsf{T}} denotes a vector of state variables, and \mathbf{u}(t) = \left[ u_1(t), u_2(t), \ldots, u_r(t) \right]^{\mathsf{T}} a vector of control variables. Once initial conditions \mathbf{x}(t_0) = \mathbf{x}_0 and controls ...
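The definition itself is cut off above; in the standard formulation (with running objective I as in the value-function entry, and costate vector \boldsymbol{\lambda}(t), both notational assumptions here) the Hamiltonian is
: H\big(\mathbf{x}(t), \mathbf{u}(t), \boldsymbol{\lambda}(t), t\big) \equiv I\big(\mathbf{x}(t), \mathbf{u}(t), t\big) + \boldsymbol{\lambda}^{\mathsf{T}}(t)\, \mathbf{a}\big(\mathbf{x}(t), \mathbf{u}(t), t\big),
and the maximum principle requires the optimal control \mathbf{u}^*(t) to optimize H at each instant.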
Hamilton–Jacobi–Bellman Equation
The Hamilton–Jacobi–Bellman (HJB) equation is a nonlinear partial differential equation that provides necessary and sufficient conditions for optimality of a control with respect to a loss function. Its solution is the value function of the optimal control problem which, once known, can be used to obtain the optimal control by taking the maximizer (or minimizer) of the Hamiltonian involved in the HJB equation. The equation is a result of the theory of dynamic programming, which was pioneered in the 1950s by Richard Bellman and coworkers. The connection to the Hamilton–Jacobi equation from classical physics was first drawn by Rudolf Kálmán. In discrete-time problems, the analogous difference equation is usually referred to as the Bellman equation. While classical variational problems, such as the brachistochrone problem, can be solved using the Hamilton–Jacobi–Bellman equation, the method can be applied to a broader spectrum of problems. Furthermore, it can be generalized to ...
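In the notation of the preceding entries (a standard form supplied as a sketch, with terminal payoff \phi assumed rather than quoted from the excerpt), the HJB equation for the value function V(x, t) of a maximization problem reads
: \frac{\partial V}{\partial t}(x, t) + \max_{u} \left\{ I(x, u, t) + \frac{\partial V}{\partial x}(x, t)^{\mathsf{T}} \, a(x, u, t) \right\} = 0, \qquad V(x, t_1) = \phi(x),
solved backwards in time from the terminal condition.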
Partial Differential Equation
In mathematics, a partial differential equation (PDE) is an equation which involves a multivariable function and one or more of its partial derivatives. The function is often thought of as an "unknown" that solves the equation, similar to how x is thought of as an unknown number solving, e.g., an algebraic equation like x^2 - 3x + 2 = 0. However, it is usually impossible to write down explicit formulae for solutions of partial differential equations. There is correspondingly a vast amount of modern mathematical and scientific research on methods to numerically approximate solutions of certain partial differential equations using computers. Partial differential equations also occupy a large sector of pure mathematical research, in which the usual questions are, broadly speaking, on the identification of general qualitative features of solutions of various partial differential equations, such as existence, uniqueness, regularity and stability. Among the many open questions are the existence ...
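A concrete example (illustrative, not from the excerpt): the one-dimensional heat equation
: \frac{\partial u}{\partial t} = \alpha \, \frac{\partial^2 u}{\partial x^2}
is a PDE in the unknown function u(x, t); it relates the time rate of change of u to its second partial derivative in space, with \alpha a diffusivity constant.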