Hamilton–Jacobi–Bellman equation

In optimal control theory, the Hamilton–Jacobi–Bellman (HJB) equation gives a necessary and sufficient condition for optimality of a control with respect to a loss function. It is, in general, a nonlinear partial differential equation in the value function, which means its solution is the value function itself. Once this solution is known, it can be used to obtain the optimal control by taking the maximizer (or minimizer) of the Hamiltonian involved in the HJB equation.

The equation is a result of the theory of dynamic programming, which was pioneered in the 1950s by Richard Bellman and coworkers. The connection to the Hamilton–Jacobi equation from classical physics was first drawn by Rudolf Kálmán. In discrete-time problems, the corresponding difference equation is usually referred to as the Bellman equation.

While classical variational problems, such as the brachistochrone problem, can be solved using the Hamilton–Jacobi–Bellman equation, the method can be applied to a broader spectrum of problems. Further, it can be generalized to stochastic systems, in which case the HJB equation is a second-order elliptic partial differential equation. A major drawback, however, is that the HJB equation admits classical solutions only for a sufficiently smooth value function, which is not guaranteed in most situations. Instead, the notion of a viscosity solution is required, in which conventional derivatives are replaced by (set-valued) subderivatives.


Optimal control problems

Consider the following problem in deterministic optimal control over the time period [0, T]:

: V_T(x(0), 0) = \min_u \left\{ \int_0^T C[x(t), u(t)] \, dt + D[x(T)] \right\}

where C[\cdot] is the scalar cost rate function and D[\cdot] is a function that gives the bequest value at the final state, x(t) is the system state vector, x(0) is assumed given, and u(t) for 0 \leq t \leq T is the control vector that we are trying to find. The system must also be subject to

: \dot{x}(t) = F[x(t), u(t)] \,

where F[\cdot] gives the vector determining physical evolution of the state vector over time.
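
For illustration (this specific instance is an added example, not part of the original problem statement), one minimal choice of the ingredients above is a scalar integrator steered at quadratic cost:

: C[x, u] = \tfrac{1}{2}\left(x^2 + u^2\right), \qquad D[x] = 0, \qquad F[x, u] = u,

so the task is to choose u(t) minimizing \int_0^T \tfrac{1}{2}\left(x(t)^2 + u(t)^2\right) dt subject to \dot{x}(t) = u(t) with x(0) given.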


The partial differential equation

For this simple system (letting V = V_T), the Hamilton–Jacobi–Bellman partial differential equation is

: \frac{\partial V(x,t)}{\partial t} + \min_u \left\{ \frac{\partial V(x,t)}{\partial x} \cdot F(x, u) + C(x, u) \right\} = 0

subject to the terminal condition

: V(x,T) = D(x). \,

The unknown scalar V(x, t) in the above partial differential equation is the Bellman value function, which represents the cost incurred from starting in state x at time t and controlling the system optimally from then until time T.
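
A minimal numerical sketch of this backward-in-time solve, assuming the illustrative scalar choices C(x,u) = (x^2 + u^2)/2, D(x) = 0 and F(x,u) = u from above (explicit Euler in time, an upwind difference in x, and a brute-force minimization over a discretized control set; the grid sizes are arbitrary):

    import numpy as np

    # Minimal finite-difference sketch (not from the article text) for the scalar HJB
    # equation dV/dt + min_u { dV/dx * F(x,u) + C(x,u) } = 0 with V(x, T) = D(x).
    # Illustrative choices: F(x,u) = u, C(x,u) = (x**2 + u**2)/2, D(x) = 0.
    T, nt = 1.0, 200                      # horizon and number of time steps
    xs = np.linspace(-2.0, 2.0, 81)       # state grid
    us = np.linspace(-3.0, 3.0, 61)       # discretized control set
    dt = T / nt
    dx = xs[1] - xs[0]

    V = np.zeros_like(xs)                 # terminal condition V(x, T) = D(x) = 0
    for _ in range(nt):                   # march backwards in time from t = T to t = 0
        # one-sided (upwind) differences for dV/dx, chosen by the sign of F = u
        dV_fwd = np.empty_like(V); dV_bwd = np.empty_like(V)
        dV_fwd[:-1] = (V[1:] - V[:-1]) / dx; dV_fwd[-1] = dV_fwd[-2]
        dV_bwd[1:] = (V[1:] - V[:-1]) / dx; dV_bwd[0] = dV_bwd[1]
        # Hamiltonian minimized over the control grid at every state point
        H = np.full_like(V, np.inf)
        for u in us:
            dVdx = dV_fwd if u >= 0 else dV_bwd
            H = np.minimum(H, dVdx * u + 0.5 * (xs**2 + u**2))
        V = V + dt * H                    # V(x, t - dt) = V(x, t) + dt * min_u {...}

    # V now approximates the value function at t = 0 on the grid xs.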


Deriving the equation

Intuitively, the HJB equation can be derived as follows. If V(x(t), t) is the optimal cost-to-go function (also called the 'value function'), then by Richard Bellman's principle of optimality, going from time t to t + dt, we have

: V(x(t), t) = \min_u \left\{ V(x(t+dt), t+dt) + C(x(t), u(t)) \, dt \right\}.

Note that the Taylor expansion of the first term on the right-hand side is

: V(x(t+dt), t+dt) = V(x(t), t) + \frac{\partial V(x,t)}{\partial t} \, dt + \frac{\partial V(x,t)}{\partial x} \cdot \dot{x}(t) \, dt + o(dt),

where o(dt) denotes the terms in the Taylor expansion of higher order than one in little-o notation. Then if we subtract V(x(t), t) from both sides, divide by dt, and take the limit as dt approaches zero, we obtain the HJB equation defined above.
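
Spelling out that final step (added here for clarity), substitute the expansion and \dot{x}(t) = F(x(t), u(t)) into the principle of optimality:

: V(x(t), t) = \min_u \left\{ V(x(t), t) + \frac{\partial V(x,t)}{\partial t} \, dt + \frac{\partial V(x,t)}{\partial x} \cdot F(x(t), u(t)) \, dt + C(x(t), u(t)) \, dt + o(dt) \right\}.

Because V(x(t), t) and \frac{\partial V(x,t)}{\partial t} \, dt do not depend on u, they can be taken out of the minimization; subtracting V(x(t), t) from both sides, dividing by dt, and letting dt \to 0 yields

: \frac{\partial V(x,t)}{\partial t} + \min_u \left\{ \frac{\partial V(x,t)}{\partial x} \cdot F(x, u) + C(x, u) \right\} = 0,

which is the HJB equation stated above.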


Solving the equation

The HJB equation is usually solved backwards in time, starting from t = T and ending at t = 0. When solved over the whole of state space and V(x) is continuously differentiable, the HJB equation is a necessary and sufficient condition for an optimum when the terminal state is unconstrained. If we can solve for V then we can find from it a control u that achieves the minimum cost.

In the general case, the HJB equation does not have a classical (smooth) solution. Several notions of generalized solutions have been developed to cover such situations, including viscosity solutions (Pierre-Louis Lions and Michael Crandall), minimax solutions, and others.

Approximate dynamic programming has been introduced by D. P. Bertsekas and J. N. Tsitsiklis with the use of artificial neural networks (multilayer perceptrons) for approximating the Bellman function in general. This is an effective mitigation strategy for reducing the impact of dimensionality: instead of memorizing the complete function mapping over the whole state-space domain, one need only store the parameters of the neural network. In particular, for continuous-time systems, an approximate dynamic programming approach that combines policy iterations with neural networks was introduced. In discrete time, an approach to solving the HJB equation combining value iterations and neural networks was introduced. Alternatively, it has been shown that sum-of-squares optimization can yield an approximate polynomial solution to the Hamilton–Jacobi–Bellman equation arbitrarily well with respect to the L^1 norm.
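
As a rough illustration of the function-approximation idea (a sketch only, not the method of any of the works cited above; a quadratic polynomial fitted by least squares stands in for a neural network, and the discrete-time dynamics and costs are the illustrative ones used earlier):

    import numpy as np

    # Fitted (approximate) value iteration in discrete time: instead of tabulating
    # V over the whole state space, fit a small parametric model to Bellman targets.
    # A quadratic polynomial in x stands in for a neural network here; the dynamics
    # x_{k+1} = x_k + dt * u_k and stage cost dt * (x^2 + u^2)/2 are illustrative.
    dt, n_steps = 0.01, 100
    xs = np.linspace(-2.0, 2.0, 41)            # sample states used for fitting
    us = np.linspace(-3.0, 3.0, 61)            # candidate controls

    def features(x):
        return np.stack([np.ones_like(x), x, x**2], axis=-1)

    theta = np.zeros(3)                        # V(x, T) = 0, i.e. all weights zero
    for _ in range(n_steps):                   # backward in time
        # Bellman targets: min over u of stage cost plus approximate cost-to-go
        x_next = xs[:, None] + dt * us[None, :]             # shape (len(xs), len(us))
        stage = dt * 0.5 * (xs[:, None]**2 + us[None, :]**2)
        v_next = features(x_next) @ theta                    # V at successor states
        targets = np.min(stage + v_next, axis=1)
        # refit the approximator to the new targets (least squares)
        theta, *_ = np.linalg.lstsq(features(xs), targets, rcond=None)

    # features(x) @ theta now approximates the value function at the initial time.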


Extension to stochastic problems

The idea of solving a control problem by applying Bellman's principle of optimality and then working out backwards in time an optimizing strategy can be generalized to stochastic control problems. Consider a problem similar to the one above:

: \min_u \mathbb{E} \left\{ \int_0^T C(t, X_t, u_t) \, dt + D(X_T) \right\}

now with (X_t)_{t \in [0,T]} the stochastic process to optimize and (u_t)_{t \in [0,T]} the steering. By first using Bellman and then expanding V(X_t, t) with Itô's rule, one finds the stochastic HJB equation

: \min_u \left\{ \mathcal{A} V(x,t) + C(t, x, u) \right\} = 0,

where \mathcal{A} represents the stochastic differentiation operator, and subject to the terminal condition

: V(x,T) = D(x).

Note that the randomness has disappeared. In this case a solution V of the latter does not necessarily solve the primal problem; it is a candidate only, and a further verification argument is required. This technique is widely used in financial mathematics to determine optimal investment strategies in the market (see for example Merton's portfolio problem).
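
For concreteness (an added clarification, written for a scalar state under standard controlled Itô dynamics), if the state follows

: dX_t = \mu(t, X_t, u_t) \, dt + \sigma(t, X_t, u_t) \, dW_t,

then the operator \mathcal{A} acts on V as

: \mathcal{A} V(x,t) = \frac{\partial V(x,t)}{\partial t} + \mu(t, x, u) \frac{\partial V(x,t)}{\partial x} + \frac{1}{2} \sigma(t, x, u)^2 \frac{\partial^2 V(x,t)}{\partial x^2},

which makes explicit why the stochastic HJB equation is a second-order partial differential equation, as noted in the introduction.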


Application to LQG control

As an example, we can look at a system with linear stochastic dynamics and quadratic cost. If the system dynamics is given by

: dx_t = (a x_t + b u_t) \, dt + \sigma \, dw_t,

and the cost accumulates at rate C(x_t, u_t) = r(t) u_t^2/2 + q(t) x_t^2/2, the HJB equation is given by

: -\frac{\partial V(x,t)}{\partial t} = \frac{1}{2} q(t) x^2 + \frac{\partial V(x,t)}{\partial x} a x - \frac{b^2}{2 r(t)} \left( \frac{\partial V(x,t)}{\partial x} \right)^2 + \frac{\sigma^2}{2} \frac{\partial^2 V(x,t)}{\partial x^2}

with optimal action given by

: u_t = -\frac{b}{r(t)} \frac{\partial V(x,t)}{\partial x}.

Assuming a quadratic form for the value function, we obtain the usual Riccati equation for the Hessian of the value function, as is usual for linear-quadratic-Gaussian control.
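
Spelling out that last step (an added sketch; the quadratic ansatz and zero terminal cost are assumptions made here for illustration): substituting V(x,t) = \tfrac{1}{2} P(t) x^2 + s(t) into the HJB equation and matching the coefficients of x^2 gives the scalar Riccati differential equation

: -\dot{P}(t) = q(t) + 2 a P(t) - \frac{b^2}{r(t)} P(t)^2,

integrated backwards from the terminal weight (for example P(T) = 0 when there is no terminal cost), together with \dot{s}(t) = -\tfrac{\sigma^2}{2} P(t) for the state-independent part; the optimal feedback is then u_t = -\tfrac{b}{r(t)} P(t) x_t.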


See also

* Bellman equation, the discrete-time counterpart of the Hamilton–Jacobi–Bellman equation.
* Pontryagin's maximum principle, a necessary but not sufficient condition for an optimum obtained by maximizing a Hamiltonian; it has the advantage over the HJB equation of only needing to be satisfied over the single trajectory being considered.

