Pontryagin maximum principle

Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or input controls. It states that any optimal control, along with the optimal state trajectory, must solve the so-called Hamiltonian system, which is a two-point boundary value problem, together with a maximum condition on the control Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.

The maximum principle was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students, and its initial application was the maximization of the terminal speed of a rocket. The result was derived using ideas from the classical calculus of variations: after a slight perturbation of the optimal control, one considers the first-order term of a Taylor expansion with respect to the perturbation; sending the perturbation to zero leads to a variational inequality from which the maximum principle follows.

Widely regarded as a milestone in optimal control theory, the significance of the maximum principle lies in the fact that maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem; rather than maximizing over a function space, the problem is converted to a pointwise optimization. A similar logic leads to Bellman's principle of optimality, a related approach to optimal control which states that the optimal trajectory remains optimal at intermediate points in time. The resulting Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum, and admits a straightforward extension to stochastic optimal control problems, whereas the maximum principle does not. However, in contrast to the Hamilton–Jacobi–Bellman equation, which must hold over the entire state space to be valid, Pontryagin's maximum principle is potentially more computationally efficient because the conditions it specifies only need to hold along a particular trajectory.


Notation

For a set \mathcal{U} and functions \Psi : \reals^n \to \reals, H : \reals^n \times \mathcal{U} \times \reals^n \times \reals \to \reals, L : \reals^n \times \mathcal{U} \to \reals and f : \reals^n \times \mathcal{U} \to \reals^n, we use the following notation:

: \Psi_T(x(T)) = \left.\frac{\partial \Psi(x)}{\partial T}\right|_{x=x(T)}

: \Psi_x(x(T)) = \begin{bmatrix} \left.\frac{\partial \Psi(x)}{\partial x_1}\right|_{x=x(T)} & \cdots & \left.\frac{\partial \Psi(x)}{\partial x_n}\right|_{x=x(T)} \end{bmatrix}

: H_x(x^*,u^*,\lambda^*,t) = \begin{bmatrix} \left.\frac{\partial H}{\partial x_1}\right|_{x=x^*,u=u^*,\lambda=\lambda^*} & \cdots & \left.\frac{\partial H}{\partial x_n}\right|_{x=x^*,u=u^*,\lambda=\lambda^*} \end{bmatrix}

: L_x(x^*,u^*) = \begin{bmatrix} \left.\frac{\partial L}{\partial x_1}\right|_{x=x^*,u=u^*} & \cdots & \left.\frac{\partial L}{\partial x_n}\right|_{x=x^*,u=u^*} \end{bmatrix}

: f_x(x^*,u^*) = \begin{bmatrix} \left.\frac{\partial f_1}{\partial x_1}\right|_{x=x^*,u=u^*} & \cdots & \left.\frac{\partial f_1}{\partial x_n}\right|_{x=x^*,u=u^*} \\ \vdots & \ddots & \vdots \\ \left.\frac{\partial f_n}{\partial x_1}\right|_{x=x^*,u=u^*} & \cdots & \left.\frac{\partial f_n}{\partial x_n}\right|_{x=x^*,u=u^*} \end{bmatrix}
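For instance (an illustrative example added here, not part of the original notation), if n = 2 and f is the double-integrator dynamics f(x,u) = \begin{bmatrix} x_2 \\ u \end{bmatrix}, then

: f_x(x^*,u^*) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}

for every (x^*,u^*). In general, f_x is the n \times n Jacobian of f with respect to the state, while \Psi_x, H_x and L_x are gradients written as 1 \times n row vectors.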


Formal statement of necessary conditions for minimization problems

Here the necessary conditions are shown for minimization of a functional. Take x to be the state of the dynamical system with input u, such that

: \dot{x} = f(x,u), \quad x(0) = x_0, \quad u(t) \in \mathcal{U}, \quad t \in [0,T]

where \mathcal{U} is the set of admissible controls and T is the terminal (i.e., final) time of the system. The control u \in \mathcal{U} must be chosen for all t \in [0,T] to minimize the objective functional J, which is defined by the application and can be abstracted as

: J = \Psi(x(T)) + \int_0^T L(x(t),u(t)) \, dt

The constraints on the system dynamics can be adjoined to the Lagrangian L by introducing a time-varying Lagrange multiplier vector \lambda, whose elements are called the costates of the system. This motivates the construction of the Hamiltonian H, defined for all t \in [0,T] by

: H(x(t),u(t),\lambda(t),t) = \lambda^{\rm T}(t) f(x(t),u(t)) + L(x(t),u(t))

where \lambda^{\rm T} is the transpose of \lambda. Pontryagin's minimum principle states that the optimal state trajectory x^*, optimal control u^*, and corresponding Lagrange multiplier vector \lambda^* must minimize the Hamiltonian H, so that

: (1) \quad H(x^*(t),u^*(t),\lambda^*(t),t) \leq H(x^*(t),u,\lambda^*(t),t)

for all time t \in [0,T] and for all permissible control inputs u \in \mathcal{U}. Additionally, the costate equation and its terminal conditions must be satisfied:

: (2) \quad -\dot{\lambda}^{\rm T}(t) = H_x(x^*(t),u^*(t),\lambda(t),t) = \lambda^{\rm T}(t) f_x(x^*(t),u^*(t)) + L_x(x^*(t),u^*(t))

: (3) \quad \Psi_T(x(T)) + H(T) = 0

If the final state x(T) is not fixed (i.e., its differential variation is not zero), the terminal costates must also be such that

: (4) \quad \lambda^{\rm T}(T) = \Psi_x(x(T))

The four conditions (1)-(4) are the necessary conditions for an optimal control. Note that (4) only applies when x(T) is free; if x(T) is fixed, this condition is not necessary for an optimum.
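As a brief illustration (an example added here for concreteness; it is not part of the original statement of the principle), consider the scalar system \dot{x} = u with x(0) = x_0, fixed terminal time T, free terminal state, and objective

: J = \tfrac{1}{2} x(T)^2 + \int_0^T \tfrac{1}{2} u(t)^2 \, dt

so that \Psi(x(T)) = \tfrac{1}{2} x(T)^2 and L = \tfrac{1}{2} u^2. The Hamiltonian is H = \lambda u + \tfrac{1}{2} u^2, and minimizing it over u as in condition (1) gives u^* = -\lambda. Condition (2) reads \dot{\lambda} = -H_x = 0, so \lambda is constant, and since x(T) is free, condition (4) requires \lambda(T) = \Psi_x(x(T)) = x(T). Integrating \dot{x} = -\lambda gives x(T) = x_0 - \lambda T, and the terminal condition then yields \lambda = x_0/(1+T), so the optimal control is the constant u^*(t) = -x_0/(1+T).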

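More generally, conditions (1), (2) and (4) define a two-point boundary value problem in (x, \lambda), which is often solved numerically by "shooting" on the unknown initial costate. The following sketch (a hypothetical Python illustration; the variable names, bracket and step count are choices made here, not part of the article) solves the toy problem above and recovers the closed-form value \lambda = x_0/(1+T):

    # Indirect single shooting for the toy problem above. Plain Euler
    # integration suffices here because the optimal control is constant.
    T, x0, N = 2.0, 1.0, 1000
    dt = T / N

    def residual(lam0):
        """Integrate the Hamiltonian system forward from a guessed initial
        costate and return the transversality mismatch lambda(T) - x(T)."""
        x, lam = x0, lam0
        for _ in range(N):
            u = -lam            # condition (1): u minimizes H = lam*u + u**2/2
            x += dt * u         # state equation:   dx/dt = f(x,u) = u
            lam += dt * 0.0     # costate equation: dlam/dt = -H_x = 0
        return lam - x          # condition (4): lambda(T) = Psi_x(x(T)) = x(T)

    # Bisection on lambda(0); the residual is increasing in lambda(0) here.
    lo, hi = -10.0, 10.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid

    print("shooting estimate of lambda(0):", 0.5 * (lo + hi))  # ~0.3333
    print("closed-form value x0/(1 + T): ", x0 / (1.0 + T))    # 1/3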

See also

* Lagrange multipliers on Banach spaces, Lagrangian method in calculus of variations

