
In the calculus of variations, the second variation extends the idea of the second derivative test to functionals. Much as for functions, at a stationary point where the first derivative is zero, the second derivative determines the nature of the stationary point: it may be negative (if the point is a maximum), positive (if a minimum) or zero (in which case the test is inconclusive, as at a saddle point). Via the second variation, it is possible to derive powerful necessary conditions for solving variational problems, such as the Legendre–Clebsch condition and the Jacobi necessary condition detailed below.


Motivation

Much of the calculus of variations relies on the first variation, a generalization of the first derivative to functionals. An example of a class of variational problems is to find the function y which minimizes the integral J = \int_a^b f(x, y, y')\,dx on the interval [a, b]; here J is a functional (a mapping which takes a function and returns a scalar). It is known that any smooth function y which minimizes this functional satisfies the Euler–Lagrange equation f_y - \frac{d}{dx} f_{y'} = 0. Solutions of this equation are stationary, but there is no guarantee that they are the type of extremum desired: exactly as for the first derivative of a function, they may correspond to a minimum, a maximum or a saddle point. A test based on the second variation is needed to distinguish these cases.
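The Euler–Lagrange equation for a concrete integrand can be checked symbolically. The sketch below uses SymPy's `euler_equations` helper; the integrand \sqrt{1 + y'^2} (the planar shortest-path problem) is chosen purely as an illustration:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# Arc-length integrand for the shortest-path-in-the-plane problem
f = sp.sqrt(1 + y(x).diff(x)**2)

# Euler-Lagrange equation f_y - d/dx f_{y'} = 0 for this integrand;
# it reduces to y'' / (1 + y'^2)^(3/2) = 0, i.e. y'' = 0.
eq, = euler_equations(f, y(x), x)
print(eq)

# Any straight line satisfies the equation, so lines are the extremals.
assert sp.simplify(eq.lhs.subs(y(x), 2*x + 3).doit()) == 0
```

As the motivation warns, this only shows that straight lines are stationary; whether they actually minimize is settled by the second variation below.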


Derivation

Take an extremal y. The Taylor series of the integrand of our variational functional about a nearby function y + \varepsilon h, where \varepsilon is small and h is a smooth function which is zero at a and b, is f(x, y + \varepsilon h, y' + \varepsilon h') = f(x, y, y') + \varepsilon (h f_y + h' f_{y'}) + \frac{\varepsilon^2}{2} (h^2 f_{yy} + 2hh' f_{yy'} + h'^2 f_{y'y'}) + O(\varepsilon^3). The first-order term of the series gives the first variation, and the second-order term defines the second variation: \delta^2J(h, y) := \int_a^b \left( h^2 f_{yy} + 2hh' f_{yy'} + h'^2 f_{y'y'} \right) dx. It can then be shown that if J has a local minimum at y_0, then y_0 is stationary (i.e. the first variation is zero) and \delta^2J(h, y_0) \geq 0 for all admissible h.
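Integrating the cross term by parts (a standard manipulation, following the usual textbook conventions) puts the second variation into a form from which the Legendre condition can be read off. Since h(a) = h(b) = 0,

\int_a^b 2hh' f_{yy'}\, dx = \int_a^b (h^2)' f_{yy'}\, dx = -\int_a^b h^2 \frac{d}{dx} f_{yy'}\, dx,

so that

\delta^2J(h, y) = \int_a^b \left( P h'^2 + Q h^2 \right) dx, \quad P = f_{y'y'}, \quad Q = f_{yy} - \frac{d}{dx} f_{yy'}.

Because h can be chosen to oscillate rapidly, making the h'^2 term dominate, \delta^2J(h, y) \geq 0 for all h forces P = f_{y'y'} \geq 0 along the extremal; this is the Legendre condition mentioned in the introduction.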


The Jacobi necessary condition


The accessory problem and Jacobi differential equation

As discussed above, a minimum of the problem requires that \delta^2J(h, y_0) \geq 0 for all h; furthermore, the trivial solution h = 0 gives \delta^2J(h, y_0) = 0. Thus \delta^2J(h, y_0) can be considered as a variational problem in its own right; this is called the accessory problem, with integrand denoted \Omega(x, h, h'). The Jacobi differential equation is then the Euler–Lagrange equation for this accessory problem: \Omega_h - \frac{d}{dx} \Omega_{h'} = 0.
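As a concrete, purely illustrative instance, the accessory problem for the planar arc-length integrand f = \sqrt{1 + y'^2} can be constructed symbolically; applying SymPy's `euler_equations` to the accessory integrand \Omega produces the Jacobi equation, which here reduces to h'' = 0:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, Yp, c = sp.symbols('x Yp c')   # Yp stands in for y'; y' = c on a line
h = sp.Function('h')

# Planar arc-length integrand f = sqrt(1 + y'^2); f is independent of y,
# so f_yy = f_yy' = 0 and only f_{y'y'} enters the accessory integrand.
f = sp.sqrt(1 + Yp**2)
f_ypyp = f.diff(Yp, 2).subs(Yp, c)   # = 1/(1 + c^2)^(3/2), a positive constant

# Accessory integrand Omega and its Euler-Lagrange (Jacobi) equation
Omega = f_ypyp * h(x).diff(x)**2
jacobi, = euler_equations(Omega, h(x), x)
print(jacobi)   # equivalent to h'' = 0

# Solutions with h(a) = 0 are h = B*(x - a); a nontrivial one never
# vanishes again, so straight lines have no conjugate points.
```

Note how much simpler the Jacobi equation is than the original (nonlinear) Euler–Lagrange equation: because \Omega is quadratic, the Jacobi equation is always linear in h.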


Conjugate points and the Jacobi necessary condition

As well as being easier to analyse than the original Euler–Lagrange equation (since \Omega is at most quadratic in h and h', the Jacobi equation is linear in h), the Jacobi equation also encodes the conjugate points of the original variational problem in its solutions. A point c is conjugate to the lower boundary a if there is a nontrivial solution h of the Jacobi differential equation with h(a) = h(c) = 0. The Jacobi necessary condition then follows: if f satisfies the strengthened Legendre condition f_{y'y'} > 0 along the extremal y, then y can be a minimum only if the open interval (a, b) contains no points conjugate to a. The Jacobi necessary condition is named after Carl Gustav Jacob Jacobi, who first utilized the solutions of the accessory problem in his article ''Zur Theorie der Variations-Rechnung und der Differential-Gleichungen''; the term 'accessory problem' was introduced by von Escherich.


An example: shortest path on a sphere

As an example, the problem of finding a geodesic (shortest path) between two points on a sphere can be represented as a variational problem: taking x as the longitude and y as the latitude on the unit sphere, the arc-length functional is J = \int_0^b \sqrt{\cos^2 y + y'^2}\, dx. The equator of the sphere, y = 0, is an extremal of this functional satisfying the strengthened Legendre condition f_{y'y'} = 1 > 0; for this problem the Jacobi differential equation is h'' + h = 0, which has solutions h = A\sin(x) + B\cos(x). If a solution satisfies h(0) = 0, then it must have the form h = A\sin(x). These functions have zeroes at k\pi, k \in \mathbb{Z}, so the first point conjugate to 0 is \pi, and the equator is a minimizing solution only if b < \pi. This makes intuitive sense: a great circle through two points on the sphere gives two paths between them, one longer than the other. If b > \pi, the path along the equator goes more than halfway around the circle, and it would be shorter to go the other way.
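The sphere computation can be reproduced symbolically. The sketch below assumes the unit-sphere arc-length parametrization f = \sqrt{\cos^2 y + y'^2} (x the longitude, y the latitude) and recovers the Jacobi equation h'' + h = 0 along the equator:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x, Y, Yp = sp.symbols('x Y Yp')   # Y, Yp stand in for y and y'
h = sp.Function('h')

# Arc length on the unit sphere: x = longitude, y = latitude
f = sp.sqrt(sp.cos(Y)**2 + Yp**2)

# Second partials of f evaluated along the equator y = 0, y' = 0
at_equator = {Y: 0, Yp: 0}
f_yy   = sp.simplify(f.diff(Y, 2).subs(at_equator))    # -1
f_yyp  = sp.simplify(f.diff(Y, Yp).subs(at_equator))   #  0
f_ypyp = sp.simplify(f.diff(Yp, 2).subs(at_equator))   #  1 (strengthened Legendre condition)

# Accessory integrand and its Euler-Lagrange (Jacobi) equation
hp = h(x).diff(x)
Omega = h(x)**2*f_yy + 2*h(x)*hp*f_yyp + hp**2*f_ypyp
jacobi, = euler_equations(Omega, h(x), x)
print(jacobi)   # equivalent to h'' + h = 0
```

The solutions h = A\sin(x) of this equation with h(0) = 0 vanish again at x = \pi, which is exactly the conjugate point found above.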


Further reading

*M. Morse, "The calculus of variations in the large", Amer. Math. Soc. (1934)
*J.W. Milnor, "Morse theory", Princeton Univ. Press (1963)
*Weishi Liu, "Chapter 10. The Second Variation", University of Kansas
*"Lecture 12: Variations and Jacobi fields"
