In mathematics, the Hessian matrix, Hessian or (less commonly) Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H or \nabla\nabla or \nabla^2 or \nabla\otimes\nabla or D^2.


Definitions and properties

Suppose f : \R^n \to \R is a function taking as input a vector \mathbf x \in \R^n and outputting a scalar f(\mathbf x) \in \R. If all second-order partial derivatives of f exist, then the Hessian matrix \mathbf H of f is a square n \times n matrix, usually defined and arranged as
\mathbf H_f = \begin{bmatrix}
\dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_n} \\
\dfrac{\partial^2 f}{\partial x_2 \, \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \, \partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^2 f}{\partial x_n \, \partial x_1} & \dfrac{\partial^2 f}{\partial x_n \, \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2}
\end{bmatrix}.
That is, the entry of the i-th row and the j-th column is
(\mathbf H_f)_{i,j} = \dfrac{\partial^2 f}{\partial x_i \, \partial x_j}.
If furthermore the second partial derivatives are all continuous, the Hessian matrix is a symmetric matrix by the symmetry of second derivatives. The determinant of the Hessian matrix is called the Hessian determinant. The Hessian matrix of a function f is the Jacobian matrix of the gradient of the function f; that is:
\mathbf H(f(\mathbf x)) = \mathbf J(\nabla f(\mathbf x)).
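For illustration, the Hessian of a smooth function can be approximated entry-wise by central finite differences. The following is a minimal sketch, not from the original text; the test function, the helper name `hessian_fd`, and the step size `h` are our own illustrative choices:

```python
import numpy as np

def hessian_fd(f, x, h=1e-5):
    """Approximate the Hessian of f at x by central finite differences."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_j = np.zeros(n); e_j[j] = h
            # Second-order central difference for d^2 f / (dx_i dx_j)
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h**2)
    return H

# Example: f(x, y) = x^2 y + y^3 has Hessian [[2y, 2x], [2x, 6y]].
f = lambda v: v[0]**2 * v[1] + v[1]**3
print(hessian_fd(f, np.array([1.0, 2.0])))  # approx [[4, 2], [2, 12]]
```

Note that the computed matrix is (up to rounding error) symmetric, as the continuity of the second partials guarantees.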


Applications


Inflection points

If f is a homogeneous polynomial in three variables, the equation f = 0 is the implicit equation of a plane projective curve. The inflection points of the curve are exactly the non-singular points where the Hessian determinant is zero. It follows by Bézout's theorem that a cubic plane curve has at most 9 inflection points, since the Hessian determinant is a polynomial of degree 3.
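As a concrete check, the Hessian determinant of a cubic can be computed symbolically; the Fermat cubic below is our own illustrative choice, not an example from the original text:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**3 + y**3 + z**3  # Fermat cubic, homogeneous of degree 3

H = sp.hessian(f, (x, y, z))
print(sp.det(H))  # 216*x*y*z, again a polynomial of degree 3
# Inflection points of f = 0 lie where the cubic meets x*y*z = 0;
# by Bezout's theorem there are at most 3 * 3 = 9 of them.
```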


Second-derivative test

The Hessian matrix of a convex function is positive semi-definite. Refining this property allows us to test whether a critical point x is a local maximum, local minimum, or a saddle point, as follows: If the Hessian is positive-definite at x, then f attains an isolated local minimum at x. If the Hessian is negative-definite at x, then f attains an isolated local maximum at x. If the Hessian has both positive and negative eigenvalues, then x is a saddle point for f. Otherwise the test is inconclusive. This implies that at a local minimum the Hessian is positive-semidefinite, and at a local maximum the Hessian is negative-semidefinite. For positive-semidefinite and negative-semidefinite Hessians the test is inconclusive (a critical point where the Hessian is semidefinite but not definite may be a local extremum or a saddle point). However, more can be said from the point of view of Morse theory.

The second-derivative test for functions of one and two variables is simpler than the general case. In one variable, the Hessian contains exactly one second derivative; if it is positive, then x is a local minimum, and if it is negative, then x is a local maximum; if it is zero, then the test is inconclusive. In two variables, the determinant can be used, because the determinant is the product of the eigenvalues. If it is positive, then the eigenvalues are both positive or both negative. If it is negative, then the two eigenvalues have different signs. If it is zero, then the second-derivative test is inconclusive.

Equivalently, the second-order conditions that are sufficient for a local minimum or maximum can be expressed in terms of the sequence of principal (upper-leftmost) minors (determinants of sub-matrices) of the Hessian; these conditions are a special case of those given in the next section for bordered Hessians for constrained optimization, namely the case in which the number of constraints is zero. Specifically, the sufficient condition for a minimum is that all of these principal minors be positive, while the sufficient condition for a maximum is that the minors alternate in sign, with the 1 \times 1 minor being negative.
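The eigenvalue form of the test is straightforward to mechanize. The following is a minimal sketch under our own assumptions (the function name and the tolerance `tol` are illustrative):

```python
import numpy as np

def classify_critical_point(H, tol=1e-10):
    """Classify a critical point from the eigenvalues of its Hessian."""
    eig = np.linalg.eigvalsh(H)  # H is symmetric, so eigvalsh applies
    if np.all(eig > tol):
        return "isolated local minimum"
    if np.all(eig < -tol):
        return "isolated local maximum"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point"
    return "inconclusive (semidefinite Hessian)"

# f(x, y) = x^2 - y^2 has Hessian diag(2, -2) at the origin: a saddle.
print(classify_critical_point(np.diag([2.0, -2.0])))  # "saddle point"
```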


Critical points

If the gradient (the vector of the partial derivatives) of a function f is zero at some point \mathbf x, then f has a critical point (or stationary point) at \mathbf x. The determinant of the Hessian at \mathbf x is called, in some contexts, a discriminant. If this determinant is zero then \mathbf x is called a degenerate critical point of f, or a non-Morse critical point of f. Otherwise it is non-degenerate, and called a Morse critical point of f. The Hessian matrix plays an important role in Morse theory and catastrophe theory, because its kernel and eigenvalues allow classification of the critical points. The determinant of the Hessian matrix, when evaluated at a critical point of a function, is equal to the Gaussian curvature of the function considered as a manifold. The eigenvalues of the Hessian at that point are the principal curvatures of the function, and the eigenvectors are the principal directions of curvature.
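This relationship can be verified symbolically using the standard curvature formula for a graph z = f(x, y), in which the gradient terms vanish at a critical point; the function below is our own illustrative choice:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 3*y**2  # critical point at the origin

fx, fy = sp.diff(f, x), sp.diff(f, y)
H = sp.hessian(f, (x, y))
# Gaussian curvature of the graph z = f(x, y):
K = sp.det(H) / (1 + fx**2 + fy**2)**2
# At the critical point the gradient vanishes, so K reduces to det H.
print(K.subs({x: 0, y: 0}), sp.det(H).subs({x: 0, y: 0}))  # 12 12
```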


Use in optimization

Hessian matrices are used in large-scale optimization problems within Newton-type methods because they are the coefficient of the quadratic term of a local Taylor expansion of a function. That is,
y = f(\mathbf x + \Delta\mathbf x) \approx f(\mathbf x) + \nabla f(\mathbf x)^\mathsf{T} \Delta\mathbf x + \frac{1}{2} \Delta\mathbf x^\mathsf{T} \mathbf H(\mathbf x) \, \Delta\mathbf x,
where \nabla f is the gradient \left(\dfrac{\partial f}{\partial x_1}, \ldots, \dfrac{\partial f}{\partial x_n}\right). Computing and storing the full Hessian matrix takes \Theta\left(n^2\right) memory, which is infeasible for high-dimensional functions such as the loss functions of neural nets, conditional random fields, and other statistical models with large numbers of parameters. For such situations, truncated-Newton and quasi-Newton algorithms have been developed. The latter family of algorithms uses approximations to the Hessian; one of the most popular quasi-Newton algorithms is BFGS. Such approximations may use the fact that an optimization algorithm uses the Hessian only as a linear operator \mathbf H(\mathbf v), and proceed by first noticing that the Hessian also appears in the local expansion of the gradient:
\nabla f(\mathbf x + \Delta\mathbf x) = \nabla f(\mathbf x) + \mathbf H(\mathbf x) \, \Delta\mathbf x + \mathcal{O}\left(\|\Delta\mathbf x\|^2\right).
Letting \Delta\mathbf x = r\mathbf v for some scalar r, this gives
\mathbf H(\mathbf x) \, \Delta\mathbf x = \mathbf H(\mathbf x) \, r\mathbf v = r\mathbf H(\mathbf x)\mathbf v = \nabla f(\mathbf x + r\mathbf v) - \nabla f(\mathbf x) + \mathcal{O}\left(r^2\right),
that is,
\mathbf H(\mathbf x)\mathbf v = \frac{1}{r}\left[\nabla f(\mathbf x + r\mathbf v) - \nabla f(\mathbf x)\right] + \mathcal{O}(r),
so if the gradient is already computed, the approximate Hessian-vector product can be computed by a linear (in the size of the gradient) number of scalar operations. (While simple to program, this approximation scheme is not numerically stable since r has to be made small to prevent error due to the \mathcal{O}(r) term, but decreasing it loses precision in the first term.) Notably regarding randomized search heuristics, the evolution strategy's covariance matrix adapts to the inverse of the Hessian matrix, up to a scalar factor and small random fluctuations. This result has been formally proven for a single-parent strategy and a static model, as the population size increases, relying on the quadratic approximation.
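A minimal sketch of this gradient-difference Hessian-vector product follows; the quadratic test function and the step size r are our own illustrative assumptions, and the numerical-stability caveat above applies to the choice of r:

```python
import numpy as np

def hessian_vector_product(grad, x, v, r=1e-6):
    """Approximate H(x) v from two gradient evaluations (error O(r))."""
    return (grad(x + r * v) - grad(x)) / r

# Example: f(x) = 1/2 x^T A x has gradient A x and Hessian A,
# so the approximation is exact up to rounding for this function.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = lambda x: A @ x
x = np.array([0.5, -1.0])
v = np.array([1.0, 1.0])
print(hessian_vector_product(grad, x, v))  # approx A @ v = [4, 3]
```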


Other applications

The Hessian matrix is commonly used for expressing image processing operators in image processing and computer vision (see the Laplacian of Gaussian (LoG) blob detector, the determinant of Hessian (DoH) blob detector and scale space). It can be used in normal mode analysis to calculate the different molecular frequencies in infrared spectroscopy. It can also be used in local sensitivity and statistical diagnostics.
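As an illustration of the determinant-of-Hessian (DoH) idea, the second derivatives of an image at a given scale can be obtained with Gaussian derivative filters. This is a minimal sketch under our own assumptions (the placeholder image, the choice of sigma, and the sigma**4 scale normalization are illustrative, not from the original text):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def doh_response(image, sigma):
    """Determinant-of-Hessian blob response at a single scale."""
    # order=(2, 0) etc. selects Gaussian derivative filters per axis.
    Lxx = gaussian_filter(image, sigma, order=(2, 0))
    Lyy = gaussian_filter(image, sigma, order=(0, 2))
    Lxy = gaussian_filter(image, sigma, order=(1, 1))
    # sigma**4 is a common normalization for comparing across scales.
    return sigma**4 * (Lxx * Lyy - Lxy**2)

image = np.random.rand(64, 64)  # placeholder image
response = doh_response(image, sigma=2.0)  # blobs appear as local maxima
```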


Generalizations


Bordered Hessian

A bordered Hessian is used for the second-derivative test in certain constrained optimization problems. Given the function f considered previously, but adding a constraint function g such that g(\mathbf x) = c, the bordered Hessian is the Hessian of the Lagrange function \Lambda(\mathbf x, \lambda) = f(\mathbf x) + \lambda\left[g(\mathbf x) - c\right]:
\mathbf H(\Lambda) = \begin{bmatrix}
\dfrac{\partial^2 \Lambda}{\partial \lambda^2} & \dfrac{\partial^2 \Lambda}{\partial \lambda \, \partial \mathbf x} \\
\left(\dfrac{\partial^2 \Lambda}{\partial \lambda \, \partial \mathbf x}\right)^\mathsf{T} & \dfrac{\partial^2 \Lambda}{\partial \mathbf x^2}
\end{bmatrix} = \begin{bmatrix}
0 & \dfrac{\partial g}{\partial x_1} & \dfrac{\partial g}{\partial x_2} & \cdots & \dfrac{\partial g}{\partial x_n} \\
\dfrac{\partial g}{\partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_1^2} & \dfrac{\partial^2 \Lambda}{\partial x_1 \, \partial x_2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_1 \, \partial x_n} \\
\dfrac{\partial g}{\partial x_2} & \dfrac{\partial^2 \Lambda}{\partial x_2 \, \partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_2^2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_2 \, \partial x_n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial g}{\partial x_n} & \dfrac{\partial^2 \Lambda}{\partial x_n \, \partial x_1} & \dfrac{\partial^2 \Lambda}{\partial x_n \, \partial x_2} & \cdots & \dfrac{\partial^2 \Lambda}{\partial x_n^2}
\end{bmatrix} = \begin{bmatrix}
0 & \dfrac{\partial g}{\partial \mathbf x} \\
\left(\dfrac{\partial g}{\partial \mathbf x}\right)^\mathsf{T} & \dfrac{\partial^2 \Lambda}{\partial \mathbf x^2}
\end{bmatrix}.
If there are, say, m constraints then the zero in the upper-left corner is an m \times m block of zeros, and there are m border rows at the top and m border columns at the left.

The above rules stating that extrema are characterized (among critical points with a non-singular Hessian) by a positive-definite or negative-definite Hessian cannot apply here, since a bordered Hessian can be neither negative-definite nor positive-definite, as \mathbf z^\mathsf{T} \mathbf H \mathbf z = 0 if \mathbf z is any vector whose sole non-zero entry is its first.

The second-derivative test consists here of sign restrictions on the determinants of a certain set of n - m submatrices of the bordered Hessian. Intuitively, the m constraints can be thought of as reducing the problem to one with n - m free variables. (For example, the maximization of f\left(x_1, x_2, x_3\right) subject to the constraint x_1 + x_2 + x_3 = 1 can be reduced to the maximization of f\left(x_1, x_2, 1 - x_1 - x_2\right) without constraint.)

Specifically, sign conditions are imposed on the sequence of leading principal minors (determinants of upper-left-justified sub-matrices) of the bordered Hessian, for which the first 2m leading principal minors are neglected, the smallest minor consisting of the truncated first 2m + 1 rows and columns, the next consisting of the truncated first 2m + 2 rows and columns, and so on, with the last being the entire bordered Hessian; if 2m + 1 is larger than n + m, then the smallest leading principal minor is the Hessian itself. There are thus n - m minors to consider, each evaluated at the specific point being considered as a candidate maximum or minimum. A sufficient condition for a local maximum is that these minors alternate in sign with the smallest one having the sign of (-1)^{m+1}. A sufficient condition for a local minimum is that all of these minors have the sign of (-1)^m. (In the unconstrained case of m = 0 these conditions coincide with the conditions for the unbordered Hessian to be negative definite or positive definite respectively.)
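A sketch of this test for a single constraint (m = 1, n = 2, so n - m = 1 minor to check); the objective, constraint, and constant below are our own illustration, not taken from the original text:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
f = x1 * x2            # objective
g = x1 + x2            # constraint g = c with c = 2
L = f + lam * (g - 2)  # Lagrange function

# Bordered Hessian: Hessian of L in the variables (lam, x1, x2).
HB = sp.hessian(L, (lam, x1, x2))
print(HB)  # Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])

# With n - m = 1 minor, the test reduces to the full 3x3 determinant,
# which must have the sign of (-1)^(m+1) = +1 for a local maximum.
print(sp.det(HB))  # 2 > 0, so (x1, x2) = (1, 1) is a constrained maximum
```

Indeed, maximizing x1*x2 subject to x1 + x2 = 2 by substitution gives the maximum at (1, 1), matching the test.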


Vector-valued functions

If f is instead a vector field \mathbf f : \R^n \to \R^m, that is, \mathbf f(\mathbf x) = \left(f_1(\mathbf x), f_2(\mathbf x), \ldots, f_m(\mathbf x)\right), then the collection of second partial derivatives is not an n \times n matrix, but rather a third-order tensor. This can be thought of as an array of m Hessian matrices, one for each component of \mathbf f:
\mathbf H(\mathbf f) = \left(\mathbf H(f_1), \mathbf H(f_2), \ldots, \mathbf H(f_m)\right).
This tensor degenerates to the usual Hessian matrix when m = 1.
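A small sketch of this stacking for m = 2 components (the component functions are our own illustrative choices):

```python
import numpy as np

# For f(x, y) = (x^2 * y, x * y^2), stack one Hessian per component
# into a third-order tensor of shape (m, n, n).
def hessians(x, y):
    H1 = np.array([[2 * y, 2 * x],
                   [2 * x, 0.0]])    # Hessian of f1 = x^2 * y
    H2 = np.array([[0.0, 2 * y],
                   [2 * y, 2 * x]])  # Hessian of f2 = x * y^2
    return np.stack([H1, H2])        # shape (2, 2, 2)

print(hessians(1.0, 2.0).shape)  # (2, 2, 2): m Hessians of size n x n
```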


Generalization to the complex case

In the context of several complex variables, the Hessian may be generalized. Suppose f \colon \Complex^n \to \Complex, and write f\left(z_1, \ldots, z_n\right). Identifying \Complex^n with \R^{2n}, the normal "real" Hessian is a 2n \times 2n matrix. As the objects of study in several complex variables are holomorphic functions, that is, solutions to the n-dimensional Cauchy–Riemann conditions, we usually look at the part of the Hessian that contains information invariant under holomorphic changes of coordinates. This "part" is the so-called complex Hessian, which is the matrix \left(\dfrac{\partial^2 f}{\partial z_i \, \partial \bar z_j}\right)_{i,j}. Note that if f is holomorphic, then its complex Hessian matrix is identically zero, so the complex Hessian is used to study smooth but not holomorphic functions; see for example Levi pseudoconvexity. When dealing with holomorphic functions, we could consider the Hessian matrix \left(\dfrac{\partial^2 f}{\partial z_i \, \partial z_j}\right)_{i,j}.
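A small symbolic sketch in Wirtinger variables, treating z and its conjugate as independent symbols; the functions below are our own illustrative examples:

```python
import sympy as sp

z, zbar = sp.symbols('z zbar')
f = z * zbar  # |z|^2 written in the Wirtinger variables z, conj(z)

# Complex Hessian entry d^2 f / (dz dzbar); positive, consistent with
# |z|^2 being strictly plurisubharmonic.
print(sp.diff(f, z, zbar))  # 1

# For a holomorphic function (no zbar dependence) it vanishes identically:
g = z**3
print(sp.diff(g, z, zbar))  # 0
```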


Generalizations to Riemannian manifolds

Let (M, g) be a Riemannian manifold and \nabla its Levi-Civita connection. Let f : M \to \R be a smooth function. Define the Hessian tensor by
\operatorname{Hess}(f) \in \Gamma\left(T^*M \otimes T^*M\right) \quad \text{by} \quad \operatorname{Hess}(f) := \nabla \nabla f = \nabla df,
where this takes advantage of the fact that the first covariant derivative of a function is the same as its ordinary differential. Choosing local coordinates \left\{x^i\right\} gives a local expression for the Hessian as
\operatorname{Hess}(f) = \nabla_i \, \partial_j f \ dx^i \otimes dx^j = \left(\frac{\partial^2 f}{\partial x^i \, \partial x^j} - \Gamma_{ij}^k \frac{\partial f}{\partial x^k}\right) dx^i \otimes dx^j,
where \Gamma^k_{ij} are the Christoffel symbols of the connection. Other equivalent forms for the Hessian are given by
\operatorname{Hess}(f)(X, Y) = \langle \nabla_X \operatorname{grad} f, Y \rangle \quad \text{and} \quad \operatorname{Hess}(f)(X, Y) = X(Yf) - df(\nabla_X Y).
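As a worked illustration of the coordinate formula (our own example, not from the original text): on the Euclidean plane in polar coordinates (r, \theta), the metric is g = dr^2 + r^2 \, d\theta^2 and the only non-zero Christoffel symbols are \Gamma^r_{\theta\theta} = -r and \Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = \frac{1}{r}, so the components of the Hessian of a function f(r, \theta) are
\operatorname{Hess}(f)_{rr} = \frac{\partial^2 f}{\partial r^2}, \qquad \operatorname{Hess}(f)_{r\theta} = \frac{\partial^2 f}{\partial r \, \partial \theta} - \frac{1}{r} \frac{\partial f}{\partial \theta}, \qquad \operatorname{Hess}(f)_{\theta\theta} = \frac{\partial^2 f}{\partial \theta^2} + r \frac{\partial f}{\partial r}.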


See also

* The determinant of the Hessian matrix is a covariant; see Invariant of a binary form
* Polarization identity, useful for rapid calculations involving Hessians

