Gradient

In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p gives the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent. In coordinate-free terms, the gradient of a function f(\mathbf{r}) may be defined by:
:df=\nabla f \cdot d\mathbf{r},
where ''df'' is the total infinitesimal change in ''f'' for an infinitesimal displacement d\mathbf{r}, and is seen to be maximal when d\mathbf{r} is in the direction of the gradient \nabla f. The nabla symbol \nabla, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.

When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of f at p. That is, for f \colon \R^n \to \R, its gradient \nabla f \colon \R^n \to \R^n is defined at the point p = (x_1,\ldots,x_n) in ''n''-dimensional space as the vector
:\nabla f(p) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix}.

The gradient is dual to the total derivative df: the value of the gradient at a point is a tangent vector (a vector at each point), while the value of the derivative at a point is a ''co''tangent vector (a linear functional on vectors). They are related in that the dot product of the gradient of f at a point p with another tangent vector \mathbf{v} equals the directional derivative of f at p along \mathbf{v}; that is,
:\nabla f(p) \cdot \mathbf{v} = \frac{\partial f}{\partial\mathbf{v}}(p) = df_p(\mathbf{v}) .
The gradient admits multiple generalizations to more general functions on manifolds; see Generalizations below.


Motivation

Consider a room where the temperature is given by a scalar field, T, so at each point (x, y, z) the temperature is T(x, y, z), independent of time. At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly, moving away from (x, y, z). The magnitude of the gradient will determine how fast the temperature rises in that direction.

Consider a surface whose height above sea level at point (x, y) is H(x, y). The gradient of H at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.

The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, namely 40% times the cosine of 60°, or 20%.

More generally, if the hill height function H is differentiable, then the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of H along the unit vector.
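The 40% times cos 60° computation above can be sketched in a few lines of code; this is only an illustration of the dot-product reading of slope (the numbers match the example, the variable names are ours):

import math

# Steepest slope of the hill (40%), i.e. the magnitude of the gradient of the
# height function, pointing in the uphill direction.
gradient_magnitude = 0.40

# Angle between the road and the uphill direction, both projected onto the
# horizontal plane.
angle_deg = 60.0

# Slope along the road = dot product of the gradient with a unit vector along
# the road = |grad H| * cos(angle).
slope_along_road = gradient_magnitude * math.cos(math.radians(angle_deg))

print(slope_along_road)  # 0.2, i.e. a 20% grade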


Notation

The gradient of a function f at point a is usually written as \nabla f(a). It may also be denoted by any of the following:
* \vec{\nabla} f(a) : to emphasize the vector nature of the result.
* \operatorname{grad} f
* \partial_i f and f_i : Einstein notation.


Definition

The gradient (or gradient vector field) of a scalar function f(x_1, x_2, \ldots, x_n) is denoted \nabla f or \vec{\nabla} f, where \nabla (nabla) denotes the vector differential operator, del. The notation \operatorname{grad} f is also commonly used to represent the gradient. The gradient of f is defined as the unique vector field whose dot product with any vector \mathbf{v} at each point x is the directional derivative of f along \mathbf{v}. That is,
:\big(\nabla f(x)\big)\cdot \mathbf{v} = D_{\mathbf v}f(x),
where the right-hand side is the directional derivative and there are many ways to represent it. Formally, the derivative is ''dual'' to the gradient; see the relationship with the derivative below.

When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see Spatial gradient). The magnitude and direction of the gradient vector are independent of the particular coordinate representation.
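This defining property can be checked numerically. The sketch below uses central finite differences with an arbitrarily chosen test function, point, direction and step size (none of these choices are part of the definition):

import math

def f(x, y):
    # An arbitrary smooth test function.
    return x**2 * y + math.sin(y)

def gradient(x, y, h=1e-6):
    # Central-difference approximation of the gradient of f at (x, y).
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfdx, dfdy)

def directional_derivative(x, y, v, h=1e-6):
    # Central-difference approximation of D_v f at (x, y) for a unit vector v.
    return (f(x + h * v[0], y + h * v[1]) - f(x - h * v[0], y - h * v[1])) / (2 * h)

x, y = 1.0, 2.0
v = (3 / 5, 4 / 5)  # a unit vector

g = gradient(x, y)
dot = g[0] * v[0] + g[1] * v[1]
print(dot, directional_derivative(x, y, v))  # the two values agree to high accuracy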


Cartesian coordinates

In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by
:\nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k},
where \mathbf{i}, \mathbf{j}, \mathbf{k} are the standard unit vectors in the directions of the x, y and z coordinates, respectively. For example, the gradient of the function
:f(x,y,z)= 2x+3y^2-\sin(z)
is
:\nabla f = 2\mathbf{i} + 6y\mathbf{j} - \cos(z)\mathbf{k}.

In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.
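The example above can be verified with a short finite-difference check; this is only a rough sketch, and the evaluation point and step size are arbitrary choices:

import math

def f(x, y, z):
    return 2 * x + 3 * y**2 - math.sin(z)

def grad_f_analytic(x, y, z):
    # Components read off from the formula above: (2, 6y, -cos z).
    return (2.0, 6 * y, -math.cos(z))

def grad_f_numeric(x, y, z, h=1e-6):
    return (
        (f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
        (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
        (f(x, y, z + h) - f(x, y, z - h)) / (2 * h),
    )

print(grad_f_analytic(1.0, 2.0, 0.5))
print(grad_f_numeric(1.0, 2.0, 0.5))  # agrees to roughly six decimal places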


Cylindrical and spherical coordinates

In cylindrical coordinates with a Euclidean metric, the gradient is given by
:\nabla f(\rho, \varphi, z) = \frac{\partial f}{\partial \rho}\mathbf{e}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi + \frac{\partial f}{\partial z}\mathbf{e}_z,
where \rho is the axial distance, \varphi is the azimuthal or azimuth angle, z is the axial coordinate, and \mathbf{e}_\rho, \mathbf{e}_\varphi and \mathbf{e}_z are unit vectors pointing along the coordinate directions.

In spherical coordinates, the gradient is given by
:\nabla f(r, \theta, \varphi) = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{r \sin\theta}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi,
where r is the radial distance, \varphi is the azimuthal angle and \theta is the polar angle, and \mathbf{e}_r, \mathbf{e}_\theta and \mathbf{e}_\varphi are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis).

For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).
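As a worked check of the spherical formula (an added example), take f = 1/r, which depends only on the radial distance, so the angular terms vanish:
:\nabla\!\left(\frac{1}{r}\right) = \frac{\partial}{\partial r}\!\left(\frac{1}{r}\right)\mathbf{e}_r = -\frac{1}{r^2}\,\mathbf{e}_r .
The same result follows from the Cartesian formula with r = \sqrt{x^2+y^2+z^2}, since \partial r/\partial x = x/r and similarly for y and z, giving \nabla(1/r) = -(x,y,z)/r^3 = -\mathbf{e}_r/r^2.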


General coordinates

We consider general coordinates, which we write as x^1, \ldots, x^i, \ldots, x^n, where n is the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, so x^2 refers to the second component, not the quantity x squared. The index variable i refers to an arbitrary element x^i. Using Einstein notation, the gradient can then be written as:
:\nabla f = \frac{\partial f}{\partial x^{i}}g^{ij} \mathbf{e}_j
(note that its dual is \mathrm{d}f = \frac{\partial f}{\partial x^{i}}\mathbf{e}^i), where \mathbf{e}_i = \partial \mathbf{x}/\partial x^i and \mathbf{e}^i = \mathrm{d}x^i refer to the unnormalized local covariant and contravariant bases respectively, g^{ij} is the inverse metric tensor, and the Einstein summation convention implies summation over ''i'' and ''j''.

If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as \hat{\mathbf{e}}_i and \hat{\mathbf{e}}^i, using the scale factors (also known as Lamé coefficients) h_i = \lVert \mathbf{e}_i \rVert = \sqrt{g_{ii}} = 1\, / \lVert \mathbf{e}^i \rVert :
:\nabla f = \frac{\partial f}{\partial x^{i}}g^{ij} \hat{\mathbf{e}}_{j}\sqrt{g_{jj}} = \sum_{i=1}^n \, \frac{\partial f}{\partial x^{i}} \frac{1}{h_i} \hat{\mathbf{e}}_i
(and \mathrm{d}f = \sum_{i=1}^n \, \frac{\partial f}{\partial x^{i}} \frac{1}{h_i} \hat{\mathbf{e}}^i), where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, \hat{\mathbf{e}}_i, \hat{\mathbf{e}}^i, and h_i are neither contravariant nor covariant. The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
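For instance, in spherical coordinates (r, \theta, \varphi) the scale factors are h_r = 1, h_\theta = r and h_\varphi = r\sin\theta, and the orthogonal-coordinate formula above reproduces the spherical gradient given earlier:
:\nabla f = \frac{\partial f}{\partial r}\hat{\mathbf{e}}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\hat{\mathbf{e}}_\theta + \frac{1}{r\sin\theta}\frac{\partial f}{\partial \varphi}\hat{\mathbf{e}}_\varphi .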


Relationship with derivative


Relationship with total derivative

The gradient is closely related to the total derivative (total differential) df: they are transpose (dual) to each other. Using the convention that vectors in \R^n are represented by column vectors, and that covectors (linear maps \R^n \to \R) are represented by row vectors, the gradient \nabla f and the derivative df are expressed as a column and row vector, respectively, with the same components, but transpose of each other:
:\nabla f(p) = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} ;
:df_p = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) & \cdots & \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} .

While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form (covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, \nabla f(p) \in T_p \R^n, while the derivative is a map from the tangent space to the real numbers, df_p \colon T_p \R^n \to \R. The tangent spaces at each point of \R^n can be "naturally" identified with the vector space \R^n itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space (\R^n)^* of covectors; thus the value of the gradient at a point can be thought of as a vector in the original \R^n, not just as a tangent vector.

Computationally, given a tangent vector, the vector can be ''multiplied'' by the derivative (as matrices), which is equal to taking the dot product with the gradient:
:(df_p)(v) = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) & \cdots & \dfrac{\partial f}{\partial x_n}(p)\end{bmatrix} \begin{bmatrix}v_1 \\ \vdots \\ v_n\end{bmatrix} = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p) v_i = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p)\end{bmatrix} \cdot \begin{bmatrix}v_1 \\ \vdots \\ v_n\end{bmatrix} = \nabla f(p) \cdot v
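A minimal numerical sketch of this row-vector versus column-vector reading, reusing the Cartesian example f(x, y, z) = 2x + 3y² − sin z (the point p and vector v are arbitrary illustrative choices):

import numpy as np

p = np.array([1.0, 2.0, 0.5])
grad = np.array([2.0, 6 * p[1], -np.cos(p[2])])  # components of the gradient at p

deriv = grad.reshape(1, -1)      # the derivative df_p, written as a 1 x n row vector
v = np.array([0.2, -1.0, 3.0])   # an arbitrary tangent vector

# Applying the derivative by matrix multiplication gives the same number as the
# dot product of v with the gradient.
print((deriv @ v.reshape(-1, 1)).item())
print(np.dot(grad, v))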


Differential or (exterior) derivative

The best linear approximation to a differentiable function
:f \colon \R^n \to \R
at a point x_0 in \R^n is a linear map from \R^n to \R which is often denoted by df_{x_0} or Df(x_0) and called the differential or total derivative of f at x_0. The function df, which maps x to df_x, is called the total differential or exterior derivative of f and is an example of a differential 1-form.

Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector. The gradient is related to the differential by the formula
:(\nabla f)_x\cdot v = df_x(v)
for any v \in \R^n, where \cdot is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector.

If \R^n is viewed as the space of (dimension n) column vectors (of real numbers), then one can regard df as the row vector with components
:\left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right),
so that df_x(v) is given by matrix multiplication. Assuming the standard Euclidean metric on \R^n, the gradient is then the corresponding column vector, that is,
:(\nabla f)_i = df^\mathsf{T}_i.


Linear approximation to a function

The best linear approximation to a function can be expressed in terms of the gradient, rather than the derivative. The gradient of a function f from the Euclidean space \R^n to \R at any particular point x_0 in \R^n characterizes the best linear approximation to f at x_0. The approximation is as follows:
:f(x) \approx f(x_0) + (\nabla f)_{x_0}\cdot(x-x_0)
for x close to x_0, where (\nabla f)_{x_0} is the gradient of f computed at x_0, and the dot denotes the dot product on \R^n. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of f at x_0.
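A small illustration of this first-order approximation; the function, base point and displacement below are arbitrary choices made for the sketch:

import math

def f(x, y):
    return math.exp(x) * math.cos(y)

def grad_f(x, y):
    # Analytic gradient of f(x, y) = e^x cos(y).
    return (math.exp(x) * math.cos(y), -math.exp(x) * math.sin(y))

x0, y0 = 0.3, 1.1
dx, dy = 0.01, -0.02

g = grad_f(x0, y0)
linear = f(x0, y0) + g[0] * dx + g[1] * dy   # tangent-plane (first-order) approximation
exact = f(x0 + dx, y0 + dy)

print(exact, linear)  # close for a small displacement; the error is second order in (dx, dy)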


Relationship with Fréchet derivative

Let U be an open set in \R^n. If the function f \colon U \to \R is differentiable, then the differential of f is the Fréchet derivative of f. Thus \nabla f is a function from U to the space \R^n such that
:\lim_{h\to 0} \frac{|f(x+h)-f(x) -\nabla f(x)\cdot h|}{\lVert h\rVert} = 0,
where \cdot is the dot product. As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative:

;Linearity
:The gradient is linear in the sense that if f and g are two real-valued functions differentiable at the point a \in \R^n, and \alpha and \beta are two constants, then \alpha f+\beta g is differentiable at a, and moreover
:\nabla\left(\alpha f+\beta g\right)(a) = \alpha \nabla f(a) + \beta\nabla g (a).

;Product rule
:If f and g are real-valued functions differentiable at a point a \in \R^n, then the product rule asserts that the product fg is differentiable at a, and
:\nabla (fg)(a) = f(a)\nabla g(a) + g(a)\nabla f(a).

;Chain rule
:Suppose that f \colon A \to \R is a real-valued function defined on a subset A of \R^n, and that f is differentiable at a point a. There are two forms of the chain rule applying to the gradient. First, suppose that the function g is a parametric curve; that is, a function g \colon I \to \R^n maps a subset I \subset \R into \R^n. If g is differentiable at a point c \in I such that g(c) = a, then
:(f\circ g)'(c) = \nabla f(a)\cdot g'(c),
where \circ is the composition operator: (f \circ g)(x) = f(g(x)). More generally, if instead I \subset \R^k, then the following holds:
:\nabla (f\circ g)(c) = \big(Dg(c)\big)^\mathsf{T} \big(\nabla f(a)\big),
where (Dg)^\mathsf{T} denotes the transpose Jacobian matrix. For the second form of the chain rule, suppose that h \colon I \to \R is a real valued function on a subset I of \R, and that h is differentiable at the point f(a) \in I. Then
:\nabla (h\circ f)(a) = h'\big(f(a)\big)\nabla f(a).


Further properties and applications


Level sets

A level surface, or isosurface, is the set of all points where some function has a given value.

If f is differentiable, then the dot product (\nabla f)_x \cdot v of the gradient at a point x with a vector v gives the directional derivative of f at x in the direction v. It follows that in this case the gradient of f is orthogonal to the level sets of f. For example, a level surface in three-dimensional space is defined by an equation of the form F(x, y, z) = c. The gradient of F is then normal to the surface.

More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form F(P) = 0 such that dF is nowhere zero. The gradient of F is then normal to the hypersurface.

Similarly, an affine algebraic hypersurface may be defined by an equation F(x_1, \ldots, x_n) = 0, where F is a polynomial. The gradient of F is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.
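As a concrete illustration (added here), consider the level surface F(x, y, z) = x^2 + y^2 + z^2 = r^2, a sphere of radius r. Its gradient,
:\nabla F(x, y, z) = (2x, 2y, 2z) = 2(x, y, z),
points radially outward at each point of the sphere and is therefore normal to the surface, as the orthogonality statement above predicts.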


Conservative vector fields and the gradient theorem

The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
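A rough numerical sketch of the gradient theorem, with an arbitrarily chosen potential f and path r (both are illustrative choices, and the path derivative and the line integral are only approximated):

import math

def f(x, y):
    return x * y + math.sin(x)            # potential function

def grad_f(x, y):
    return (y + math.cos(x), x)           # its gradient field

def r(t):
    # An arbitrary path from r(0) to r(1).
    return (math.cos(math.pi * t), t**2)

def r_prime(t, h=1e-6):
    a, b = r(t - h), r(t + h)
    return ((b[0] - a[0]) / (2 * h), (b[1] - a[1]) / (2 * h))

# Line integral of grad f along the path, by a simple midpoint Riemann sum.
n = 100000
integral = 0.0
for k in range(n):
    t = (k + 0.5) / n
    gx, gy = grad_f(*r(t))
    dx, dy = r_prime(t)
    integral += (gx * dx + gy * dy) / n

endpoint_difference = f(*r(1.0)) - f(*r(0.0))
print(integral, endpoint_difference)  # agree up to discretization error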


Generalizations


Jacobian

The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative.

Suppose f \colon \R^n \to \R^m is a function such that each of its first-order partial derivatives exist on \R^n. Then the Jacobian matrix of f is defined to be an m \times n matrix, denoted by \mathbf{J}_{\mathbf{f}}(\mathbf{x}) or simply \mathbf{J}. The (i, j)th entry is \mathbf J_{ij} = \frac{\partial f_i}{\partial x_j}. Explicitly
:\mathbf J = \begin{bmatrix} \dfrac{\partial \mathbf{f}}{\partial x_1} & \cdots & \dfrac{\partial \mathbf{f}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^\mathsf{T} f_1 \\ \vdots \\ \nabla^\mathsf{T} f_m \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}.
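A minimal finite-difference sketch of this definition, for an arbitrary map f : R^2 -> R^3 of our own choosing; each row of the result is the transposed gradient of one component, as in the middle expression above:

import math

def f(x, y):
    # A sample map f : R^2 -> R^3; each component is a scalar function.
    return (x * y, math.sin(x), x + y**2)

def jacobian(x, y, h=1e-6):
    # Row i holds the partial derivatives of component f_i with respect to x and y.
    m = len(f(x, y))
    J = []
    for i in range(m):
        dfi_dx = (f(x + h, y)[i] - f(x - h, y)[i]) / (2 * h)
        dfi_dy = (f(x, y + h)[i] - f(x, y - h)[i]) / (2 * h)
        J.append([dfi_dx, dfi_dy])
    return J

for row in jacobian(1.0, 2.0):
    print(row)
# Analytic Jacobian at (1, 2): [[2, 1], [cos(1), 0], [1, 4]]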


Gradient of a vector field

Since the total derivative of a vector field is a linear mapping from vectors to vectors, it is a tensor quantity. In rectangular coordinates, the gradient of a vector field \mathbf{f} = (f^1, f^2, f^3) is defined by:
:\nabla \mathbf{f}=g^{jk}\frac{\partial f^i}{\partial x^j} \mathbf{e}_i \otimes \mathbf{e}_k
(where the Einstein summation notation is used and the tensor product of the vectors \mathbf{e}_i and \mathbf{e}_k is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:
:\frac{\partial f^i}{\partial x^j} = \frac{\partial (f^1,f^2,f^3)}{\partial (x^1,x^2,x^3)}.

In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:
:\nabla \mathbf{f}=g^{jk}\left(\frac{\partial f^i}{\partial x^j}+{\Gamma^i}_{jl}f^l\right) \mathbf{e}_i \otimes \mathbf{e}_k,
where g^{jk} are the components of the inverse metric tensor and the \mathbf{e}_i are the coordinate basis vectors.

Expressed more invariantly, the gradient of a vector field \mathbf{f} can be defined by the Levi-Civita connection and metric tensor:
:\nabla^a f^b = g^{ac} \nabla_c f^b ,
where \nabla_c is the connection.


Riemannian manifolds

For any smooth function f on a Riemannian manifold (M, g), the gradient of f is the vector field \nabla f such that for any vector field X,
:g(\nabla f, X) = \partial_X f,
that is,
:g_x\big((\nabla f)_x, X_x \big) = (\partial_X f) (x),
where g_x(\cdot, \cdot) denotes the inner product of tangent vectors at x defined by the metric g and \partial_X f is the function that takes any point x \in M to the directional derivative of f in the direction X, evaluated at x. In other words, in a coordinate chart \varphi from an open subset of M to an open subset of \R^n, (\partial_X f)(x) is given by:
:\sum_{j=1}^n X^{j} \big(\varphi(x)\big) \frac{\partial}{\partial x_{j}}(f \circ \varphi^{-1}) \Bigg|_{\varphi(x)},
where X^j denotes the jth component of X in this coordinate chart. So, the local form of the gradient takes the form:
:\nabla f = g^{ik} \frac{\partial f}{\partial x^{k}} \mathbf{e}_i .

Generalizing the case M = \R^n, the gradient of a function is related to its exterior derivative, since
:(\partial_X f) (x) = (df)_x(X_x) .
More precisely, the gradient \nabla f is the vector field associated to the differential 1-form df using the musical isomorphism
:\sharp=\sharp^g\colon T^*M\to TM
(called "sharp") defined by the metric g. The relation between the exterior derivative and the gradient of a function on \R^n is a special case of this in which the metric is the flat metric given by the dot product.
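For instance (an illustrative computation added here), take M = \R^2 \setminus \{0\} with polar coordinates (r, \theta), where the metric components are g_{rr} = 1 and g_{\theta\theta} = r^2, so g^{rr} = 1 and g^{\theta\theta} = 1/r^2. The local formula then gives
:\nabla f = \frac{\partial f}{\partial r}\,\mathbf{e}_r + \frac{1}{r^2}\frac{\partial f}{\partial \theta}\,\mathbf{e}_\theta
in the coordinate basis; since \lVert \mathbf{e}_\theta \rVert = r, switching to the normalized basis vector \hat{\mathbf{e}}_\theta = \mathbf{e}_\theta / r recovers the familiar \frac{1}{r}\frac{\partial f}{\partial \theta} term of the cylindrical gradient given earlier.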


See also

* Curl
* Divergence
* Four-gradient
* Hessian matrix
* Skew gradient

