In physics, a covariant transformation is a rule that specifies how certain entities, such as vectors or tensors, change under a change of basis. The transformation that describes the new basis vectors as a linear combination of the old basis vectors is ''defined'' as a covariant transformation. Conventionally, indices identifying the basis vectors are placed as lower indices, and so are all entities that transform in the same way. The inverse of a covariant transformation is a contravariant transformation. Whenever a vector should be ''invariant'' under a change of basis, that is to say it should represent the same geometrical or physical object having the same magnitude and direction as before, its ''components'' must transform according to the contravariant rule. Conventionally, indices identifying the components of a vector are placed as upper indices, and so are all indices of entities that transform in the same way. The sum over pairwise matching indices of a product with the same lower and upper indices is invariant under a transformation.

A vector itself is a geometrical quantity, in principle independent (invariant) of the chosen basis. A vector \mathbf{v} is given, say, in components v^i on a chosen basis \mathbf{e}_i. On another basis, say \mathbf{e}'_j, the same vector \mathbf{v} has different components {v'}^j and
:\mathbf{v} = \sum_i v^i \mathbf{e}_i = \sum_j {v'}^j \mathbf{e}'_j.
As a vector, \mathbf{v} should be invariant to the chosen coordinate system and independent of any chosen basis, i.e. its "real world" direction and magnitude should appear the same regardless of the basis vectors. If we perform a change of basis by transforming the vectors \mathbf{e}_i into the basis vectors \mathbf{e}'_j, we must also ensure that the components v^i transform into the new components {v'}^j to compensate. The needed transformation of \mathbf{v} is called the contravariant transformation rule.
Image:Transformation2polar_basis_vectors.svg, A vector \mathbf{v}, and the local tangent basis vectors \{\mathbf{e}_x, \mathbf{e}_y\} and \{\mathbf{e}_r, \mathbf{e}_\varphi\}. Image:Transformation2polar contravariant vector.svg, Coordinate representations of \mathbf{v}.
In the shown example, a vector
:\mathbf{v} = \sum_i v^i \mathbf{e}_i = \sum_j {v'}^j \mathbf{e}'_j
is described by two different coordinate systems: a rectangular coordinate system (the black grid) and a radial coordinate system (the red grid). Basis vectors have been chosen for both coordinate systems: \mathbf{e}_x and \mathbf{e}_y for the rectangular coordinate system, and \mathbf{e}_r and \mathbf{e}_\varphi for the radial coordinate system. The radial basis vectors \mathbf{e}_r and \mathbf{e}_\varphi appear rotated anticlockwise with respect to the rectangular basis vectors \mathbf{e}_x and \mathbf{e}_y. The covariant transformation, performed on the basis vectors, is thus an anticlockwise rotation, rotating from the first basis to the second. The coordinates of \mathbf{v} must be transformed into the new coordinate system, but the vector \mathbf{v} itself, as a mathematical object, remains independent of the basis chosen, appearing to point in the same direction and with the same magnitude, invariant to the change of coordinates. The contravariant transformation ensures this, by compensating for the rotation between the different bases. If we view \mathbf{v} from the context of the radial coordinate system, it appears rotated more clockwise relative to the basis vectors \mathbf{e}_r and \mathbf{e}_\varphi than it appeared relative to the rectangular basis vectors \mathbf{e}_x and \mathbf{e}_y. Thus, the needed contravariant transformation of \mathbf{v} in this example is a clockwise rotation.
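As a concrete numerical illustration of the example above, the following sketch (not part of the original article; all names and values are ours) rotates a basis anticlockwise, applies the inverse clockwise rotation to the components, and checks that the reconstructed geometric vector is unchanged:

```python
import math

theta = 0.3  # anticlockwise rotation of the new basis (arbitrary choice)
c, s = math.cos(theta), math.sin(theta)

# Covariant transformation: new basis vectors as linear combinations of the
# old ones, here an anticlockwise rotation of the standard Cartesian basis.
e_old = [(1.0, 0.0), (0.0, 1.0)]   # e_x, e_y
e_new = [(c, s), (-s, c)]          # e'_x, e'_y, rotated anticlockwise

v_old = (2.0, 1.0)                 # components of v on the old basis

# Contravariant transformation: the components rotate the opposite
# (clockwise) way, compensating the rotation of the basis.
v_new = (c * v_old[0] + s * v_old[1],
         -s * v_old[0] + c * v_old[1])

def assemble(components, basis):
    """Sum_i v^i e_i: the vector in ambient Cartesian coordinates."""
    x = sum(vi * bi[0] for vi, bi in zip(components, basis))
    y = sum(vi * bi[1] for vi, bi in zip(components, basis))
    return (x, y)

# The geometric vector is the same in both descriptions.
old_ambient = assemble(v_old, e_old)
new_ambient = assemble(v_new, e_new)
assert all(abs(a - b) < 1e-12 for a, b in zip(old_ambient, new_ambient))
```

The basis transforms one way, the components the inverse way, and their contraction (the vector itself) stays invariant.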


Examples of covariant transformation


The derivative of a function transforms covariantly

The explicit form of a covariant transformation is best introduced with the transformation properties of the derivative of a function. Consider a scalar function ''f'' (like the temperature at a location in a space) defined on a set of points ''p'', identifiable in a given coordinate system x^i,\; i=0,1,\dots (such a collection is called a manifold). If we adopt a new coordinate system {x'}^j,\; j=0,1,\dots, then for each ''i'' the original coordinate x^i can be expressed as a function of the new coordinates, so x^i\left({x'}^j\right),\; j=0,1,\dots One can express the derivative of ''f'' in the old coordinates in terms of the new coordinates, using the chain rule of the derivative, as
: \frac{\partial f}{\partial {x'}^j} = \frac{\partial f}{\partial x^i} \; \frac{\partial x^i}{\partial {x'}^j}
This is the explicit form of the covariant transformation rule. The notation of a normal derivative with respect to the coordinates sometimes uses a comma, as follows
:f_{,i} \ \stackrel{\mathrm{def}}{=}\ \frac{\partial f}{\partial x^i}
where the index ''i'' is placed as a lower index, because of the covariant transformation.
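This covariant rule can be checked numerically. In the sketch below (illustrative only; the scalar field and the polar change of coordinates are our own choices), the derivative of ''f'' with respect to the new coordinate ''r'', computed directly, matches the chain-rule combination of the old derivatives:

```python
import math

def f(x, y):                       # a scalar field, e.g. a "temperature"
    return x**2 + 3.0 * y

def d(fun, args, i, h=1e-6):       # central finite difference in argument i
    a = list(args)
    a[i] += h; hi = fun(*a)
    a[i] -= 2 * h; lo = fun(*a)
    return (hi - lo) / (2 * h)

r, phi = 2.0, 0.7                  # a sample point in the new coordinates
x, y = r * math.cos(phi), r * math.sin(phi)

# Left-hand side: derivative of f with respect to the new coordinate r
f_polar = lambda r_, phi_: f(r_ * math.cos(phi_), r_ * math.sin(phi_))
lhs = d(f_polar, (r, phi), 0)

# Right-hand side: covariant rule (df/dx)(dx/dr) + (df/dy)(dy/dr),
# with dx/dr = cos(phi) and dy/dr = sin(phi)
rhs = d(f, (x, y), 0) * math.cos(phi) + d(f, (x, y), 1) * math.sin(phi)

assert abs(lhs - rhs) < 1e-5
```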


Basis vectors transform covariantly

A vector can be expressed in terms of basis vectors. For a certain coordinate system, we can choose the vectors tangent to the coordinate grid. This basis is called the coordinate basis. To illustrate the transformation properties, consider again the set of points ''p'', identifiable in a given coordinate system x^i, where i = 0, 1, \dots (a manifold). A scalar function ''f'', that assigns a real number to every point ''p'' in this space, is a function of the coordinates f\left(x^0, x^1, \dots\right). A curve is a one-parameter collection of points ''c'', say with curve parameter λ, ''c''(λ). A tangent vector \mathbf{v} to the curve is the derivative dc/d\lambda along the curve, with the derivative taken at the point ''p'' under consideration. Note that we can see the tangent vector \mathbf{v} as an operator (the directional derivative) which can be applied to a function
:\mathbf{v}[f] \ \stackrel{\mathrm{def}}{=}\ \frac{dc}{d\lambda}[f] = \frac{d}{d\lambda} f(c(\lambda))
The parallel between the tangent vector and the operator can also be worked out in coordinates
:\mathbf{v}[f] = \frac{dx^i}{d\lambda} \frac{\partial f}{\partial x^i}
or in terms of operators \partial/\partial x^i
:\mathbf{v} = \frac{dx^i}{d\lambda} \frac{\partial}{\partial x^i} = \frac{dx^i}{d\lambda} \mathbf{e}_i
where we have written \mathbf{e}_i = \partial/\partial x^i, the tangent vectors to the curves which are simply the coordinate grid itself. If we adopt a new coordinate system {x'}^i,\; i=0,1,\dots, then for each ''i'' the old coordinate x^i can be expressed as a function of the new system, so x^i\left({x'}^j\right),\; j=0,1,\dots Let \mathbf{e}'_i = \partial/\partial {x'}^i be the basis of tangent vectors in this new coordinate system. We can express \mathbf{e}'_i in the new system by applying the chain rule on ''x''. As a function of coordinates we find the following transformation
: \mathbf{e}'_i = \frac{\partial}{\partial {x'}^i} = \frac{\partial x^j}{\partial {x'}^i} \frac{\partial}{\partial x^j} = \frac{\partial x^j}{\partial {x'}^i} \mathbf{e}_j
which indeed is the same as the covariant transformation for the derivative of a function.
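The operator view of a tangent vector can be illustrated numerically. In this sketch (the curve and function are chosen by us for illustration), applying \mathbf{v} = (dx^i/d\lambda)\,\partial/\partial x^i to ''f'' agrees with differentiating ''f'' along the curve:

```python
import math

def f(x, y):                       # a sample scalar function
    return x * y + x**2

def c(lam):                        # a sample curve c(lambda) in the plane
    return (math.cos(lam), lam**2)

def deriv(g, t, h=1e-6):           # central finite difference
    return (g(t + h) - g(t - h)) / (2 * h)

lam = 0.5
x, y = c(lam)

# dx^i/dlambda along the curve
dx = deriv(lambda t: c(t)[0], lam)
dy = deriv(lambda t: c(t)[1], lam)

# v[f] as (dx^i/dlambda) * (partial f / partial x^i),
# with the partial derivatives computed by hand:
df_dx = y + 2 * x                  # partial f / partial x
df_dy = x                          # partial f / partial y
v_f = dx * df_dx + dy * df_dy

# ... equals the derivative of f along the curve, d f(c(lambda)) / dlambda
along_curve = deriv(lambda t: f(*c(t)), lam)
assert abs(v_f - along_curve) < 1e-5
```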


Contravariant transformation

The ''components'' of a (tangent) vector transform in a different way, called the contravariant transformation. Consider a tangent vector \mathbf{v} and call its components v^i on a basis \mathbf{e}_i. On another basis \mathbf{e}'_i we call the components {v'}^i, so
:\mathbf{v} = v^i \mathbf{e}_i = {v'}^i \mathbf{e}'_i
in which
: v^i = \frac{dx^i}{d\lambda} \quad\mbox{and}\quad {v'}^i = \frac{d{x'}^i}{d\lambda}
If we express the new components in terms of the old ones, then
: {v'}^i = \frac{d{x'}^i}{d\lambda} = \frac{\partial {x'}^i}{\partial x^j} \frac{dx^j}{d\lambda} = \frac{\partial {x'}^i}{\partial x^j} v^j
This is the explicit form of a transformation called the contravariant transformation, and we note that it is precisely the inverse of the covariant rule. In order to distinguish them from the covariant (tangent) vectors, the index is placed on top.
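A quick numerical check of this rule (with a Cartesian-to-polar change of coordinates and a straight-line curve, both chosen by us for illustration): the new components d{x'}^i/d\lambda of a tangent vector agree with the Jacobian applied to the old components:

```python
import math

def c(lam):                        # a curve in the old (Cartesian) coordinates
    return (1.0 + lam, 2.0 * lam)

def to_polar(x, y):                # the new coordinates x'^i = (r, phi)
    return (math.hypot(x, y), math.atan2(y, x))

def deriv(g, t, h=1e-6):
    return (g(t + h) - g(t - h)) / (2 * h)

lam = 0.3
x, y = c(lam)
r = math.hypot(x, y)

# Old components: v^i = dx^i/dlambda
vx = deriv(lambda t: c(t)[0], lam)
vy = deriv(lambda t: c(t)[1], lam)

# Contravariant rule: v'^i = (partial x'^i / partial x^j) v^j,
# using the hand-computed Jacobian of (r, phi) with respect to (x, y)
dr_dx, dr_dy = x / r, y / r
dphi_dx, dphi_dy = -y / r**2, x / r**2
vr_rule = dr_dx * vx + dr_dy * vy
vphi_rule = dphi_dx * vx + dphi_dy * vy

# Direct computation: v'^i = dx'^i/dlambda along the curve
vr_direct = deriv(lambda t: to_polar(*c(t))[0], lam)
vphi_direct = deriv(lambda t: to_polar(*c(t))[1], lam)

assert abs(vr_rule - vr_direct) < 1e-5
assert abs(vphi_rule - vphi_direct) < 1e-5
```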


Differential forms transform contravariantly

An example of a contravariant transformation is given by a differential form ''df''. For ''f'' as a function of coordinates x^i, ''df'' can be expressed in terms of dx^i. The differentials ''dx'' transform according to the contravariant rule since
:d{x'}^i = \frac{\partial {x'}^i}{\partial x^j} dx^j


Dual properties

Entities that transform covariantly (like basis vectors) and those that transform contravariantly (like components of a vector and differential forms) are "almost the same" and yet they are different. They have "dual" properties. What is behind this is mathematically known as the dual space, which always goes together with a given linear vector space.

Take any vector space T. A function ''f'' on T is called linear if, for any vectors \mathbf{v}, \mathbf{w} and scalar α:
:\begin{align} f(\mathbf{v} + \mathbf{w}) &= f(\mathbf{v}) + f(\mathbf{w}) \\ f(\alpha \mathbf{v}) &= \alpha f(\mathbf{v}) \end{align}
A simple example is the function which assigns a vector the value of one of its components (called a ''projection function''). It has a vector as argument and assigns a real number, the value of a component.

All such ''scalar-valued'' linear functions together form a vector space, called the dual space of T. The sum ''f'' + ''g'' is again a linear function for linear ''f'' and ''g'', and the same holds for scalar multiplication α''f''. Given a basis \mathbf{e}_i for T, we can define a basis, called the dual basis, for the dual space in a natural way by taking the set of linear functions mentioned above: the projection functions. Each projection function (indexed by ω) produces the number 1 when applied to one of the basis vectors \mathbf{e}_i. For example, \omega^0 gives 1 on \mathbf{e}_0 and zero elsewhere. Applying this linear function \omega^0 to a vector \mathbf{v} = v^i \mathbf{e}_i gives (using its linearity)
: \omega^0(\mathbf{v}) = \omega^0(v^i \mathbf{e}_i) = v^i \omega^0(\mathbf{e}_i) = v^0
so just the value of the first coordinate. For this reason it is called the projection function.

There are as many dual basis vectors \omega^i as there are basis vectors \mathbf{e}_i, so the dual space has the same dimension as the linear space itself. It is "almost the same space", except that the elements of the dual space (called dual vectors) transform covariantly and the elements of the tangent vector space transform contravariantly. Sometimes an extra notation is introduced where the real value of a linear function σ on a tangent vector \mathbf{u} is given as
:\sigma[\mathbf{u}] := \langle \sigma, \mathbf{u} \rangle
where \langle \sigma, \mathbf{u} \rangle is a real number. This notation emphasizes the bilinear character of the form: it is linear in σ since that is a linear function, and it is linear in \mathbf{u} since that is an element of a vector space.
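The projection functions can be written down directly. This minimal sketch (illustrative only; finite-dimensional vectors are represented as tuples of components) encodes each dual basis element \omega^i as the function returning the ''i''-th component, and checks \omega^i(\mathbf{e}_j) = \delta^i_j as well as linearity:

```python
def omega(i):
    """Dual basis element omega^i: the projection function returning the
    i-th component of a vector given as a tuple of components on e_i."""
    return lambda v: v[i]

# Components of the basis vectors e_0, e_1, e_2 on their own basis
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

# omega^i(e_j) is 1 if i == j, else 0 (the Kronecker delta)
assert all(omega(i)(e[j]) == (1 if i == j else 0)
           for i in range(3) for j in range(3))

# Linearity: omega^0(a*v + w) = a*omega^0(v) + omega^0(w)
v, w, a = (2, 5, 1), (4, -1, 3), 3
av_plus_w = tuple(a * vi + wi for vi, wi in zip(v, w))
assert omega(0)(av_plus_w) == a * omega(0)(v) + omega(0)(w)
```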


Co- and contravariant tensor components


Without coordinates

A tensor of type (''r'', ''s'') may be defined as a real-valued multilinear function of ''r'' dual vectors and ''s'' vectors. Since vectors and dual vectors may be defined without dependence on a coordinate system, a tensor defined in this way is independent of the choice of a coordinate system. The notation of a tensor is
:\begin{align} &T\left(\sigma, \ldots, \rho, \mathbf{u}, \ldots, \mathbf{v}\right) \\ \equiv\ &{T^{\sigma \ldots \rho}}_{\mathbf{u} \ldots \mathbf{v}} \end{align}
for dual vectors (differential forms) ''ρ'', ''σ'' and tangent vectors \mathbf{u}, \mathbf{v}. In the second notation the distinction between vectors and differential forms is more obvious.


With coordinates

Because a tensor depends linearly on its arguments, it is completely determined if one knows its values on a basis \omega^i, \ldots, \omega^j and \mathbf{e}_k, \ldots, \mathbf{e}_l:
: T(\omega^i, \ldots, \omega^j, \mathbf{e}_k, \ldots, \mathbf{e}_l) = {T^{i \ldots j}}_{k \ldots l}
The numbers {T^{i \ldots j}}_{k \ldots l} are called the components of the tensor on the chosen basis. If we choose another basis (whose vectors are linear combinations of the original basis vectors), we can use the linear properties of the tensor and we will find that the tensor components with upper indices transform as the dual basis vectors do (so contravariantly), whereas the lower indices transform as the basis of tangent vectors and are thus covariant. For a tensor of rank 2, we can verify that
: {A'}_{ij} = \frac{\partial x^k}{\partial {x'}^i} \frac{\partial x^l}{\partial {x'}^j} A_{kl} \qquad \mbox{covariant tensor}
: {A'}^{ij} = \frac{\partial {x'}^i}{\partial x^k} \frac{\partial {x'}^j}{\partial x^l} A^{kl} \qquad \mbox{contravariant tensor}
For a mixed co- and contravariant tensor of rank 2
: {{A'}^i}_j = \frac{\partial {x'}^i}{\partial x^l} \frac{\partial x^m}{\partial {x'}^j} {A^l}_m \qquad \mbox{mixed co- and contravariant tensor}
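The rank-2 contravariant rule can be sanity-checked for a linear change of coordinates {x'}^i = L^i_j x^j, where the Jacobian \partial {x'}^i / \partial x^j is the constant matrix L. In this sketch (the matrix and vectors are arbitrary choices of ours), a tensor built as an outer product of two vectors transforms with two factors of L, consistent with each factor transforming contravariantly:

```python
# An invertible linear coordinate transformation; its Jacobian is L itself.
L = [[2.0, 1.0],
     [0.0, 3.0]]

u, v = [1.0, 2.0], [3.0, -1.0]     # two contravariant vectors

def contra(vec):
    """Contravariant rule for a vector: v'^i = L^i_j v^j."""
    return [sum(L[i][j] * vec[j] for j in range(2)) for i in range(2)]

def outer(a, b):
    """Outer product: A^{kl} = a^k b^l."""
    return [[a[k] * b[l] for l in range(2)] for k in range(2)]

A = outer(u, v)

# Transform the tensor with two factors of the Jacobian:
# A'^{ij} = L^i_k L^j_l A^{kl}
A_rule = [[sum(L[i][k] * L[j][l] * A[k][l]
               for k in range(2) for l in range(2))
           for j in range(2)] for i in range(2)]

# ... which must agree with the outer product of the transformed vectors.
A_direct = outer(contra(u), contra(v))
assert all(abs(A_rule[i][j] - A_direct[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```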


See also

* Covariance and contravariance of vectors
* General covariance
* Lorentz covariance