Raising and lowering indices

In mathematics and mathematical physics, raising and lowering indices are operations on tensors which change their type. Raising and lowering indices are a form of index manipulation in tensor expressions.


Vectors, covectors and the metric


Mathematical formulation

Mathematically vectors are elements of a vector space V over a field K, and for use in physics V is usually defined with K=\mathbb{R} or \mathbb{C}. Concretely, if the dimension n=\text{dim}(V) of V is finite, then, after making a choice of basis, we can view such vector spaces as \mathbb{R}^n or \mathbb{C}^n.

The dual space is the space of linear functionals mapping V\rightarrow K. Concretely, in matrix notation these can be thought of as row vectors, which give a number when applied to column vectors. We denote this by V^* := \text{Hom}(V,K), so that \alpha \in V^* is a linear map \alpha:V\rightarrow K.

Then under a choice of basis \{e_i\}, we can view vectors v\in V as a K^n vector with components v^i (vectors are taken by convention to have indices up). This picks out a choice of basis \{e^i\} for V^*, defined by the set of relations e^i(e_j) = \delta^i_j.

For applications, raising and lowering is done using a structure known as the (pseudo-)metric tensor (the 'pseudo-' refers to the fact we allow the metric to be indefinite). Formally, this is a non-degenerate, symmetric bilinear form
:g:V\times V\rightarrow K \text{ (bilinear)}
:g(u,v) = g(v,u) \text{ for all } u,v\in V \text{ (symmetric)}
:\forall v\in V \text{ with } v\neq \vec{0}, \ \exists u\in V \text{ such that } g(v,u)\neq 0 \text{ (non-degenerate).}
In this basis, it has components g(e_i,e_j) = g_{ij}, and can be viewed as a symmetric matrix in \text{Mat}_{n\times n}(K) with these components. The inverse metric exists due to non-degeneracy and is denoted g^{ij}, and as a matrix is the inverse to g_{ij}.
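As a concrete illustration, the following is a minimal numerical sketch in Python with NumPy (the matrix g below is an arbitrarily chosen example, not taken from the text): it checks symmetry and non-degeneracy of the component matrix g_{ij} and computes the inverse metric g^{ij}.

    import numpy as np

    # Components g_ij of a (pseudo-)metric in some chosen basis.
    # This example matrix is indefinite but symmetric and non-degenerate.
    g = np.array([[1.0,  0.0,  0.0],
                  [0.0, -1.0,  0.5],
                  [0.0,  0.5, -1.0]])

    assert np.allclose(g, g.T)                # symmetric: g(u, v) = g(v, u)
    assert abs(np.linalg.det(g)) > 1e-12      # non-degenerate: det(g_ij) != 0

    g_inv = np.linalg.inv(g)                  # components g^ij of the inverse metric
    assert np.allclose(g_inv @ g, np.eye(3))  # g^{ik} g_{kj} = delta^i_j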


Raising and lowering vectors and covectors

Raising and lowering is then done in coordinates. Given a vector with components v^i, we can contract with the metric to obtain a covector:
:g_{ij}v^j = v_i
and this is what we mean by lowering the index. Conversely, contracting a covector with the inverse metric gives a vector:
:g^{ij}\alpha_j = \alpha^i.
This process is called raising the index.

Raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the metric and inverse metric tensors being inverse to each other (as is suggested by the terminology):
:g^{ij}g_{jk} = g_{kj}g^{ji} = \delta^i{}_k = \delta_k{}^i
where \delta^i_j is the Kronecker delta or identity matrix.
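In components this is just matrix-vector multiplication with g_{ij} and g^{ij}. A short sketch in Python with NumPy (the metric and vector below are arbitrary example data, not taken from the text):

    import numpy as np

    g = np.array([[-1.0, 0.0],
                  [ 0.0, 1.0]])           # example metric components g_ij
    g_inv = np.linalg.inv(g)              # inverse metric g^ij

    v_up = np.array([3.0, 4.0])           # vector components v^i

    v_down = g @ v_up                     # lowering: v_i = g_ij v^j  ->  [-3., 4.]
    v_up_again = g_inv @ v_down           # raising:  v^i = g^ij v_j

    assert np.allclose(v_up_again, v_up)  # raising undoes lowering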
Finite-dimensional real vector spaces with (pseudo-)metrics are classified up to signature, a coordinate-free property which is well-defined by Sylvester's law of inertia. Possible metrics on real space are indexed by signature (p,q). This is a metric associated to n = p+q dimensional real space. The metric has signature (p,q) if there exists a basis (referred to as an orthonormal basis) such that in this basis, the metric takes the form
:(g_{ij}) = \text{diag}(+1, \cdots, +1, -1, \cdots, -1)
with p positive ones and q negative ones. The concrete space with elements which are n-vectors and this concrete realization of the metric is denoted \mathbb{R}^{p,q} = (\mathbb{R}^n, g_{ij}), where the 2-tuple (\mathbb{R}^n, g_{ij}) is meant to make it clear that the underlying vector space of \mathbb{R}^{p,q} is \mathbb{R}^n: equipping this vector space with the metric g_{ij} is what turns the space into \mathbb{R}^{p,q}.

Examples:
* \mathbb{R}^3 is a model for 3-dimensional space. The metric is equivalent to the standard dot product.
* \mathbb{R}^{n,0} = \mathbb{R}^n, equivalent to n-dimensional real space as an inner product space with g_{ij} = \delta_{ij}. In Euclidean space, raising and lowering is not necessary due to vectors and covector components being the same.
* \mathbb{R}^{1,3} is Minkowski space (or rather, Minkowski space in a choice of orthonormal basis), a model for spacetime with weak curvature.

It is common convention to use Greek indices when writing expressions involving tensors in Minkowski space, while Latin indices are reserved for Euclidean space.

Well-formulated expressions are constrained by the rules of Einstein summation: any index may appear at most twice, and furthermore a raised index must contract with a lowered index. With these rules we can immediately see that an expression such as
:g_{ij}v^iu^j
is well formulated while
:g_{ij}v_iu_j
is not.
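As a sketch of how signature can be read off numerically (Python with NumPy; the matrices below are arbitrary examples): by Sylvester's law of inertia, the counts of positive and negative eigenvalues of the symmetric component matrix do not depend on the chosen basis.

    import numpy as np

    def signature(g):
        # (p, q) = numbers of positive and negative eigenvalues of g_ij.
        eigvals = np.linalg.eigvalsh(g)   # g is symmetric, so eigvalsh applies
        return int(np.sum(eigvals > 0)), int(np.sum(eigvals < 0))

    g = np.diag([1.0, -1.0, -1.0])        # metric of R^{1,2} in an orthonormal basis
    print(signature(g))                   # (1, 2)

    S = np.array([[2.0, 1.0, 0.0],
                  [0.0, 1.0, 3.0],
                  [1.0, 0.0, 1.0]])       # an arbitrary invertible change of basis
    assert signature(S.T @ g @ S) == signature(g)   # invariant under change of basis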


Example in Minkowski spacetime

The covariant 4-position is given by
:X_\mu = (-ct, x, y, z)
with components:
:X_0 = -ct, \quad X_1 = x, \quad X_2 = y, \quad X_3 = z
(where x, y, z are the usual Cartesian coordinates) and the Minkowski metric tensor with metric signature (− + + +) is defined as
:\eta_{\mu\nu} = \eta^{\mu\nu} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}
in components:
:\eta_{00} = -1, \quad \eta_{0i} = \eta_{i0} = 0, \quad \eta_{ij} = \delta_{ij}\,(i,j \neq 0).
To raise the index, multiply by the tensor and contract:
:X^\lambda = \eta^{\lambda\mu}X_\mu = \eta^{\lambda 0}X_0 + \eta^{\lambda i}X_i
then for \lambda = 0:
:X^0 = \eta^{00}X_0 + \eta^{0i}X_i = -X_0
and for \lambda = j = 1, 2, 3:
:X^j = \eta^{j0}X_0 + \eta^{ji}X_i = \delta^{ji}X_i = X_j \,.
So the index-raised contravariant 4-position is:
:X^\mu = (ct, x, y, z)\,.
This operation is equivalent to the matrix multiplication
:\begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} -ct \\ x \\ y \\ z \end{pmatrix} = \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix}.
Given two vectors, X^\mu and Y^\mu, we can write down their (pseudo-)inner product in two ways:
:\eta_{\mu\nu}X^\mu Y^\nu.
By lowering indices, we can write this expression as
:X_\mu Y^\mu.
What is this in matrix notation? The first expression can be written as
:\begin{pmatrix} X^0 & X^1 & X^2 & X^3 \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} Y^0 \\ Y^1 \\ Y^2 \\ Y^3 \end{pmatrix}
while the second is, after lowering the indices of X^\mu,
:\begin{pmatrix} -X^0 & X^1 & X^2 & X^3 \end{pmatrix} \begin{pmatrix} Y^0 \\ Y^1 \\ Y^2 \\ Y^3 \end{pmatrix}.
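The same computation can be sketched numerically in Python with NumPy (the values of c, t, x, y, z and the second 4-vector below are made-up placeholders, not taken from the text):

    import numpy as np

    eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # Minkowski metric, (- + + +) signature

    c, t, x, y, z = 1.0, 2.0, 0.5, -1.0, 3.0  # arbitrary example values
    X_down = np.array([-c * t, x, y, z])      # covariant 4-position X_mu

    X_up = eta @ X_down                       # raising: X^mu = eta^{mu nu} X_nu
    print(X_up)                               # [ 2.   0.5 -1.   3. ], i.e. (ct, x, y, z)

    # The (pseudo-)inner product written both ways agrees:
    Y_up = np.array([1.0, 0.0, 2.0, -2.0])    # another example 4-vector Y^mu
    s1 = X_up @ eta @ Y_up                    # eta_{mu nu} X^mu Y^nu
    s2 = X_down @ Y_up                        # X_mu Y^mu
    assert np.isclose(s1, s2)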


Coordinate-free formalism

It is instructive to consider what raising and lowering means in the abstract linear algebra setting. We first fix definitions: V is a finite-dimensional vector space over a field K. Typically K=\mathbb{R} or \mathbb{C}. \phi is a non-degenerate bilinear form, that is, \phi:V\times V\rightarrow K is a map which is linear in both arguments. By \phi being non-degenerate we mean that for each v\in V with v\neq \vec{0}, there is a u\in V such that
:\phi(v,u)\neq 0.
In concrete applications, \phi is often considered a structure on the vector space, for example an inner product or more generally a metric tensor which is allowed to have indefinite signature, or a symplectic form \omega. Together these cover the cases where \phi is either symmetric or anti-symmetric, but in full generality \phi need not be either of these cases.

There is a partial evaluation map associated to \phi,
:\phi(\cdot, - ):V\rightarrow V^*; \quad v\mapsto \phi(v,\cdot)
where \cdot denotes an argument which is to be evaluated, and - denotes an argument whose evaluation is deferred. Then \phi(v,\cdot) is an element of V^*, which sends u\mapsto \phi(v,u).

We made a choice to define this partial evaluation map as being evaluated on the first argument. We could just as well have defined it on the second argument, and non-degeneracy is independent of the argument chosen. Also, when \phi has well-defined (anti-)symmetry, evaluating on either argument is equivalent (up to a minus sign for anti-symmetry).

Non-degeneracy shows that the partial evaluation map is injective, or equivalently that the kernel of the map is trivial. In finite dimension, the dual space V^* has dimension equal to that of V, so non-degeneracy is enough to conclude that the map is a linear isomorphism. If \phi is a structure on the vector space, this is sometimes called the canonical isomorphism V\rightarrow V^*. It therefore has an inverse, \phi^{-1}:V^*\rightarrow V, and this is enough to define an associated bilinear form on the dual:
:\phi^{-1}:V^*\times V^*\rightarrow K, \quad \phi^{-1}(\alpha,\beta) = \phi(\phi^{-1}(\alpha),\phi^{-1}(\beta)),
where the repeated use of \phi^{-1} is disambiguated by the argument taken. That is, \phi^{-1}(\alpha) is the inverse map, while \phi^{-1}(\alpha,\beta) is the bilinear form. Checking these expressions in coordinates makes it evident that this is what raising and lowering indices means abstractly.
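A sketch of the partial evaluation map in Python with NumPy (the matrix B is an arbitrary, deliberately non-symmetric example representing \phi(v,u) = v^\mathsf{T} B u in a chosen basis; the names flat and sharp are illustrative, not from the text):

    import numpy as np

    B = np.array([[1.0, 2.0],
                  [0.0, 1.0]])               # non-degenerate, not symmetric
    assert abs(np.linalg.det(B)) > 1e-12     # non-degeneracy

    def phi(v, u):
        return v @ B @ u                     # the bilinear form phi(v, u)

    def flat(v):
        # Partial evaluation phi(v, .): components of the covector assigned to v.
        return B.T @ v                       # flat(v) . u == phi(v, u)

    def sharp(alpha):
        # Inverse of flat: the unique vector mapped to the covector alpha.
        return np.linalg.solve(B.T, alpha)

    def phi_dual(alpha, beta):
        # Induced bilinear form on the dual space.
        return phi(sharp(alpha), sharp(beta))

    v, u = np.array([1.0, -1.0]), np.array([2.0, 5.0])
    assert np.isclose(flat(v) @ u, phi(v, u))   # flat(v) evaluates as phi(v, .)
    assert np.allclose(sharp(flat(v)), v)       # sharp inverts flat

When B is symmetric, so that \phi is a metric, the matrix representing phi_dual is just the inverse of B; in index notation this is the statement that the dual form has components g^{ij}.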


Tensors

We will not develop the abstract formalism for tensors straightaway. Formally, an (r,s) tensor is an object described via its components, and has r components up, s components down. A generic (r,s) tensor is written
:T^{\mu_1 \cdots \mu_r}{}_{\nu_1 \cdots \nu_s}.
We can use the metric tensor to raise and lower tensor indices just as we raised and lowered vector indices and raised covector indices.


Examples

* A (0,0) tensor is a number in the field K.
* A (1,0) tensor is a vector.
* A (0,1) tensor is a covector.
* A (0,2) tensor is a bilinear form. An example is the metric tensor g_{\mu\nu}.
* A (1,1) tensor is a linear map. An example is the delta, \delta^\mu_\nu, which is the identity map, or a Lorentz transformation \Lambda^\mu{}_\nu.


Example of raising and lowering

For a (0,2) tensor, twice contracting with the inverse metric tensor and contracting in different indices raises each index:
:A^{\mu\nu} = g^{\mu\rho}g^{\nu\sigma}A_{\rho\sigma}.
Similarly, twice contracting with the metric tensor and contracting in different indices lowers each index:
:A_{\mu\nu} = g_{\mu\rho}g_{\nu\sigma}A^{\rho\sigma}.
Let's apply this to the theory of electromagnetism. The contravariant electromagnetic tensor in the (− + + +) signature is given by (NB: some texts will show this tensor with an overall factor of −1, because they use the negative of the metric tensor used here, see metric signature; in older texts such as Jackson (2nd edition) there are no factors of c, since they are using Gaussian units; here SI units are used)
:F^{\mu\nu} = \begin{pmatrix} 0 & -\frac{E_x}{c} & -\frac{E_y}{c} & -\frac{E_z}{c} \\ \frac{E_x}{c} & 0 & -B_z & B_y \\ \frac{E_y}{c} & B_z & 0 & -B_x \\ \frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix}.
In components,
:F^{0i} = -F^{i0} = -\frac{E^i}{c}, \quad F^{ij} = -\varepsilon^{ijk}B_k.
To obtain the covariant tensor F_{\mu\nu}, contract with the metric tensor:
:\begin{align} F_{\mu\nu} & = \eta_{\mu\alpha}\eta_{\nu\beta}F^{\alpha\beta} \\ & = \eta_{\mu 0}\eta_{\nu 0}F^{00} + \eta_{\mu i}\eta_{\nu 0}F^{i0} + \eta_{\mu 0}\eta_{\nu i}F^{0i} + \eta_{\mu i}\eta_{\nu j}F^{ij} \end{align}
and since F^{00} = 0 and F^{0i} = -F^{i0}, this reduces to
:F_{\mu\nu} = \left(\eta_{\mu i}\eta_{\nu 0} - \eta_{\mu 0}\eta_{\nu i}\right)F^{i0} + \eta_{\mu i}\eta_{\nu j}F^{ij}.
Now for \mu = 0, \nu = k = 1, 2, 3:
:\begin{align} F_{0k} & = \left(\eta_{0i}\eta_{k0} - \eta_{00}\eta_{ki}\right)F^{i0} + \eta_{0i}\eta_{kj}F^{ij} \\ & = \bigl(0 - (-\delta_{ki})\bigr)F^{i0} + 0 \\ & = F^{k0} = -F^{0k} \end{align}
and by antisymmetry, for \mu = k = 1, 2, 3, \nu = 0:
:F_{k0} = -F^{k0}
then finally for \mu = k = 1, 2, 3, \nu = l = 1, 2, 3;
:\begin{align} F_{kl} & = \left(\eta_{ki}\eta_{l0} - \eta_{k0}\eta_{li}\right)F^{i0} + \eta_{ki}\eta_{lj}F^{ij} \\ & = 0 + \delta_{ki}\delta_{lj}F^{ij} \\ & = F^{kl} \end{align}
The (covariant) lower indexed tensor is then:
:F_{\mu\nu} = \begin{pmatrix} 0 & \frac{E_x}{c} & \frac{E_y}{c} & \frac{E_z}{c} \\ -\frac{E_x}{c} & 0 & -B_z & B_y \\ -\frac{E_y}{c} & B_z & 0 & -B_x \\ -\frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix}.
This operation is equivalent to the matrix multiplication
:\begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & -\frac{E_x}{c} & -\frac{E_y}{c} & -\frac{E_z}{c} \\ \frac{E_x}{c} & 0 & -B_z & B_y \\ \frac{E_y}{c} & B_z & 0 & -B_x \\ \frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix} \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & \frac{E_x}{c} & \frac{E_y}{c} & \frac{E_z}{c} \\ -\frac{E_x}{c} & 0 & -B_z & B_y \\ -\frac{E_y}{c} & B_z & 0 & -B_x \\ -\frac{E_z}{c} & -B_y & B_x & 0 \end{pmatrix}.
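The same lowering can be sketched in Python with NumPy (the field components are arbitrary placeholders and c is set to 1 for brevity; none of these values come from the text):

    import numpy as np

    c = 1.0
    Ex, Ey, Ez = 1.0, 2.0, 3.0            # arbitrary example field components
    Bx, By, Bz = 0.5, -1.0, 2.0

    F_up = np.array([                      # contravariant F^{mu nu}
        [0.0,  -Ex/c, -Ey/c, -Ez/c],
        [Ex/c,  0.0,  -Bz,    By ],
        [Ey/c,  Bz,    0.0,  -Bx ],
        [Ez/c, -By,    Bx,    0.0]])

    eta = np.diag([-1.0, 1.0, 1.0, 1.0])

    # F_{mu nu} = eta_{mu rho} eta_{nu sigma} F^{rho sigma}; since eta is symmetric,
    # this is the matrix product eta @ F_up @ eta.
    F_down = eta @ F_up @ eta
    F_down_einsum = np.einsum('mr,ns,rs->mn', eta, eta, F_up)
    assert np.allclose(F_down, F_down_einsum)

    # Time-space components change sign, space-space components are unchanged.
    assert np.allclose(F_down[0, 1:], -F_up[0, 1:])
    assert np.allclose(F_down[1:, 1:], F_up[1:, 1:])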


General rank

For a tensor of order n, indices are raised by (compatible with above):
:g^{j_1 i_1}g^{j_2 i_2}\cdots g^{j_n i_n}A_{i_1 i_2 \cdots i_n} = A^{j_1 j_2 \cdots j_n}
and lowered by:
:g_{j_1 i_1}g_{j_2 i_2}\cdots g_{j_n i_n}A^{i_1 i_2 \cdots i_n} = A_{j_1 j_2 \cdots j_n}
and for a mixed tensor:
:g_{p_1 i_1}g_{p_2 i_2}\cdots g_{p_n i_n}\,g^{q_1 j_1}g^{q_2 j_2}\cdots g^{q_m j_m}\,A^{i_1 i_2 \cdots i_n}{}_{j_1 j_2 \cdots j_m} = A_{p_1 p_2 \cdots p_n}{}^{q_1 q_2 \cdots q_m}.
We need not raise or lower all indices at once: it is perfectly fine to raise or lower a single index. Lowering an index of an (r,s) tensor gives an (r-1,s+1) tensor, while raising an index gives an (r+1,s-1) tensor (where r,s have suitable values; for example, we cannot lower the index of a (0,2) tensor).
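As a final sketch (Python with NumPy; the tensor entries are random example data, not taken from the text), raising a single index of a (0,2) tensor to obtain a (1,1) tensor, and lowering it again:

    import numpy as np

    rng = np.random.default_rng(0)
    g = np.diag([-1.0, 1.0, 1.0, 1.0])       # example metric g_{mu nu}
    g_inv = np.linalg.inv(g)                 # inverse metric g^{mu nu}

    A_down = rng.normal(size=(4, 4))         # components A_{mu nu} of a (0,2) tensor

    # Raise the first index: A^mu_nu = g^{mu rho} A_{rho nu}  (a (1,1) tensor).
    A_mixed = np.einsum('mr,rn->mn', g_inv, A_down)

    # Lower it again: g_{mu rho} A^rho_nu recovers the original components.
    assert np.allclose(np.einsum('mr,rn->mn', g, A_mixed), A_down)

    # Raise both indices at once: A^{mu nu} = g^{mu rho} g^{nu sigma} A_{rho sigma}.
    A_up = np.einsum('mr,ns,rs->mn', g_inv, g_inv, A_down)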


See also

* Ricci calculus
* Einstein notation
* Metric tensor
* Musical isomorphism

