Transpose

In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix, producing another matrix, often denoted by A^T (among other notations). The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley. In the case of a logical matrix representing a binary relation R, the transpose corresponds to the converse relation R^T.
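To make the connection with relations concrete, the following short Python sketch (an illustration added here, not part of the original text) encodes a binary relation on a three-element set as a logical matrix and checks that transposing the matrix corresponds to taking the converse relation; the names ''R'' and ''relation_to_matrix'' are ad hoc.

```python
# Sketch: the transpose of a logical matrix encodes the converse relation.
# The relation R and the helper below are illustrative, not from the article.
R = {(0, 1), (0, 2), (2, 1)}          # a binary relation on {0, 1, 2}
n = 3

def relation_to_matrix(rel, n):
    """Logical (0/1) matrix M with M[i][j] = 1 iff (i, j) is in rel."""
    return [[1 if (i, j) in rel else 0 for j in range(n)] for i in range(n)]

M = relation_to_matrix(R, n)
M_T = [[M[j][i] for j in range(n)] for i in range(n)]   # transpose by index swap

converse = {(j, i) for (i, j) in R}                      # the converse relation R^T
assert M_T == relation_to_matrix(converse, n)
```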


Transpose of a matrix


Definition

The transpose of a matrix A, denoted by A^T, A^\top, A^{\intercal}, A', or ^tA, may be constructed by any one of the following methods:
# Reflect A over its main diagonal (which runs from top-left to bottom-right) to obtain A^T;
# Write the rows of A as the columns of A^T;
# Write the columns of A as the rows of A^T.
Formally, the ''i''-th row, ''j''-th column element of A^T is the ''j''-th row, ''i''-th column element of A:
:\left[\mathbf{A}^\operatorname{T}\right]_{ij} = \left[\mathbf{A}\right]_{ji}.
If A is an ''m'' × ''n'' matrix, then A^T is an ''n'' × ''m'' matrix. In the case of square matrices, A^T may also denote the T-th power of the matrix A. To avoid possible confusion, many authors use left upperscripts, that is, they denote the transpose as ^TA. An advantage of this notation is that no parentheses are needed when exponents are involved: as (^TA)^n = ^T(A^n), the notation ^TA^n is not ambiguous. In this article, this confusion is avoided by never using the symbol T as a variable name.
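The element-wise rule above translates directly into code. The following Python sketch (an added illustration; the function name ''transpose'' is arbitrary) builds A^T from a matrix given as a list of rows:

```python
def transpose(A):
    """Return the transpose of A, where A is a list of m rows of equal length n.

    By the rule [A^T][i][j] = A[j][i], the result has n rows and m columns.
    """
    m, n = len(A), len(A[0])
    return [[A[j][i] for j in range(m)] for i in range(n)]

A = [[1, 2],
     [3, 4],
     [5, 6]]          # a 3 x 2 matrix
print(transpose(A))   # [[1, 3, 5], [2, 4, 6]] -- a 2 x 3 matrix
```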


Matrix definitions involving transposition

A square matrix whose transpose is equal to itself is called a ''symmetric matrix''; that is, A is symmetric if
:\mathbf{A}^\operatorname{T} = \mathbf{A}.
A square matrix whose transpose is equal to its negative is called a ''skew-symmetric matrix''; that is, A is skew-symmetric if
:\mathbf{A}^\operatorname{T} = -\mathbf{A}.
A square complex matrix whose transpose is equal to the matrix with every entry replaced by its complex conjugate (denoted here with an overline) is called a ''Hermitian matrix'' (equivalent to the matrix being equal to its conjugate transpose); that is, A is Hermitian if
:\mathbf{A}^\operatorname{T} = \overline{\mathbf{A}}.
A square complex matrix whose transpose is equal to the negation of its complex conjugate is called a ''skew-Hermitian matrix''; that is, A is skew-Hermitian if
:\mathbf{A}^\operatorname{T} = -\overline{\mathbf{A}}.
A square matrix whose transpose is equal to its inverse is called an ''orthogonal matrix''; that is, A is orthogonal if
:\mathbf{A}^\operatorname{T} = \mathbf{A}^{-1}.
A square complex matrix whose transpose is equal to its conjugate inverse is called a ''unitary matrix''; that is, A is unitary if
:\mathbf{A}^\operatorname{T} = \overline{\mathbf{A}^{-1}}.
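These definitions can be spot-checked numerically. The sketch below (added here for illustration, using NumPy; the sample matrices are arbitrary choices) verifies one instance of each property by comparing a matrix with the appropriate transform of its transpose:

```python
import numpy as np

S = np.array([[1.0, 2.0], [2.0, 3.0]])             # symmetric: S^T == S
K = np.array([[0.0, 2.0], [-2.0, 0.0]])            # skew-symmetric: K^T == -K
H = np.array([[2.0, 1 - 1j], [1 + 1j, 3.0]])       # Hermitian: H^T == conj(H)
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],     # orthogonal (a rotation):
              [np.sin(theta),  np.cos(theta)]])    # Q^T == Q^{-1}
U = np.array([[1.0, 1j], [1j, 1.0]]) / np.sqrt(2)  # unitary: U^T == conj(U^{-1})

assert np.allclose(S.T, S)
assert np.allclose(K.T, -K)
assert np.allclose(H.T, np.conj(H))
assert np.allclose(Q.T, np.linalg.inv(Q))
assert np.allclose(U.T, np.conj(np.linalg.inv(U)))
```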


Examples

*\begin{bmatrix} 1 & 2 \end{bmatrix}^\operatorname{T} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}
* \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}^\operatorname{T} = \begin{bmatrix} 1 & 3 \\ 2 & 4 \end{bmatrix}
* \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}^\operatorname{T} = \begin{bmatrix} 1 & 3 & 5 \\ 2 & 4 & 6 \end{bmatrix}


Properties

Let A and B be matrices and c be a scalar.
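The standard identities for transposition of sums, products, and scalar multiples can be verified numerically. The following NumPy sketch (added here; the random matrices are placeholders) checks (A^T)^T = A, (A + B)^T = A^T + B^T, (cA)^T = cA^T, and (AC)^T = C^T A^T on a random example:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))
c = 2.5

assert np.allclose(A.T.T, A)                 # (A^T)^T = A
assert np.allclose((A + B).T, A.T + B.T)     # transposition respects addition
assert np.allclose((c * A).T, c * A.T)       # ... and scalar multiplication
assert np.allclose((A @ C).T, C.T @ A.T)     # (AC)^T = C^T A^T (order reverses)
```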


Products

If A is an ''m'' × ''n'' matrix and A^T is its transpose, then the result of matrix multiplication with these two matrices gives two square matrices: A A^T is ''m'' × ''m'' and A^T A is ''n'' × ''n''. Furthermore, these products are symmetric matrices. Indeed, the matrix product A A^T has entries that are the inner product of a row of A with a column of A^T. But the columns of A^T are the rows of A, so each entry corresponds to the inner product of two rows of A. If p_{ij} is the entry of the product, it is obtained from rows ''i'' and ''j'' in A. The entry p_{ji} is also obtained from these rows, thus p_{ij} = p_{ji}, and the product matrix (p_{ij}) is symmetric. Similarly, the product A^T A is a symmetric matrix. A quick proof of the symmetry of A A^T results from the fact that it is its own transpose:
:\left(\mathbf{A} \mathbf{A}^\operatorname{T}\right)^\operatorname{T} = \left(\mathbf{A}^\operatorname{T}\right)^\operatorname{T} \mathbf{A}^\operatorname{T} = \mathbf{A} \mathbf{A}^\operatorname{T}.
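As a quick numerical confirmation (an added sketch, not part of the original text), one can check with NumPy that both products of a rectangular matrix with its transpose are square and symmetric:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 5))     # a 3 x 5 matrix

G1 = A @ A.T                        # 3 x 3
G2 = A.T @ A                        # 5 x 5

assert G1.shape == (3, 3) and G2.shape == (5, 5)
assert np.allclose(G1, G1.T)        # A A^T is symmetric
assert np.allclose(G2, G2.T)        # A^T A is symmetric
```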


Implementation of matrix transposition on computers

On a computer, one can often avoid explicitly transposing a matrix in memory by simply accessing the same data in a different order. For example, software libraries for linear algebra, such as BLAS, typically provide options to specify that certain matrices are to be interpreted in transposed order to avoid the necessity of data movement.

However, there remain a number of circumstances in which it is necessary or desirable to physically reorder a matrix in memory to its transposed ordering. For example, with a matrix stored in row-major order, the rows of the matrix are contiguous in memory and the columns are discontiguous. If repeated operations need to be performed on the columns, for example in a fast Fourier transform algorithm, transposing the matrix in memory (to make the columns contiguous) may improve performance by increasing memory locality.

Ideally, one might hope to transpose a matrix with minimal additional storage. This leads to the problem of transposing an ''n'' × ''m'' matrix in-place, with O(1) additional storage, or at most storage much less than ''mn''. For ''n'' ≠ ''m'', this involves a complicated permutation of the data elements that is non-trivial to implement in-place. Therefore, efficient in-place matrix transposition has been the subject of numerous research publications in computer science, starting in the late 1950s, and several algorithms have been developed.
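The distinction between logically reinterpreting data and physically reordering it can be seen in NumPy (a sketch added here for illustration): `A.T` merely returns a view with swapped strides, while `np.ascontiguousarray` actually moves the data so that the transposed rows become contiguous in memory.

```python
import numpy as np

A = np.arange(6).reshape(2, 3)         # row-major (C-order) 2 x 3 matrix
At_view = A.T                          # no data movement: just swapped strides
At_copy = np.ascontiguousarray(A.T)    # physical reorder into row-major storage

print(A.strides, At_view.strides)      # e.g. (24, 8) vs (8, 24) for a 64-bit dtype
print(At_view.flags['C_CONTIGUOUS'])   # False: its rows are discontiguous
print(At_copy.flags['C_CONTIGUOUS'])   # True: the data was actually moved
assert np.array_equal(At_view, At_copy)
```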


Transposes of linear maps and bilinear forms

As the main use of matrices is to represent linear maps between finite-dimensional vector spaces, the transpose is an operation on matrices that may be seen as the representation of some operation on linear maps. This leads to a much more general definition of the transpose that works on every linear map, even when linear maps cannot be represented by matrices (such as in the case of infinite dimensional vector spaces). In the finite dimensional case, the matrix representing the transpose of a linear map is the transpose of the matrix representing the linear map, independently of the basis choice.


Transpose of a linear map

Let X^# denote the algebraic dual space of an R-module X. Let X and Y be R-modules. If u : X → Y is a linear map, then its algebraic adjoint or dual is the map ^tu : Y^# → X^# defined by f ↦ f ∘ u. The resulting functional u^#(f) is called the pullback of f by u. The following relation characterizes the algebraic adjoint of u:
:\left\langle {}^\operatorname{t}u(f), x \right\rangle = \left\langle f, u(x) \right\rangle \quad \text{for all } f \in Y^{\#} \text{ and } x \in X,
where ⟨·, ·⟩ is the natural pairing (i.e. defined by ⟨h, z⟩ := h(z)). This definition also applies unchanged to left modules and to vector spaces.

The definition of the transpose may be seen to be independent of any bilinear form on the modules, unlike the adjoint (below).

The continuous dual space of a topological vector space (TVS) X is denoted by X'. If X and Y are TVSs, then a linear map u : X → Y is weakly continuous if and only if ^tu(Y') ⊆ X', in which case we let ^tu : Y' → X' denote the restriction of ^tu to Y'. The map ^tu is called the transpose of u.

If the matrix A describes a linear map with respect to bases of V and W, then the matrix A^T describes the transpose of that linear map with respect to the dual bases.
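For a concrete finite-dimensional check (a sketch added here; the matrix and functional are arbitrary examples), represent a functional f on R^2 by its coefficient vector in the dual basis; the pullback f ∘ u is then represented by A^T applied to that coefficient vector:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])        # matrix of u : R^3 -> R^2

f = np.array([4.0, -1.0])              # coefficients of a functional on R^2
pullback = A.T @ f                     # coefficients of f o u, a functional on R^3

x = np.array([2.0, 1.0, -1.0])         # any test vector in R^3
assert np.isclose(f @ (A @ x), pullback @ x)   # <^t u(f), x> = <f, u(x)>
```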


Transpose of a bilinear form

Every linear map to the dual space u : X → X^# defines a bilinear form B : X × X → F, with the relation B(x, y) = u(x)(y). By defining the transpose of this bilinear form as the bilinear form ^tB defined by the transpose ^tu : X^{##} → X^#, i.e. ^tB(y, x) = ^tu(Ψ(y))(x), we find that B(x, y) = ^tB(y, x). Here, Ψ is the natural homomorphism X → X^{##} into the double dual.
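In coordinates (an added illustration, assuming X = R^n with the standard basis), a bilinear form with Gram matrix G acts as B(x, y) = x^T G y, and the form obtained by swapping its arguments has Gram matrix G^T:

```python
import numpy as np

G = np.array([[1.0, 2.0],
              [0.0, 3.0]])                 # Gram matrix of a bilinear form B

def B(x, y):
    return x @ G @ y                       # B(x, y) = x^T G y

def B_t(x, y):
    return x @ G.T @ y                     # the transposed form, Gram matrix G^T

x = np.array([1.0, -2.0])
y = np.array([3.0, 0.5])
assert np.isclose(B(x, y), B_t(y, x))      # B(x, y) = ^t B(y, x)
```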


Adjoint

If the vector spaces X and Y have respectively nondegenerate bilinear forms B_X and B_Y, a concept known as the adjoint, which is closely related to the transpose, may be defined:

If u : X → Y is a linear map between vector spaces X and Y, we define g : Y → X as the adjoint of u if g satisfies
:B_X\big(x, g(y)\big) = B_Y\big(u(x), y\big) \quad \text{for all } x \in X \text{ and } y \in Y.
These bilinear forms define an isomorphism between X and X^#, and between Y and Y^#, resulting in an isomorphism between the transpose and the adjoint of u. The matrix of the adjoint of a map is the transposed matrix only if the bases are orthonormal with respect to their bilinear forms. In this context, many authors, however, use the term transpose to refer to the adjoint as defined here.

The adjoint allows us to consider whether g : Y → X is equal to u^{-1} : Y → X. In particular, this allows the orthogonal group over a vector space X with a quadratic form to be defined without reference to matrices (nor the components thereof) as the set of all linear maps X → X for which the adjoint equals the inverse.

Over a complex vector space, one often works with sesquilinear forms (conjugate-linear in one argument) instead of bilinear forms. The Hermitian adjoint of a map between such spaces is defined similarly, and the matrix of the Hermitian adjoint is given by the conjugate transpose matrix if the bases are orthonormal.
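A coordinate computation (an added sketch; the bilinear forms B_X and B_Y are represented by arbitrary invertible Gram matrices GX and GY) shows how the adjoint differs from the plain transpose when the bases are not orthonormal: writing u as a matrix A, the adjoint has matrix GX^{-1} A^T GY, which reduces to A^T exactly when GX and GY are identity matrices.

```python
import numpy as np

A  = np.array([[1.0, 2.0],
               [0.0, 1.0],
               [3.0, -1.0]])          # u : X = R^2 -> Y = R^3
GX = np.array([[2.0, 1.0],
               [1.0, 2.0]])           # nondegenerate form B_X(x, x') = x^T GX x'
GY = np.diag([1.0, 2.0, 3.0])         # nondegenerate form B_Y(y, y') = y^T GY y'

G_adj = np.linalg.inv(GX) @ A.T @ GY  # matrix of the adjoint g : Y -> X

x = np.array([1.0, -1.0])
y = np.array([0.5, 2.0, -1.0])
lhs = x @ GX @ (G_adj @ y)            # B_X(x, g(y))
rhs = (A @ x) @ GY @ y                # B_Y(u(x), y)
assert np.isclose(lhs, rhs)
```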


See also

* Adjugate matrix, the transpose of the cofactor matrix
* Conjugate transpose
* Moore–Penrose pseudoinverse
* Projection (linear algebra)




External links

* Gilbert Strang (Spring 2010), ''Linear Algebra'', from MIT OpenCourseWare