Vectorization (math)

In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a vector. Specifically, the vectorization of an ''m''×''n'' matrix ''A'', denoted vec(''A''), is the ''mn''×1 column vector obtained by stacking the columns of the matrix ''A'' on top of one another: \operatorname{vec}(A) = [a_{1,1}, \ldots, a_{m,1}, a_{1,2}, \ldots, a_{m,2}, \ldots, a_{1,n}, \ldots, a_{m,n}]^\mathrm{T} Here, a_{i,j} represents the element in the ''i''-th row and ''j''-th column of ''A'', and the superscript ^\mathrm{T} denotes the transpose. Vectorization expresses, through coordinates, the isomorphism \mathbf{R}^{m \times n} := \mathbf{R}^m \otimes \mathbf{R}^n \cong \mathbf{R}^{mn} between these (i.e., of matrices and vectors) as vector spaces.

For example, for the 2×2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, the vectorization is \operatorname{vec}(A) = \begin{pmatrix} a \\ c \\ b \\ d \end{pmatrix}.

The connection between the vectorization of ''A'' and the vectorization of its transpose is given by the commutation matrix.
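As a concrete sketch (assuming NumPy, with vec implemented as a column-major reshape), the 2×2 example above can be reproduced directly:

```python
import numpy as np

def vec(A):
    """Column-stacking vectorization: reshape in column-major (Fortran) order."""
    return np.asarray(A).reshape(-1, order="F")

# The 2x2 example from the text, with (a, b, c, d) = (1, 2, 3, 4)
A = np.array([[1, 2],
              [3, 4]])
print(vec(A))  # [1 3 2 4], i.e. (a, c, b, d)
```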


Compatibility with Kronecker products

The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular, \operatorname{vec}(ABC) = (C^\mathrm{T} \otimes A) \operatorname{vec}(B) for matrices ''A'', ''B'', and ''C'' of dimensions ''k''×''l'', ''l''×''m'', and ''m''×''n''. (The corresponding identity for row-major vectorization is \operatorname{vec}_\mathrm{r}(ABC) = (A \otimes C^\mathrm{T}) \operatorname{vec}_\mathrm{r}(B).)

For example, if \operatorname{ad}_A(X) = AX - XA (the adjoint endomorphism of the Lie algebra of all ''n''×''n'' matrices with complex entries), then \operatorname{vec}(\operatorname{ad}_A(X)) = (I_n \otimes A - A^\mathrm{T} \otimes I_n) \operatorname{vec}(X), where I_n is the ''n''×''n'' identity matrix.

There are two other useful formulations: \begin{align} \operatorname{vec}(ABC) &= (I_n \otimes AB) \operatorname{vec}(C) = (C^\mathrm{T} B^\mathrm{T} \otimes I_k) \operatorname{vec}(A) \\ \operatorname{vec}(AB) &= (I_m \otimes A) \operatorname{vec}(B) = (B^\mathrm{T} \otimes I_k) \operatorname{vec}(A) \end{align}

If ''B'' is a diagonal matrix (i.e., B = \operatorname{diag}(b_1, \dots, b_n)), the vectorization can be written using the column-wise Kronecker product \ast (see Khatri–Rao product) and the main diagonal b = \begin{pmatrix} b_1, \dots, b_n \end{pmatrix}^\mathrm{T} of ''B'': \operatorname{vec}(ABC) = (C^\mathrm{T} \ast A) b

More generally, it has been shown that vectorization is a self-adjunction in the monoidal closed structure of any category of matrices.
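The central identity \operatorname{vec}(ABC) = (C^\mathrm{T} \otimes A) \operatorname{vec}(B) can be checked numerically; a sketch assuming NumPy, column-major vec, and arbitrarily chosen dimensions ''k''×''l'', ''l''×''m'', ''m''×''n'':

```python
import numpy as np

vec = lambda M: M.reshape(-1, order="F")  # column-stacking vectorization

rng = np.random.default_rng(0)
k, l, m, n = 2, 3, 4, 5
A = rng.standard_normal((k, l))
B = rng.standard_normal((l, m))
C = rng.standard_normal((m, n))

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)  # (C^T ⊗ A) vec(B)
print(np.allclose(lhs, rhs))    # True
```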


Compatibility with Hadamard products

Vectorization is an algebra homomorphism from the space of ''n''×''n'' matrices with the Hadamard (entrywise) product to \mathbf{C}^{n^2} with its Hadamard product: \operatorname{vec}(A \circ B) = \operatorname{vec}(A) \circ \operatorname{vec}(B).
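A quick numerical check of this homomorphism property (again assuming a column-major vec sketch; NumPy's `*` is the entrywise product):

```python
import numpy as np

vec = lambda M: M.reshape(-1, order="F")  # column-stacking vectorization

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# vec(A ∘ B) == vec(A) ∘ vec(B)
print(np.allclose(vec(A * B), vec(A) * vec(B)))  # True
```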


Compatibility with inner products

Vectorization is a unitary transformation from the space of ''n''×''n'' matrices with the Frobenius (or Hilbert–Schmidt) inner product to \mathbf{C}^{n^2}: \operatorname{tr}(A^\dagger B) = \operatorname{vec}(A)^\dagger \operatorname{vec}(B), where the superscript ^\dagger denotes the conjugate transpose.
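This preservation of the inner product can also be checked numerically; a sketch with random complex matrices, assuming the same column-major vec:

```python
import numpy as np

vec = lambda M: M.reshape(-1, order="F")  # column-stacking vectorization

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# Frobenius inner product tr(A^dagger B) equals vec(A)^dagger vec(B)
lhs = np.trace(A.conj().T @ B)
rhs = vec(A).conj() @ vec(B)
print(np.isclose(lhs, rhs))  # True
```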


Vectorization as a linear sum

The matrix vectorization operation can be written in terms of a linear sum. Let X be an ''m''×''n'' matrix that we want to vectorize, and let e_i be the ''i''-th canonical basis vector for the ''n''-dimensional space, that is \mathbf{e}_i = \left[0, \dots, 0, 1, 0, \dots, 0\right]^\mathrm{T}. Let B_i be an (''mn'')×''m'' block matrix defined as follows: \mathbf{B}_i = \begin{pmatrix} \mathbf{0} \\ \vdots \\ \mathbf{0} \\ \mathbf{I}_m \\ \mathbf{0} \\ \vdots \\ \mathbf{0} \end{pmatrix} = \mathbf{e}_i \otimes \mathbf{I}_m B_i consists of ''n'' block matrices of size ''m''×''m'', stacked column-wise, and all these matrices are all-zero except for the ''i''-th one, which is the ''m''×''m'' identity matrix I_m. Then the vectorized version of X can be expressed as follows: \operatorname{vec}(\mathbf{X}) = \sum_{i=1}^n \mathbf{B}_i \mathbf{X} \mathbf{e}_i Multiplication of X by e_i extracts the ''i''-th column, while multiplication by B_i puts it into the desired position in the final vector. Alternatively, the linear sum can be expressed using the Kronecker product: \operatorname{vec}(\mathbf{X}) = \sum_{i=1}^n \mathbf{e}_i \otimes \mathbf{X} \mathbf{e}_i
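The linear-sum construction translates almost literally into NumPy (a sketch; `np.kron` plays the role of ⊗):

```python
import numpy as np

m, n = 3, 2
X = np.arange(1, m * n + 1).reshape(m, n, order="F")  # columns [1,2,3] and [4,5,6]

total = np.zeros(m * n)
for i in range(n):
    e_i = np.zeros(n)
    e_i[i] = 1.0
    B_i = np.kron(e_i.reshape(n, 1), np.eye(m))  # B_i = e_i ⊗ I_m
    total += B_i @ (X @ e_i)                     # extract column i, place it

print(total)                                            # [1. 2. 3. 4. 5. 6.]
print(np.array_equal(total, X.reshape(-1, order="F")))  # True
```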


Half-vectorization

For a symmetric matrix ''A'', the vector vec(''A'') contains more information than is strictly necessary, since the matrix is completely determined by the symmetry together with the lower triangular portion, that is, the entries on and below the main diagonal. For such matrices, the half-vectorization is sometimes more useful than the vectorization. The half-vectorization, vech(''A''), of a symmetric ''n''×''n'' matrix ''A'' is the ''n''(''n''+1)/2 × 1 column vector obtained by vectorizing only the lower triangular part of ''A'': \operatorname{vech}(A) = [A_{1,1}, \ldots, A_{n,1}, A_{2,2}, \ldots, A_{n,2}, \ldots, A_{n-1,n-1}, A_{n,n-1}, A_{n,n}]^\mathrm{T}.

For example, for the 2×2 matrix A = \begin{pmatrix} a & b \\ b & d \end{pmatrix}, the half-vectorization is \operatorname{vech}(A) = \begin{pmatrix} a \\ b \\ d \end{pmatrix}.

There exist unique matrices transforming the half-vectorization of a matrix to its vectorization and vice versa, called, respectively, the duplication matrix and the elimination matrix.
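A minimal NumPy sketch of vech, stacking the on-and-below-diagonal part of each column in turn:

```python
import numpy as np

def vech(A):
    """Half-vectorization: stack the lower-triangular part column by column."""
    A = np.asarray(A)
    return np.concatenate([A[j:, j] for j in range(A.shape[0])])

# The symmetric 2x2 example from the text, with (a, b, d) = (1, 2, 4)
A = np.array([[1, 2],
              [2, 4]])
print(vech(A))  # [1 2 4], i.e. (a, b, d)
```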


Programming languages

Programming languages that implement matrices may have easy means for vectorization. In MATLAB/GNU Octave a matrix A can be vectorized by A(:). GNU Octave also allows vectorization and half-vectorization with vec(A) and vech(A), respectively. Julia has the vec(A) function as well. In Python, NumPy arrays implement the flatten method (note that flatten defaults to row-major order, so the column-stacking vectorization corresponds to flatten('F')), while in R the desired effect can be achieved via the c() or as.vector() functions or, more efficiently, by removing the dimensions attribute of a matrix A with dim(A) <- NULL. In R, the function vec() of the package 'ks' allows vectorization, and the function vech(), implemented in both the packages 'ks' and 'sn', allows half-vectorization.
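Since vectorization stacks columns while NumPy flattens in row-major order by default, the order argument matters; a small sketch of the difference:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
print(A.flatten())     # [1 2 3 4]  row-major (C order): not vec(A)
print(A.flatten("F"))  # [1 3 2 4]  column-major (Fortran order): matches vec(A)
```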


Applications

Vectorization is used in matrix calculus and its applications in establishing, e.g., moments of random vectors and matrices, asymptotics, as well as Jacobian and Hessian matrices. It is also used in local sensitivity and statistical diagnostics.



See also

* Duplication and elimination matrices
* Voigt notation
* Packed storage matrix
* Column-major order
* Matricization

