Diagonal matrix

In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}, while an example of a 3×3 diagonal matrix is \begin{bmatrix} 6 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix. A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values.


Definition

As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with ''n'' columns and ''n'' rows is diagonal if

:\forall i,j \in \{1, 2, \ldots, n\}, \quad i \ne j \implies d_{i,j} = 0.

However, the main diagonal entries are unrestricted.

The term ''diagonal matrix'' may sometimes refer to a rectangular diagonal matrix, which is an ''m''-by-''n'' matrix with all the entries not of the form d_{i,i} being zero. For example:

:\begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & -3 \\ 0 & 0 & 0 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 4 & 0 & 0 & 0 \\ 0 & 0 & -3 & 0 & 0 \end{bmatrix}

More often, however, ''diagonal matrix'' refers to square matrices, which can be specified explicitly as a square diagonal matrix. A square diagonal matrix is a symmetric matrix, so this can also be called a symmetric diagonal matrix. The following matrix is a square diagonal matrix:

:\begin{bmatrix} 1 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & -2 \end{bmatrix}

If the entries are real numbers or complex numbers, then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".


Vector-to-matrix diag operator

A diagonal matrix D can be constructed from a vector \mathbf{a} = \begin{bmatrix} a_1 & \dotsm & a_n \end{bmatrix}^\textsf{T} using the \operatorname{diag} operator:

:D = \operatorname{diag}(a_1, \dots, a_n)

This may be written more compactly as D = \operatorname{diag}(\mathbf{a}). The same operator is also used to represent block diagonal matrices as A = \operatorname{diag}(A_1, \dots, A_n), where each argument A_i is a matrix. The \operatorname{diag} operator may be written as:

:\operatorname{diag}(\mathbf{a}) = \left(\mathbf{a} \mathbf{1}^\textsf{T}\right) \circ I

where \circ represents the Hadamard product and \mathbf{1} is a constant vector with elements 1.
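For illustration, a minimal NumPy sketch of the vector-to-matrix \operatorname{diag} operator and of the Hadamard-product identity above (numpy.diag builds the matrix; scipy.linalg.block_diag plays the analogous role for block diagonal matrices):

```python
import numpy as np

a = np.array([1.0, 4.0, -2.0])

# diag operator: vector -> square diagonal matrix
D = np.diag(a)

# Equivalent construction via the identity diag(a) = (a 1^T) ∘ I
ones = np.ones_like(a)
D_hadamard = np.outer(a, ones) * np.eye(len(a))

assert np.allclose(D, D_hadamard)
```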


Matrix-to-vector diag operator

The inverse matrix-to-vector \operatorname{diag} operator is sometimes denoted by the identically named \operatorname{diag}(D) = \begin{bmatrix} a_1 & \dotsm & a_n \end{bmatrix}^\textsf{T}, where the argument is now a matrix and the result is a vector of its diagonal entries. The following property holds:

:\operatorname{diag}(AB)_i = \sum_j \left(A \circ B^\textsf{T}\right)_{ij}

That is, the ''i''-th diagonal entry of AB is the ''i''-th row sum of the Hadamard product A \circ B^\textsf{T}.
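A quick numerical check of this property (a minimal NumPy sketch; np.diag applied to a matrix extracts its diagonal):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# matrix -> vector: the diagonal of the product AB
d = np.diag(A @ B)

# diag(AB)_i = sum_j (A ∘ B^T)_{ij}: row sums of the Hadamard product
d_hadamard = (A * B.T).sum(axis=1)

assert np.allclose(d, d_hadamard)
```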


Scalar matrix

A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple ''λ'' of the identity matrix. Its effect on a vector is scalar multiplication by ''λ''. For example, a 3×3 scalar matrix has the form:

:\begin{bmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{bmatrix} \equiv \lambda \boldsymbol{I}_3

The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix D = \operatorname{diag}(a_1, \dots, a_n) has a_i \neq a_j, then given a matrix M with m_{ij} \neq 0, the (i, j) terms of the products are (DM)_{ij} = a_i m_{ij} and (MD)_{ij} = m_{ij} a_j, and a_i m_{ij} \neq m_{ij} a_j (since one can divide by m_{ij}), so D and M do not commute unless the off-diagonal terms of M are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices.

For an abstract vector space ''V'' (rather than the concrete vector space K^n), the analog of scalar matrices are scalar transformations. This is true more generally for a module ''M'' over a ring ''R'', with the endomorphism algebra End(''M'') (algebra of linear operators on ''M'') replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map R \to \operatorname{End}(M) (from a scalar ''λ'' to its corresponding scalar transformation, multiplication by ''λ'') exhibiting End(''M'') as an ''R''-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, the invertible scalar transforms are the center of the general linear group GL(''V''). The former is more generally true of free modules M \cong R^n, for which the endomorphism algebra is isomorphic to a matrix algebra.
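The commutation claims above are easy to check numerically; a minimal NumPy sketch (the random matrix M almost surely has nonzero off-diagonal entries):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))  # generic: off-diagonal entries nonzero

# A scalar matrix λI commutes with every M
S = 5.0 * np.eye(3)
assert np.allclose(S @ M, M @ S)

# A diagonal matrix with distinct diagonal entries does not commute with M...
D = np.diag([1.0, 2.0, 3.0])
assert not np.allclose(D @ M, M @ D)

# ...but it does commute with every other diagonal matrix (its centralizer)
E = np.diag([4.0, 5.0, 6.0])
assert np.allclose(D @ E, E @ D)
```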


Vector operations

Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix D = \operatorname{diag}(a_1, \dots, a_n) and a vector \mathbf{x} = \begin{bmatrix} x_1 & \dotsm & x_n \end{bmatrix}^\textsf{T}, the product is:

:D\mathbf{x} = \operatorname{diag}(a_1, \dots, a_n) \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_1 \\ & \ddots \\ & & a_n \end{bmatrix} \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_1 x_1 \\ \vdots \\ a_n x_n \end{bmatrix}.

This can be expressed more compactly by using a vector instead of a diagonal matrix, \mathbf{a} = \begin{bmatrix} a_1 & \dotsm & a_n \end{bmatrix}^\textsf{T}, and taking the Hadamard product of the vectors (entrywise product), denoted \mathbf{a} \circ \mathbf{x}:

:D\mathbf{x} = \mathbf{a} \circ \mathbf{x} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \circ \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_1 x_1 \\ \vdots \\ a_n x_n \end{bmatrix}.

This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF, since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.
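A minimal NumPy sketch contrasting the dense diagonal-matrix product with the equivalent entrywise product:

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])
x = np.array([3.0, 4.0, 8.0])

# Full diagonal-matrix product: stores n*n entries, mostly zeros
dense = np.diag(a) @ x

# Equivalent Hadamard (entrywise) product: O(n) storage and time
hadamard = a * x

assert np.allclose(dense, hadamard)  # both give [6., -4., 4.]
```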


Matrix operations

The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write \operatorname{diag}(a_1, \dots, a_n) for a diagonal matrix whose diagonal entries starting in the upper left corner are a_1, \dots, a_n. Then, for addition, we have

:\operatorname{diag}(a_1, \dots, a_n) + \operatorname{diag}(b_1, \dots, b_n) = \operatorname{diag}(a_1 + b_1, \dots, a_n + b_n)

and for matrix multiplication,

:\operatorname{diag}(a_1, \dots, a_n) \operatorname{diag}(b_1, \dots, b_n) = \operatorname{diag}(a_1 b_1, \dots, a_n b_n).

The diagonal matrix \operatorname{diag}(a_1, \dots, a_n) is invertible if and only if the entries a_1, \dots, a_n are all nonzero. In this case, we have

:\operatorname{diag}(a_1, \dots, a_n)^{-1} = \operatorname{diag}(a_1^{-1}, \dots, a_n^{-1}).

In particular, the diagonal matrices form a subring of the ring of all ''n''-by-''n'' matrices. Multiplying an ''n''-by-''n'' matrix ''A'' from the ''left'' with \operatorname{diag}(a_1, \dots, a_n) amounts to multiplying the ''i''-th ''row'' of ''A'' by a_i for all ''i''; multiplying the matrix ''A'' from the ''right'' with \operatorname{diag}(a_1, \dots, a_n) amounts to multiplying the ''i''-th ''column'' of ''A'' by a_i for all ''i''.
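These identities, and the row/column scaling rules, can be checked with a short NumPy sketch:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Addition and multiplication act entrywise on the diagonals
assert np.allclose(np.diag(a) + np.diag(b), np.diag(a + b))
assert np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b))

# The inverse (all entries nonzero) is the diagonal of reciprocals
assert np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a))

# Left multiplication scales rows; right multiplication scales columns
A = np.arange(9.0).reshape(3, 3)
assert np.allclose(np.diag(a) @ A, a[:, None] * A)  # row i scaled by a_i
assert np.allclose(A @ np.diag(a), A * a[None, :])  # column i scaled by a_i
```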


Operator matrix in eigenbasis

As explained when determining the coefficients of an operator matrix, there is a special basis, \mathbf{e}_1, \dots, \mathbf{e}_n, for which the matrix A takes the diagonal form. Hence, in the defining equation A \mathbf{e}_j = \sum_i a_{ij} \mathbf{e}_i, all coefficients a_{ij} with i \ne j are zero, leaving only one term per sum. The surviving diagonal elements, a_{ii}, are known as eigenvalues and designated with \lambda_i in the equation, which reduces to A \mathbf{e}_i = \lambda_i \mathbf{e}_i. The resulting equation is known as the eigenvalue equation and is used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors. In other words, the eigenvalues of \operatorname{diag}(\lambda_1, \dots, \lambda_n) are \lambda_1, \dots, \lambda_n with associated eigenvectors \mathbf{e}_1, \dots, \mathbf{e}_n.
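A small NumPy check of the eigenvalue equation for a diagonal matrix:

```python
import numpy as np

# The eigenvalues of diag(λ_1, ..., λ_n) are its diagonal entries,
# with the standard basis vectors as eigenvectors.
lam = np.array([2.0, -1.0, 3.0])
D = np.diag(lam)

eigenvalues, eigenvectors = np.linalg.eig(D)
assert np.allclose(np.sort(eigenvalues), np.sort(lam))

# Each standard basis vector e_i satisfies D e_i = λ_i e_i
for i, e in enumerate(np.eye(3)):
    assert np.allclose(D @ e, lam[i] * e)
```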


Properties

* The determinant of \operatorname{diag}(a_1, \dots, a_n) is the product a_1 \cdots a_n (checked numerically in the sketch below).
* The adjugate of a diagonal matrix is again diagonal.
* Where all matrices are square,
** A matrix is diagonal if and only if it is triangular and normal.
** A matrix is diagonal if and only if it is both upper- and lower-triangular.
** A diagonal matrix is symmetric.
* The identity matrix ''I''_''n'' and zero matrix are diagonal.
* A 1×1 matrix is always diagonal.
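A brief NumPy verification of the determinant and triangularity/symmetry properties:

```python
import numpy as np

a = np.array([2.0, -3.0, 4.0])
D = np.diag(a)

# det(diag(a_1, ..., a_n)) = a_1 * ... * a_n
assert np.isclose(np.linalg.det(D), np.prod(a))  # both are -24.0

# A diagonal matrix is symmetric, and both upper- and lower-triangular
assert np.allclose(D, D.T)
assert np.allclose(D, np.triu(D)) and np.allclose(D, np.tril(D))
```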


Applications

Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operation and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix. In fact, a given ''n''-by-''n'' matrix A is similar to a diagonal matrix (meaning that there is a matrix P such that P^{-1}AP is diagonal) if and only if it has ''n'' linearly independent eigenvectors. Such matrices are said to be diagonalizable. Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if AA^* = A^*A then there exists a unitary matrix U such that UAU^* is diagonal). Furthermore, the singular value decomposition implies that for any matrix A, there exist unitary matrices U and V such that U^*AV is diagonal with nonnegative entries.
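A NumPy sketch of both statements, using a symmetric matrix as the normal example (for real symmetric A, the eigenvector matrix P returned by eigh is orthogonal, so P^{-1}AP is diagonal):

```python
import numpy as np

rng = np.random.default_rng(2)

# Diagonalization: a symmetric (hence normal) matrix is unitarily
# similar to the diagonal matrix of its eigenvalues.
A = rng.standard_normal((4, 4))
A = A + A.T  # symmetrize
eigenvalues, P = np.linalg.eigh(A)
assert np.allclose(np.linalg.inv(P) @ A @ P, np.diag(eigenvalues))

# SVD: U^* B V is diagonal with nonnegative entries for any matrix B
B = rng.standard_normal((3, 5))
U, s, Vh = np.linalg.svd(B)
S = U.conj().T @ B @ Vh.conj().T  # rectangular "diagonal" matrix
assert np.allclose(S[:, :3], np.diag(s)) and np.allclose(S[:, 3:], 0.0)
```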


Operator theory

In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates (in the language of operators, an integral transform) which changes the basis to an eigenbasis of eigenfunctions: this makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation. Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function: the values of the function at each point correspond to the diagonal entries of a matrix.
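As a finite-dimensional analogue, the discrete Fourier transform diagonalizes every translation invariant (circulant) operator; a minimal NumPy sketch with the periodic second-difference Laplacian:

```python
import numpy as np

# Periodic second-difference (discrete Laplacian) stencil [1, -2, 1]:
# a circulant, i.e. translation invariant, operator.
n = 8
L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
L[0, -1] = L[-1, 0] = 1  # periodic boundary makes L circulant

F = np.fft.fft(np.eye(n))      # DFT matrix
Finv = np.fft.ifft(np.eye(n))  # inverse DFT matrix

D = F @ L @ Finv               # L expressed in the Fourier basis
assert np.allclose(D - np.diag(np.diag(D)), 0.0)  # diagonal

# The diagonal entries (eigenvalues) are the DFT of the first column
assert np.allclose(np.diag(D), np.fft.fft(L[:, 0]))
```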


See also

* Anti-diagonal matrix
* Banded matrix
* Bidiagonal matrix
* Diagonally dominant matrix
* Diagonalizable matrix
* Jordan normal form
* Multiplication operator
* Tridiagonal matrix
* Toeplitz matrix
* Toral Lie algebra
* Circulant matrix

