In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, A is symmetric if and only if A = A^\textsf{T}. Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if a_{ij} denotes the entry in the ith row and jth column, then A is symmetric if and only if a_{ji} = a_{ij} for all indices i and j. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative.

In linear algebra, a real symmetric matrix represents a self-adjoint operator represented in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them.


Example

The following 3 \times 3 matrix is symmetric: A = \begin{bmatrix} 1 & 7 & 3 \\ 7 & 4 & 5 \\ 3 & 5 & 1 \end{bmatrix}
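
A quick way to check symmetry numerically is to compare a matrix with its transpose; the following is a minimal sketch (not part of the original article), assuming NumPy is available.

```python
import numpy as np

# A minimal sketch: verifying that the example matrix equals its transpose.
A = np.array([[1., 7., 3.],
              [7., 4., 5.],
              [3., 5., 1.]])

print(np.array_equal(A, A.T))  # True: A is symmetric
```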


Properties


Basic properties

* The sum and difference of two symmetric matrices are again symmetric.
* This is not always true for the product: given symmetric matrices A and B, the product AB is symmetric if and only if A and B commute, i.e., if AB = BA.
* For any integer n, A^n is symmetric if A is symmetric.
* If A^{-1} exists, it is symmetric if and only if A is symmetric.
* The rank of a symmetric matrix A is equal to the number of non-zero eigenvalues of A.
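
These properties can be checked numerically; the following is a minimal sketch (not part of the original article) using NumPy, with two arbitrary example matrices.

```python
import numpy as np

# Two small symmetric matrices used only for illustration.
A = np.array([[1., 7., 3.],
              [7., 4., 5.],
              [3., 5., 1.]])
B = np.array([[2., 0., 1.],
              [0., 3., 0.],
              [1., 0., 2.]])

def is_symmetric(M, tol=1e-12):
    return np.allclose(M, M.T, atol=tol)

print(is_symmetric(A + B), is_symmetric(A - B))    # sum and difference: True True
print(is_symmetric(A @ B))                         # product: False here, since A and B do not commute
print(is_symmetric(np.linalg.matrix_power(A, 3)))  # integer power: True
print(is_symmetric(np.linalg.inv(A)))              # inverse (A is invertible here): True

# Rank equals the number of non-zero eigenvalues.
eigenvalues = np.linalg.eigvalsh(A)
print(np.linalg.matrix_rank(A) == np.count_nonzero(np.abs(eigenvalues) > 1e-10))  # True
```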


Decomposition into symmetric and skew-symmetric

Any square matrix can uniquely be written as the sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let \mbox{Mat}_n denote the space of n \times n matrices. If \mbox{Sym}_n denotes the space of n \times n symmetric matrices and \mbox{Skew}_n the space of n \times n skew-symmetric matrices, then \mbox{Mat}_n = \mbox{Sym}_n + \mbox{Skew}_n and \mbox{Sym}_n \cap \mbox{Skew}_n = \{0\}, i.e. \mbox{Mat}_n = \mbox{Sym}_n \oplus \mbox{Skew}_n, where \oplus denotes the direct sum. Let X \in \mbox{Mat}_n; then X = \frac{1}{2}\left(X + X^\textsf{T}\right) + \frac{1}{2}\left(X - X^\textsf{T}\right). Notice that \frac{1}{2}\left(X + X^\textsf{T}\right) \in \mbox{Sym}_n and \frac{1}{2}\left(X - X^\textsf{T}\right) \in \mbox{Skew}_n. This is true for every square matrix X with entries from any field whose characteristic is different from 2.

A symmetric n \times n matrix is determined by \tfrac{1}{2}n(n+1) scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by \tfrac{1}{2}n(n-1) scalars (the number of entries above the main diagonal).
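
The two parts are easy to compute directly from the formula above; the following is a minimal sketch (not part of the original article) using NumPy.

```python
import numpy as np

# Split an arbitrary square matrix into symmetric and skew-symmetric parts.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))

sym_part = 0.5 * (X + X.T)    # (X + X^T)/2 is symmetric
skew_part = 0.5 * (X - X.T)   # (X - X^T)/2 is skew-symmetric

assert np.allclose(sym_part, sym_part.T)
assert np.allclose(skew_part, -skew_part.T)
assert np.allclose(X, sym_part + skew_part)  # the decomposition recovers X
```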


Matrix congruent to a symmetric matrix

Any matrix congruent to a symmetric matrix is again symmetric: if X is a symmetric matrix, then so is A X A^\textsf{T} for any matrix A.
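
This follows from (A X A^\textsf{T})^\textsf{T} = A X^\textsf{T} A^\textsf{T} = A X A^\textsf{T}; a minimal numerical sketch (not part of the original article) is given below.

```python
import numpy as np

# Congruence preserves symmetry even for a rectangular A (here 2x3).
rng = np.random.default_rng(1)
X = rng.standard_normal((3, 3))
X = 0.5 * (X + X.T)              # make X symmetric
A = rng.standard_normal((2, 3))  # an arbitrary matrix

Y = A @ X @ A.T
assert np.allclose(Y, Y.T)       # A X A^T is symmetric
```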


Symmetry implies normality

A (real-valued) symmetric matrix is necessarily a normal matrix.


Real symmetric matrices

Denote by \langle \cdot,\cdot \rangle the standard inner product on \mathbb{R}^n. The real n \times n matrix A is symmetric if and only if \langle Ax, y \rangle = \langle x, Ay \rangle \quad \forall x, y \in \mathbb{R}^n. Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces.

The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: for every real symmetric matrix A there exists a real orthogonal matrix Q such that D = Q^\textsf{T} A Q is a diagonal matrix. Every real symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix.

If A and B are n \times n real symmetric matrices that commute, then they can be simultaneously diagonalized: there exists a basis of \mathbb{R}^n such that every element of the basis is an eigenvector for both A and B.

Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix D above, and therefore D is uniquely determined by A up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices.
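
In floating-point practice this orthogonal diagonalization is computed with a dedicated symmetric eigensolver; the following is a minimal sketch (not part of the original article) using NumPy's eigh.

```python
import numpy as np

# Orthogonal diagonalization of a real symmetric matrix (spectral theorem).
A = np.array([[1., 7., 3.],
              [7., 4., 5.],
              [3., 5., 1.]])

eigenvalues, Q = np.linalg.eigh(A)   # real eigenvalues, orthogonal eigenvector matrix Q
D = np.diag(eigenvalues)

assert np.allclose(Q.T @ Q, np.eye(3))  # Q is orthogonal
assert np.allclose(Q.T @ A @ Q, D)      # Q^T A Q = D is diagonal
assert np.allclose(A, Q @ D @ Q.T)      # equivalently, A = Q D Q^T
```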


Complex symmetric matrices

A complex symmetric matrix can be 'diagonalized' using a unitary matrix: thus if A is a complex symmetric matrix, there is a unitary matrix U such that U A U^\textsf{T} is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians.

In fact, the matrix B = A^\dagger A is Hermitian and positive semi-definite, so there is a unitary matrix V such that V^\dagger B V is diagonal with non-negative real entries. Thus C = V^\textsf{T} A V is complex symmetric with C^\dagger C real. Writing C = X + iY with X and Y real symmetric matrices, C^\dagger C = X^2 + Y^2 + i(XY - YX). Thus XY = YX. Since X and Y commute, there is a real orthogonal matrix W such that both W X W^\textsf{T} and W Y W^\textsf{T} are diagonal. Setting U = W V^\textsf{T} (a unitary matrix), the matrix U A U^\textsf{T} is complex diagonal. Pre-multiplying U by a suitable diagonal unitary matrix (which preserves the unitarity of U), the diagonal entries of U A U^\textsf{T} can be made real and non-negative as desired. To construct this matrix, we express the diagonal matrix as U A U^\textsf{T} = \operatorname{diag}(r_1 e^{i\theta_1}, r_2 e^{i\theta_2}, \dots, r_n e^{i\theta_n}). The matrix we seek is simply D = \operatorname{diag}(e^{-i\theta_1/2}, e^{-i\theta_2/2}, \dots, e^{-i\theta_n/2}). Clearly D U A U^\textsf{T} D = \operatorname{diag}(r_1, r_2, \dots, r_n) as desired, so we make the modification U' = DU. Since the squares of the diagonal entries r_i are the eigenvalues of A^\dagger A, they coincide with the singular values of A. (Note that, regarding the eigen-decomposition of a complex symmetric matrix A, the Jordan normal form of A may not be diagonal, so A may not be diagonalizable by any similarity transformation.)
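
Under the additional assumption that the singular values of A are distinct, the factorization can be obtained from an ordinary singular value decomposition by absorbing a diagonal phase correction into the left singular vectors; the following is a minimal sketch (not part of the original article) of that approach.

```python
import numpy as np

# Autonne–Takagi factorization A = U diag(s) U^T of a complex symmetric A,
# assuming the singular values of A are distinct (true almost surely here).
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = 0.5 * (M + M.T)                       # a random complex symmetric matrix

W, s, Vh = np.linalg.svd(A)               # A = W @ diag(s) @ Vh
# Since A = A^T, conj(V) = W P for a diagonal unitary matrix P of phases.
P = W.conj().T @ Vh.T                     # approximately diagonal and unitary
phases = np.angle(np.diag(P))
U = W @ np.diag(np.exp(0.5j * phases))    # absorb half of each phase into W

assert np.allclose(U @ U.conj().T, np.eye(4))  # U is unitary
assert np.allclose(U @ np.diag(s) @ U.T, A)    # A = U diag(s) U^T with s >= 0
```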


Decomposition

Using the Jordan normal form, one can prove that every square real matrix can be written as a product of two real symmetric matrices, and every square complex matrix can be written as a product of two complex symmetric matrices.

Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely.

Cholesky decomposition states that every real positive-definite symmetric matrix A is a product of a lower-triangular matrix L and its transpose, A = LL^\textsf{T}. If the matrix is symmetric indefinite, it may still be decomposed as PAP^\textsf{T} = LDL^\textsf{T}, where P is a permutation matrix (arising from the need to pivot), L is a lower unit triangular matrix, and D is a direct sum of symmetric 1 \times 1 and 2 \times 2 blocks; this is called the Bunch–Kaufman decomposition.

A general (complex) symmetric matrix may be defective and thus not be diagonalizable. If A is diagonalizable it may be decomposed as A = Q \Lambda Q^\textsf{T}, where Q is an orthogonal matrix, Q Q^\textsf{T} = I, and \Lambda is a diagonal matrix of the eigenvalues of A. In the special case that A is real symmetric, then Q and \Lambda are also real. To see orthogonality, suppose \mathbf{x} and \mathbf{y} are eigenvectors corresponding to distinct eigenvalues \lambda_1, \lambda_2. Then \lambda_1 \langle \mathbf{x}, \mathbf{y} \rangle = \langle A \mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A \mathbf{y} \rangle = \lambda_2 \langle \mathbf{x}, \mathbf{y} \rangle. Since \lambda_1 and \lambda_2 are distinct, we have \langle \mathbf{x}, \mathbf{y} \rangle = 0.
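
Standard library routines cover several of these factorizations; the following is a minimal sketch (not part of the original article) using SciPy's polar and NumPy's cholesky, which are assumed to be available.

```python
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(3)

# Polar decomposition of an (almost surely non-singular) real matrix:
# M = U P with U orthogonal and P symmetric positive definite.
M = rng.standard_normal((4, 4))
U, P = polar(M)                  # right polar decomposition, M = U @ P
assert np.allclose(U @ U.T, np.eye(4))
assert np.allclose(P, P.T)
assert np.allclose(M, U @ P)

# Cholesky decomposition of a symmetric positive-definite matrix: A = L L^T.
A = M @ M.T + np.eye(4)          # symmetric positive definite by construction
L = np.linalg.cholesky(A)        # lower-triangular factor
assert np.allclose(A, L @ L.T)
```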


Hessian

Symmetric n \times n matrices of real functions appear as the Hessians of twice differentiable functions of n real variables (the continuity of the second derivative is not needed, despite common belief to the contrary).

Every quadratic form q on \mathbb{R}^n can be uniquely written in the form q(\mathbf{x}) = \mathbf{x}^\textsf{T} A \mathbf{x} with a symmetric n \times n matrix A. Because of the above spectral theorem, one can then say that every quadratic form, up to the choice of an orthonormal basis of \mathbb{R}^n, "looks like" q\left(x_1, \ldots, x_n\right) = \sum_{i=1}^n \lambda_i x_i^2 with real numbers \lambda_i. This considerably simplifies the study of quadratic forms, as well as the study of the level sets \left\{ \mathbf{x} : q(\mathbf{x}) = c \right\}, which are generalizations of conic sections.

This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem.
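
The diagonalized form of a quadratic form can be checked directly; the following is a minimal sketch (not part of the original article) using NumPy.

```python
import numpy as np

# In the orthonormal eigenvector basis, q(x) = x^T A x becomes a weighted
# sum of squares whose weights are the eigenvalues of A.
A = np.array([[2., 1.],
              [1., 3.]])
eigenvalues, Q = np.linalg.eigh(A)

x = np.array([0.7, -1.2])   # an arbitrary test point
y = Q.T @ x                 # coordinates of x in the eigenvector basis

q_direct = x @ A @ x
q_diagonal = np.sum(eigenvalues * y**2)
assert np.isclose(q_direct, q_diagonal)
```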


Symmetrizable matrix

An n \times n matrix A is said to be symmetrizable if there exists an invertible diagonal matrix D and a symmetric matrix S such that A = DS. The transpose of a symmetrizable matrix is symmetrizable, since A^\textsf{T} = (DS)^\textsf{T} = SD = D^{-1}(DSD) and DSD is symmetric. A matrix A = (a_{ij}) is symmetrizable if and only if the following conditions are met:
# a_{ij} = 0 implies a_{ji} = 0 for all 1 \le i \le j \le n.
# a_{i_1 i_2} a_{i_2 i_3} \dots a_{i_k i_1} = a_{i_2 i_1} a_{i_3 i_2} \dots a_{i_1 i_k} for any finite sequence \left(i_1, i_2, \dots, i_k\right).
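
The following is a minimal sketch (not part of the original article) that builds a symmetrizable matrix from the definition and spot-checks the two conditions above.

```python
import numpy as np

# A = D S with D an invertible diagonal matrix and S symmetric.
D = np.diag([1., 2., -3., 0.5])
S = np.array([[4., 1., 2., 0.],
              [1., 5., 2., 3.],
              [2., 2., 6., 1.],
              [0., 3., 1., 7.]])
A = D @ S

# Condition 1: a_ij = 0 exactly where a_ji = 0 (the zero pattern is symmetric).
assert np.array_equal(A == 0, (A == 0).T)

# Condition 2 for the 3-cycle (1, 2, 3): a_12 a_23 a_31 = a_21 a_32 a_13.
assert np.isclose(A[0, 1] * A[1, 2] * A[2, 0],
                  A[1, 0] * A[2, 1] * A[0, 2])
```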


See also

Other types of symmetry or pattern in square matrices have special names; see for example:
* Skew-symmetric matrix (also called ''antisymmetric'' or ''antimetric'')
* Centrosymmetric matrix
* Circulant matrix
* Covariance matrix
* Coxeter matrix
* GCD matrix
* Hankel matrix
* Hilbert matrix
* Persymmetric matrix
* Sylvester's law of inertia
* Toeplitz matrix
* Transpositions matrix
See also symmetry in mathematics.




External links

* A brief introduction and proof of eigenvalue properties of the real symmetric matrix
* How to implement a Symmetric Matrix in C++