In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called ''lower triangular'' if all the entries ''above'' the main diagonal are zero. Similarly, a square matrix is called ''upper triangular'' if all the entries ''below'' the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix ''L'' and an upper triangular matrix ''U'' if and only if all its leading principal minors are non-zero.
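As a minimal numerical sketch of this claim (the matrix and its hand-computed factors are illustrative, not taken from any source), one can verify a small LU factorization directly with NumPy; library routines such as scipy.linalg.lu compute such factorizations in general, usually with row pivoting.

import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])     # leading principal minors: 4 and 4*3 - 6*3 = -6, both non-zero
L = np.array([[1.0, 0.0],
              [1.5, 1.0]])     # lower triangular factor
U = np.array([[4.0, 3.0],
              [0.0, -1.5]])    # upper triangular factor
print(np.allclose(A, L @ U))   # True: A = L U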


Description

A matrix of the form
:L = \begin{pmatrix} \ell_{1,1} & & & & 0 \\ \ell_{2,1} & \ell_{2,2} & & & \\ \ell_{3,1} & \ell_{3,2} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n} \end{pmatrix}
is called a lower triangular matrix or left triangular matrix, and analogously a matrix of the form
:U = \begin{pmatrix} u_{1,1} & u_{1,2} & u_{1,3} & \ldots & u_{1,n} \\ & u_{2,2} & u_{2,3} & \ldots & u_{2,n} \\ & & \ddots & \ddots & \vdots \\ & & & \ddots & u_{n-1,n} \\ 0 & & & & u_{n,n} \end{pmatrix}
is called an upper triangular matrix or right triangular matrix. A lower or left triangular matrix is commonly denoted with the variable ''L'', and an upper or right triangular matrix is commonly denoted with the variable ''U'' or ''R''.

A matrix that is both upper and lower triangular is diagonal. Matrices that are similar to triangular matrices are called triangularisable.

A non-square (or sometimes any) matrix with zeros above (below) the diagonal is called a lower (upper) trapezoidal matrix. The non-zero entries form the shape of a trapezoid.


Examples

This matrix
:\begin{pmatrix} 1 & 4 & 1 \\ 0 & 6 & 4 \\ 0 & 0 & 1 \end{pmatrix}
is upper triangular and this matrix
:\begin{pmatrix} 1 & 0 & 0 \\ 2 & 96 & 0 \\ 4 & 9 & 69 \end{pmatrix}
is lower triangular.
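A quick way to check such examples programmatically (a small sketch, assuming NumPy is available) is to compare a matrix with its own upper or lower triangular part.

import numpy as np

U = np.array([[1, 4, 1],
              [0, 6, 4],
              [0, 0, 1]])
L = np.array([[1, 0, 0],
              [2, 96, 0],
              [4, 9, 69]])
print(np.array_equal(U, np.triu(U)))   # True: U is upper triangular
print(np.array_equal(L, np.tril(L)))   # True: L is lower triangular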


Forward and back substitution

A matrix equation in the form L\mathbf{x} = \mathbf{b} or U\mathbf{x} = \mathbf{b} is very easy to solve by an iterative process called forward substitution for lower triangular matrices and analogously back substitution for upper triangular matrices. The process is so called because for lower triangular matrices, one first computes x_1, then substitutes that ''forward'' into the ''next'' equation to solve for x_2, and repeats through to x_n. In an upper triangular matrix, one works ''backwards'', first computing x_n, then substituting that ''back'' into the ''previous'' equation to solve for x_{n-1}, and repeating through x_1. Notice that this does not require inverting the matrix.
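In practice such systems are usually solved with a library routine; the following sketch uses SciPy's solve_triangular (the matrix and right-hand side are illustrative).

import numpy as np
from scipy.linalg import solve_triangular

L = np.array([[2.0,  0.0, 0.0],
              [3.0,  1.0, 0.0],
              [1.0, -1.0, 4.0]])
b = np.array([2.0, 5.0, 6.0])
x = solve_triangular(L, b, lower=True)   # forward substitution
print(x)                                 # [1.  2.  1.75]
print(np.allclose(L @ x, b))             # True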


Forward substitution

The matrix equation ''L''x = b can be written as a system of linear equations
:\begin{matrix} \ell_{1,1} x_1 & & & & & & & = & b_1 \\ \ell_{2,1} x_1 & + & \ell_{2,2} x_2 & & & & & = & b_2 \\ \vdots & & \vdots & & \ddots & & & & \vdots \\ \ell_{m,1} x_1 & + & \ell_{m,2} x_2 & + & \dotsb & + & \ell_{m,m} x_m & = & b_m \end{matrix}
Observe that the first equation (\ell_{1,1} x_1 = b_1) only involves x_1, and thus one can solve for x_1 directly. The second equation only involves x_1 and x_2, and thus can be solved once one substitutes in the already solved value for x_1. Continuing in this way, the k-th equation only involves x_1,\dots,x_k, and one can solve for x_k using the previously solved values for x_1,\dots,x_{k-1}. The resulting formulas are:
:\begin{align} x_1 &= \frac{b_1}{\ell_{1,1}}, \\ x_2 &= \frac{b_2 - \ell_{2,1} x_1}{\ell_{2,2}}, \\ &\ \ \vdots \\ x_m &= \frac{b_m - \sum_{i=1}^{m-1} \ell_{m,i} x_i}{\ell_{m,m}}. \end{align}
A matrix equation with an upper triangular matrix ''U'' can be solved in an analogous way, only working backwards.
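The formulas above translate directly into code; the following is a teaching sketch of forward substitution (no pivoting, only a bare zero-diagonal check), not a production routine.

import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for a lower triangular matrix L with non-zero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for k in range(n):
        if L[k, k] == 0:
            raise ZeroDivisionError("triangular matrix is singular")
        # subtract the contributions of the already computed x_1, ..., x_{k-1}
        x[k] = (b[k] - L[k, :k] @ x[:k]) / L[k, k]
    return x

L = np.array([[2.0,  0.0, 0.0],
              [3.0,  1.0, 0.0],
              [1.0, -1.0, 4.0]])
b = np.array([2.0, 5.0, 6.0])
print(forward_substitution(L, b))   # [1.  2.  1.75]

Back substitution for an upper triangular ''U'' works the same way but iterates from the last row upward.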


Applications

Forward substitution is used in financial bootstrapping to construct a yield curve.


Properties

The transpose of an upper triangular matrix is a lower triangular matrix and vice versa. A matrix which is both symmetric and triangular is diagonal. In a similar vein, a matrix which is both normal (meaning ''A''*''A'' = ''AA''*, where ''A''* is the conjugate transpose) and triangular is also diagonal. This can be seen by looking at the diagonal entries of ''A''*''A'' and ''AA''*.

The determinant and permanent of a triangular matrix equal the product of the diagonal entries, as can be checked by direct computation. In fact more is true: the eigenvalues of a triangular matrix are exactly its diagonal entries. Moreover, each eigenvalue occurs exactly ''k'' times on the diagonal, where ''k'' is its algebraic multiplicity, that is, its multiplicity as a root of the characteristic polynomial p_A(x)=\det(xI-A) of ''A''. In other words, the characteristic polynomial of a triangular ''n''×''n'' matrix ''A'' is exactly
:p_A(x) = (x-a_{11})(x-a_{22})\cdots(x-a_{nn}),
that is, the unique degree ''n'' polynomial whose roots are the diagonal entries of ''A'' (with multiplicities). To see this, observe that xI-A is also triangular and hence its determinant \det(xI-A) is the product of its diagonal entries (x-a_{11})(x-a_{22})\cdots(x-a_{nn}).
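These properties are easy to check numerically; a small sketch (the matrix is illustrative):

import numpy as np

A = np.array([[2.0,  7.0, 1.0],
              [0.0, -3.0, 5.0],
              [0.0,  0.0, 4.0]])
# the determinant equals the product of the diagonal entries
print(np.isclose(np.linalg.det(A), np.prod(np.diag(A))))                 # True: 2 * (-3) * 4 = -24
# the eigenvalues are exactly the diagonal entries
print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.diag(A))))   # True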


Special forms


Unitriangular matrix

If the entries on the main diagonal of a (upper or lower) triangular matrix are all 1, the matrix is called (upper or lower) unitriangular. Other names used for these matrices are unit (upper or lower) triangular, or very rarely normed (upper or lower) triangular. However, a ''unit'' triangular matrix is not the same as the ''unit matrix'', and a ''normed'' triangular matrix has nothing to do with the notion of matrix norm. All finite unitriangular matrices are unipotent.


Strictly triangular matrix

If all of the entries on the main diagonal of a (upper or lower) triangular matrix are 0, the matrix is called strictly (upper or lower) triangular. All finite strictly triangular matrices are nilpotent of index at most ''n'' as a consequence of the Cayley–Hamilton theorem.
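A small numerical check of this fact for ''n'' = 3 (the entries are illustrative):

import numpy as np

N = np.array([[0, 5, 2],
              [0, 0, 7],
              [0, 0, 0]])             # strictly upper triangular
print(np.linalg.matrix_power(N, 3))   # the zero matrix: N is nilpotent of index at most 3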


Atomic triangular matrix

An atomic (upper or lower) triangular matrix is a special form of unitriangular matrix, where all of the off-diagonal elements are zero, except for the entries in a single column. Such a matrix is also called a Frobenius matrix, a Gauss matrix, or a Gauss transformation matrix.
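The following sketch shows how such a Gauss transformation matrix acts in one step of Gaussian elimination (the matrix A and the choice of pivot column are illustrative).

import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [4.0, 3.0, 3.0],
              [8.0, 7.0, 9.0]])
G = np.eye(3)
G[1:, 0] = -A[1:, 0] / A[0, 0]   # the multipliers occupy a single column below the diagonal
print(G)                         # atomic lower triangular: identity except in the first column
print(G @ A)                     # the entries below the first pivot are now zero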


Triangularisability

A matrix that is similar to a triangular matrix is referred to as triangularizable. Abstractly, this is equivalent to stabilizing a flag: upper triangular matrices are precisely those that preserve the standard flag, which is given by the standard ordered basis (e_1,\ldots,e_n) and the resulting flag 0 < \left\langle e_1\right\rangle < \left\langle e_1,e_2\right\rangle < \cdots < \left\langle e_1,\ldots,e_n \right\rangle = K^n. All flags are conjugate (as the general linear group acts transitively on bases), so any matrix that stabilises a flag is similar to one that stabilizes the standard flag.

Any complex square matrix is triangularizable. In fact, a matrix ''A'' over a field containing all of the eigenvalues of ''A'' (for example, any matrix over an algebraically closed field) is similar to a triangular matrix. This can be proven by using induction on the fact that ''A'' has an eigenvector, by taking the quotient space by the eigenvector and inducting to show that ''A'' stabilizes a flag, and is thus triangularizable with respect to a basis for that flag.

A more precise statement is given by the Jordan normal form theorem, which states that in this situation, ''A'' is similar to an upper triangular matrix of a very particular form. The simpler triangularization result is often sufficient however, and in any case is used in proving the Jordan normal form theorem.

In the case of complex matrices, it is possible to say more about triangularization, namely, that any square matrix ''A'' has a Schur decomposition. This means that ''A'' is unitarily equivalent (i.e. similar, using a unitary matrix as change of basis) to an upper triangular matrix; this follows by taking a Hermitian basis for the flag.
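A brief numerical sketch of a Schur decomposition, using SciPy's schur routine (the matrix is illustrative; a rotation matrix is chosen because its eigenvalues are non-real, so the triangular factor must be complex).

import numpy as np
from scipy.linalg import schur

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])                 # eigenvalues are +i and -i
T, Z = schur(A, output='complex')           # complex Schur form: T upper triangular, Z unitary
print(np.allclose(A, Z @ T @ Z.conj().T))   # True: A = Z T Z*
print(np.allclose(T, np.triu(T)))           # True: T is upper triangular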


Simultaneous triangularisability

A set of matrices A_1, \ldots, A_k are said to be simultaneously triangularisable if there is a basis under which they are all upper triangular; equivalently, if they are upper triangularizable by a single similarity matrix ''P''. Such a set of matrices is more easily understood by considering the algebra of matrices it generates, namely all polynomials in the A_i, denoted K[A_1,\ldots,A_k]. Simultaneous triangularizability means that this algebra is conjugate into the Lie subalgebra of upper triangular matrices, and is equivalent to this algebra being a Lie subalgebra of a Borel subalgebra.

The basic result is that (over an algebraically closed field), commuting matrices A,B or more generally A_1,\ldots,A_k are simultaneously triangularizable. This can be proven by first showing that commuting matrices have a common eigenvector, and then inducting on dimension as before. This was proven by Frobenius, starting in 1878 for a commuting pair, as discussed at commuting matrices. As for a single matrix, over the complex numbers these can be triangularized by unitary matrices.

The fact that commuting matrices have a common eigenvector can be interpreted as a result of Hilbert's Nullstellensatz: commuting matrices form a commutative algebra K[A_1,\ldots,A_k] over K[x_1,\ldots,x_k] which can be interpreted as a variety in ''k''-dimensional affine space, and the existence of a (common) eigenvalue (and hence a common eigenvector) corresponds to this variety having a point (being non-empty), which is the content of the (weak) Nullstellensatz. In algebraic terms, these operators correspond to an algebra representation of the polynomial algebra in ''k'' variables.

This is generalized by Lie's theorem, which shows that any representation of a solvable Lie algebra is simultaneously upper triangularizable, the case of commuting matrices being the abelian Lie algebra case, abelian being a fortiori solvable.

More generally and precisely, a set of matrices A_1,\ldots,A_k is simultaneously triangularisable if and only if the matrix p(A_1,\ldots,A_k)[A_i,A_j] is nilpotent for all polynomials ''p'' in ''k'' ''non''-commuting variables, where [A_i,A_j] is the commutator; for commuting A_i the commutator vanishes so this holds. This was proven by Drazin, Dungey, and Gruenberg in 1951; a brief proof is given by Prasolov in 1994. One direction is clear: if the matrices are simultaneously triangularisable, then [A_i, A_j] is ''strictly'' upper triangularizable (hence nilpotent), which is preserved by multiplication by any A_k or combination thereof – it will still have 0s on the diagonal in the triangularizing basis.
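The easy direction can be checked numerically: for two upper triangular matrices the commutator is strictly upper triangular and hence nilpotent (the matrices below are illustrative).

import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])
B = np.array([[4.0, 1.0, 5.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 6.0]])
C = A @ B - B @ A                    # the commutator [A, B]
print(C)                             # zero diagonal: strictly upper triangular
print(np.linalg.matrix_power(C, 3))  # the zero matrix: [A, B] is nilpotent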


Algebras of triangular matrices

Upper triangularity is preserved by many operations:
* The sum of two upper triangular matrices is upper triangular.
* The product of two upper triangular matrices is upper triangular.
* The inverse of an upper triangular matrix, if it exists, is upper triangular.
* The product of an upper triangular matrix and a scalar is upper triangular.
(A numerical check of these facts is sketched at the end of this section.) Together these facts mean that the upper triangular matrices form a subalgebra of the associative algebra of square matrices for a given size. Additionally, this also shows that the upper triangular matrices can be viewed as a Lie subalgebra of the Lie algebra of square matrices of a fixed size, where the Lie bracket [''a'', ''b''] is given by the commutator ab − ba. The Lie algebra of all upper triangular matrices is a solvable Lie algebra. It is often referred to as a Borel subalgebra of the Lie algebra of all square matrices.

All these results hold if ''upper triangular'' is replaced by ''lower triangular'' throughout; in particular the lower triangular matrices also form a Lie algebra. However, operations mixing upper and lower triangular matrices do not in general produce triangular matrices. For instance, the sum of an upper and a lower triangular matrix can be any matrix; the product of a lower triangular with an upper triangular matrix is not necessarily triangular either.

The set of unitriangular matrices forms a Lie group.

The set of strictly upper (or lower) triangular matrices forms a nilpotent Lie algebra, denoted \mathfrak{n}. This algebra is the derived Lie algebra of \mathfrak{b}, the Lie algebra of all upper triangular matrices; in symbols, \mathfrak{n} = [\mathfrak{b},\mathfrak{b}]. In addition, \mathfrak{n} is the Lie algebra of the Lie group of unitriangular matrices.

In fact, by Engel's theorem, any finite-dimensional nilpotent Lie algebra is conjugate to a subalgebra of the strictly upper triangular matrices, that is to say, a finite-dimensional nilpotent Lie algebra is simultaneously strictly upper triangularizable.

Algebras of upper triangular matrices have a natural generalization in functional analysis which yields nest algebras on Hilbert spaces.
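A small numerical check of the closure properties listed at the start of this section (the matrices are illustrative):

import numpy as np

def is_upper(M):
    return np.allclose(M, np.triu(M))

A = np.array([[2.0, 1.0, 4.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 6.0]])
B = np.array([[1.0, 9.0, 2.0],
              [0.0, 4.0, 5.0],
              [0.0, 0.0, 8.0]])
print(is_upper(A + B))              # True: the sum is upper triangular
print(is_upper(A @ B))              # True: the product is upper triangular
print(is_upper(np.linalg.inv(A)))   # True: the inverse is upper triangular
print(is_upper(3.0 * A))            # True: a scalar multiple is upper triangular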


Borel subgroups and Borel subalgebras

The set of invertible triangular matrices of a given kind (upper or lower) forms a group, indeed a Lie group, which is a subgroup of the general linear group of all invertible matrices. A triangular matrix is invertible precisely when its diagonal entries are invertible (non-zero).

Over the real numbers, this group is disconnected, having 2^n components accordingly as each diagonal entry is positive or negative. The identity component is the group of invertible triangular matrices with positive entries on the diagonal, and the group of all invertible triangular matrices is a semidirect product of this group and the group of diagonal matrices with \pm 1 on the diagonal, corresponding to the components.

The Lie algebra of the Lie group of invertible upper triangular matrices is the set of all upper triangular matrices, not necessarily invertible, and is a solvable Lie algebra. These are, respectively, the standard Borel subgroup ''B'' of the Lie group GL''n'' and the standard Borel subalgebra \mathfrak{b} of the Lie algebra gl''n''.

The upper triangular matrices are precisely those that stabilize the standard flag. The invertible ones among them form a subgroup of the general linear group, whose conjugate subgroups are those defined as the stabilizer of some (other) complete flag. These subgroups are Borel subgroups. The group of invertible lower triangular matrices is such a subgroup, since it is the stabilizer of the standard flag associated to the standard basis in reverse order.

The stabilizer of a partial flag obtained by forgetting some parts of the standard flag can be described as a set of block upper triangular matrices (but its elements are ''not'' all triangular matrices). The conjugates of such a group are the subgroups defined as the stabilizer of some partial flag. These subgroups are called parabolic subgroups.


Examples

The group of 2×2 upper unitriangular matrices is isomorphic to the additive group of the field of scalars; in the case of complex numbers it corresponds to a group formed of parabolic Möbius transformations; the 3×3 upper unitriangular matrices form the Heisenberg group.
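A sketch of the 2×2 statement: multiplying upper unitriangular matrices adds their off-diagonal entries, matching the additive group of the scalars (the values a and b are illustrative).

import numpy as np

def u(t):
    """The 2x2 upper unitriangular matrix with off-diagonal entry t."""
    return np.array([[1.0, t],
                     [0.0, 1.0]])

a, b = 2.5, -1.0
print(np.allclose(u(a) @ u(b), u(a + b)))   # True: u(a) u(b) = u(a + b)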


See also

* Gaussian elimination
* QR decomposition
* Cholesky decomposition
* Hessenberg matrix
* Tridiagonal matrix
* Invariant subspace

