In mathematics, a block matrix or a partitioned matrix is a matrix
that is ''interpreted'' as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned.
This notion can be made more precise for an <math>n</math> by <math>m</math> matrix <math>M</math> by partitioning <math>n</math> into a collection <math>\text{rowgroups}</math>, and then partitioning <math>m</math> into a collection <math>\text{colgroups}</math>. The original matrix is then considered as the "total" of these groups, in the sense that the <math>(i, j)</math> entry of the original matrix corresponds in a 1-to-1 way with some <math>(s, t)</math> offset
entry of some <math>(x, y)</math>, where <math>x \in \text{rowgroups}</math> and <math>y \in \text{colgroups}</math>.
Block matrix algebra arises in general from biproducts in categories of matrices.
Example
The matrix
:<math>P = \begin{bmatrix} 1 & 1 & 2 & 2 \\ 1 & 1 & 2 & 2 \\ 3 & 3 & 4 & 4 \\ 3 & 3 & 4 & 4 \end{bmatrix}</math>
can be partitioned into four 2×2 blocks
:<math>P_{11} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}, \quad P_{12} = \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix}, \quad P_{21} = \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix}, \quad P_{22} = \begin{bmatrix} 4 & 4 \\ 4 & 4 \end{bmatrix}.</math>
The partitioned matrix can then be written as
:<math>P = \begin{bmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{bmatrix}.</math>
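This partitioning can be carried out directly in code. The following sketch (NumPy, with array values mirroring the example above) splits the matrix into its four 2×2 blocks and reassembles them:
<syntaxhighlight lang="python">
import numpy as np

P = np.array([[1, 1, 2, 2],
              [1, 1, 2, 2],
              [3, 3, 4, 4],
              [3, 3, 4, 4]])

# Partition P into four 2x2 blocks by slicing.
P11, P12 = P[:2, :2], P[:2, 2:]
P21, P22 = P[2:, :2], P[2:, 2:]

# np.block reassembles the blocks into the original matrix.
assert np.array_equal(np.block([[P11, P12], [P21, P22]]), P)
</syntaxhighlight>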
Block matrix multiplication
It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however, and requires "conformable partitions" between two matrices <math>A</math> and <math>B</math> such that all submatrix products that will be used are defined. Given an <math>(m \times p)</math> matrix <math>A</math> with <math>q</math> row partitions and <math>s</math> column partitions
:<math>A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1s} \\ A_{21} & A_{22} & \cdots & A_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ A_{q1} & A_{q2} & \cdots & A_{qs} \end{bmatrix}</math>
and a <math>(p \times n)</math> matrix <math>B</math> with <math>s</math> row partitions and <math>r</math> column partitions
:<math>B = \begin{bmatrix} B_{11} & B_{12} & \cdots & B_{1r} \\ B_{21} & B_{22} & \cdots & B_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ B_{s1} & B_{s2} & \cdots & B_{sr} \end{bmatrix}</math>
that are compatible with the partitions of <math>A</math>, the matrix product
:<math>C = AB</math>
can be performed blockwise, yielding <math>C</math> as an <math>(m \times n)</math> matrix with <math>q</math> row partitions and <math>r</math> column partitions. The blocks of the resulting matrix <math>C</math> are calculated by multiplying:
:<math>C_{\alpha\beta} = \sum_{\gamma=1}^{s} A_{\alpha\gamma} B_{\gamma\beta}.</math>
Or, using the Einstein notation that implicitly sums over repeated indices:
:<math>C_{\alpha\beta} = A_{\alpha\gamma} B_{\gamma\beta}.</math>
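As a concrete check, the following NumPy sketch (the 2×2 grid of blocks and the cut positions are assumptions chosen for brevity) computes the product blockwise and compares it with the ordinary product:
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))  # rows cut at 2, columns cut at 3
B = rng.standard_normal((6, 5))  # rows cut at 3: conformable with A's columns

A11, A12, A21, A22 = A[:2, :3], A[:2, 3:], A[2:, :3], A[2:, 3:]
B11, B12, B21, B22 = B[:3, :2], B[:3, 2:], B[3:, :2], B[3:, 2:]

# Each block of C is a sum of submatrix products: C_ab = sum_g A_ag B_gb.
C_blockwise = np.block([
    [A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
    [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22],
])
assert np.allclose(C_blockwise, A @ B)
</syntaxhighlight>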
Block matrix inversion
If a matrix is partitioned into four blocks, it can be
inverted blockwise as follows:
:<math>P^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix}
A^{-1} + A^{-1}B\left(D - CA^{-1}B\right)^{-1}CA^{-1} & -A^{-1}B\left(D - CA^{-1}B\right)^{-1} \\
-\left(D - CA^{-1}B\right)^{-1}CA^{-1} & \left(D - CA^{-1}B\right)^{-1}
\end{bmatrix}</math>
where ''A'' and ''D'' are square blocks of arbitrary size, and ''B'' and ''C'' are conformable with them for partitioning. Furthermore, ''A'' and the Schur complement of ''A'' in <math>P</math>: <math>P/A = D - CA^{-1}B</math> must be invertible.
Equivalently, by permuting the blocks:
:<math>P^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix}
\left(A - BD^{-1}C\right)^{-1} & -\left(A - BD^{-1}C\right)^{-1}BD^{-1} \\
-D^{-1}C\left(A - BD^{-1}C\right)^{-1} & D^{-1} + D^{-1}C\left(A - BD^{-1}C\right)^{-1}BD^{-1}
\end{bmatrix}</math>
Here, ''D'' and the Schur complement of ''D'' in <math>P</math>: <math>P/D = A - BD^{-1}C</math> must be invertible.
If ''A'' and ''D'' are both invertible, then:
:<math>\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix}
\left(A - BD^{-1}C\right)^{-1} & 0 \\
0 & \left(D - CA^{-1}B\right)^{-1}
\end{bmatrix} \begin{bmatrix}
I & -BD^{-1} \\
-CA^{-1} & I
\end{bmatrix}.</math>
By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is.
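The first formula can be verified numerically; a minimal NumPy sketch, assuming the randomly drawn <math>A</math> and the Schur complement are invertible (true for generic random blocks):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 3))
D = rng.standard_normal((2, 2))

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B              # Schur complement of A in P
Sinv = np.linalg.inv(S)

P_inv_blockwise = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                  Sinv],
])
P = np.block([[A, B], [C, D]])
assert np.allclose(P_inv_blockwise, np.linalg.inv(P))
</syntaxhighlight>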
Block matrix determinant
The formula for the determinant of a <math>2 \times 2</math>-matrix above continues to hold, under appropriate further assumptions, for a matrix composed of four submatrices <math>A, B, C, D</math>. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is
:<math>\det\begin{bmatrix} A & 0 \\ C & D \end{bmatrix} = \det(A) \det(D) = \det\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}.</math>
If <math>A</math> is invertible (and similarly if <math>D</math> is invertible), one has
:<math>\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A) \det\left(D - C A^{-1} B\right).</math>
If <math>D</math> is a <math>1 \times 1</math>-matrix, this simplifies to <math>\det(A) \left(D - C A^{-1} B\right)</math>.
If the blocks are square matrices of the ''same'' size, further formulas hold. For example, if <math>C</math> and <math>D</math> commute
(i.e., <math>CD = DC</math>), then
:<math>\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - BC).</math>
This formula has been generalized to matrices composed of more than <math>2 \times 2</math> blocks, again under appropriate commutativity conditions among the individual blocks.
For <math>A = D</math> and <math>B = C</math>, the following formula holds (even if <math>A</math> and <math>B</math> do not commute)
:<math>\det\begin{bmatrix} A & B \\ B & A \end{bmatrix} = \det(A - B) \det(A + B).</math>
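The commuting-blocks formula can be checked numerically. In the sketch below (NumPy; taking <math>D</math> to be a polynomial in <math>C</math> is just one convenient way to manufacture commuting blocks):
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = C @ C + 2 * C + np.eye(n)   # a polynomial in C, so CD = DC

M = np.block([[A, B], [C, D]])
# det [A B; C D] = det(AD - BC) when C and D commute.
assert np.isclose(np.linalg.det(M), np.linalg.det(A @ D - B @ C))
</syntaxhighlight>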
Block diagonal matrices
A block diagonal matrix is a block matrix that is a square matrix such that the main-diagonal blocks are square matrices and all off-diagonal blocks are zero matrices. That is, a block diagonal matrix <math>\mathbf{A}</math> has the form
:<math>\mathbf{A} = \begin{bmatrix} \mathbf{A}_1 & 0 & \cdots & 0 \\ 0 & \mathbf{A}_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{A}_n \end{bmatrix}</math>
where <math>\mathbf{A}_k</math> is a square matrix for all <math>k = 1, \ldots, n</math>. In other words, matrix <math>\mathbf{A}</math> is the direct sum of <math>\mathbf{A}_1, \ldots, \mathbf{A}_n</math>. It can also be indicated as <math>\mathbf{A}_1 \oplus \mathbf{A}_2 \oplus \cdots \oplus \mathbf{A}_n</math> or <math>\operatorname{diag}(\mathbf{A}_1, \mathbf{A}_2, \ldots, \mathbf{A}_n)</math> (the latter being the same formalism used for a diagonal matrix). Any square matrix can trivially be considered a block diagonal matrix with only one block.
For the determinant and trace, the following properties hold:
:<math>\det \mathbf{A} = \det \mathbf{A}_1 \times \cdots \times \det \mathbf{A}_n</math>
and
:<math>\operatorname{tr} \mathbf{A} = \operatorname{tr} \mathbf{A}_1 + \cdots + \operatorname{tr} \mathbf{A}_n.</math>
A block diagonal matrix is invertible if and only if each of its main-diagonal blocks is invertible, and in this case its inverse is another block diagonal matrix given by
:<math>\mathbf{A}^{-1} = \begin{bmatrix} \mathbf{A}_1^{-1} & 0 & \cdots & 0 \\ 0 & \mathbf{A}_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \mathbf{A}_n^{-1} \end{bmatrix}.</math>
The eigenvalues and eigenvectors of <math>\mathbf{A}</math> are simply those of the <math>\mathbf{A}_k</math>s combined.
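These properties are easy to confirm in code, for example with SciPy's scipy.linalg.block_diag helper:
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import block_diag

A1 = np.array([[1.0, 2.0], [3.0, 4.0]])
A2 = np.array([[5.0]])
A = block_diag(A1, A2)          # the direct sum of A1 and A2

# Determinant multiplies and trace adds across the diagonal blocks.
assert np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A2))
assert np.isclose(np.trace(A), np.trace(A1) + np.trace(A2))

# The inverse is the block diagonal matrix of the blockwise inverses.
assert np.allclose(np.linalg.inv(A),
                   block_diag(np.linalg.inv(A1), np.linalg.inv(A2)))
</syntaxhighlight>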
Block tridiagonal matrices
A block tridiagonal matrix is another special block matrix, which, like the block diagonal matrix, is a square matrix having square matrices (blocks) in the lower diagonal, main diagonal and upper diagonal, with all other blocks being zero matrices. It is essentially a tridiagonal matrix but has submatrices in place of scalars. A block tridiagonal matrix <math>\mathbf{A}</math> has the form
:<math>\mathbf{A} = \begin{bmatrix}
B_1 & C_1 & & & & \\
A_2 & B_2 & C_2 & & & \\
 & \ddots & \ddots & \ddots & & \\
 & & A_k & B_k & C_k & \\
 & & & \ddots & \ddots & \ddots \\
 & & & & A_n & B_n
\end{bmatrix}</math>
where <math>A_k</math>, <math>B_k</math> and <math>C_k</math> are square sub-matrices of the lower, main and upper diagonal respectively.
Block tridiagonal matrices are often encountered in numerical solutions of engineering problems (e.g., computational fluid dynamics). Optimized numerical methods for LU factorization are available, and hence efficient solution algorithms exist for equation systems with a block tridiagonal matrix as coefficient matrix. The Thomas algorithm, used for the efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices (see also Block LU decomposition).
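A minimal sketch of the block Thomas recurrence follows (the function name and the convention that A_blocks[0] is unused are assumptions of this sketch; every pivot block is assumed invertible, and no pivoting is performed):
<syntaxhighlight lang="python">
import numpy as np

def solve_block_tridiagonal(A_blocks, B_blocks, C_blocks, d):
    """Solve M x = d for a block tridiagonal M with lower blocks
    A_blocks[1..n-1] (A_blocks[0] unused), diagonal blocks
    B_blocks[0..n-1] and upper blocks C_blocks[0..n-2]; d is a
    list of right-hand-side vectors, one per block row."""
    n = len(B_blocks)
    # Forward elimination: zero out the sub-diagonal blocks.
    Bp, dp = [B_blocks[0]], [d[0]]
    for k in range(1, n):
        W = A_blocks[k] @ np.linalg.inv(Bp[k - 1])
        Bp.append(B_blocks[k] - W @ C_blocks[k - 1])
        dp.append(d[k] - W @ dp[k - 1])
    # Back substitution on the resulting block upper bidiagonal system.
    x = [None] * n
    x[n - 1] = np.linalg.solve(Bp[n - 1], dp[n - 1])
    for k in range(n - 2, -1, -1):
        x[k] = np.linalg.solve(Bp[k], dp[k] - C_blocks[k] @ x[k + 1])
    return x
</syntaxhighlight>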
Block Toeplitz matrices
A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix, as a Toeplitz matrix has elements repeated down the diagonal. A block Toeplitz matrix <math>\mathbf{A}</math> has the form
:<math>\mathbf{A} = \begin{bmatrix}
A_{(1,1)} & A_{(1,2)} & \cdots & A_{(1,n-1)} & A_{(1,n)} \\
A_{(2,1)} & A_{(1,1)} & A_{(1,2)} & \cdots & A_{(1,n-1)} \\
\vdots & A_{(2,1)} & A_{(1,1)} & \ddots & \vdots \\
A_{(n-1,1)} & \cdots & \ddots & \ddots & A_{(1,2)} \\
A_{(n,1)} & A_{(n-1,1)} & \cdots & A_{(2,1)} & A_{(1,1)}
\end{bmatrix}.</math>
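Such a matrix is determined by one block per diagonal. A small NumPy sketch (the helper name is illustrative) assembles a block Toeplitz matrix from blocks indexed by the offset <math>i - j</math>:
<syntaxhighlight lang="python">
import numpy as np

def block_toeplitz(blocks_by_offset, n):
    """Assemble an n-by-n grid in which block (i, j) depends only on
    the offset i - j; all blocks must share a single shape."""
    return np.block([[blocks_by_offset[i - j] for j in range(n)]
                     for i in range(n)])

blocks = {k: np.full((2, 2), float(k)) for k in range(-2, 3)}
T = block_toeplitz(blocks, 3)   # 6x6; blocks repeat down each diagonal
</syntaxhighlight>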
Block transpose
A special form of matrix transpose can also be defined for block matrices, where individual blocks are reordered but not transposed. Let <math>A</math> be a <math>k \times l</math> block matrix with <math>m \times n</math> blocks <math>A_{ij}</math>; the block transpose of <math>A</math> is the <math>l \times k</math> block matrix <math>A^\mathcal{B}</math> with <math>m \times n</math> blocks <math>\left(A^\mathcal{B}\right)_{ij} = A_{ji}</math>.
As with the conventional trace operator, the block transpose is a linear mapping such that <math>(A + C)^\mathcal{B} = A^\mathcal{B} + C^\mathcal{B}</math>. However, in general the property <math>(AC)^\mathcal{B} = C^\mathcal{B} A^\mathcal{B}</math> does not hold unless the blocks of <math>A</math> and <math>C</math> commute.
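A short NumPy sketch of the operation (the helper name and the explicit block-shape argument are conventions of this sketch, not standard API):
<syntaxhighlight lang="python">
import numpy as np

def block_transpose(A, block_shape):
    """Swap the block grid of A without transposing the blocks:
    block (i, j) of the result is block (j, i) of A."""
    m, n = block_shape                        # each block is m x n
    k, l = A.shape[0] // m, A.shape[1] // n   # block-grid dimensions
    return np.block([[A[j*m:(j+1)*m, i*n:(i+1)*n] for j in range(k)]
                     for i in range(l)])

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 6))
C = rng.standard_normal((4, 6))
# The block transpose is linear: (A + C)^B = A^B + C^B.
assert np.allclose(block_transpose(A + C, (2, 2)),
                   block_transpose(A, (2, 2)) + block_transpose(C, (2, 2)))
</syntaxhighlight>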
Direct sum
For any arbitrary matrices A (of size ''m'' × ''n'') and B (of size ''p'' × ''q''), we have the direct sum of A and B, denoted by A ⊕ B and defined as
:<math>\mathbf{A} \oplus \mathbf{B} = \begin{bmatrix}
a_{11} & \cdots & a_{1n} & 0 & \cdots & 0 \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
a_{m1} & \cdots & a_{mn} & 0 & \cdots & 0 \\
0 & \cdots & 0 & b_{11} & \cdots & b_{1q} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & b_{p1} & \cdots & b_{pq}
\end{bmatrix}.</math>
For instance,
:<math>\begin{bmatrix} 1 & 3 & 2 \\ 2 & 3 & 1 \end{bmatrix} \oplus \begin{bmatrix} 1 & 6 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 2 & 0 & 0 \\ 2 & 3 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.</math>
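In code, the direct sum is exactly what SciPy's scipy.linalg.block_diag computes; a short sketch reproducing the instance above:
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1, 3, 2], [2, 3, 1]])
B = np.array([[1, 6], [0, 1]])
# The direct sum places A and B on the diagonal, with zero blocks elsewhere.
print(block_diag(A, B))
</syntaxhighlight>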
This operation generalizes naturally to arbitrarily dimensioned arrays (provided that A and B have the same number of dimensions).
Note that any element in the direct sum of two vector spaces of matrices could be represented as a direct sum of two matrices.
Application
In linear algebra terms, the use of a block matrix corresponds to having a linear mapping thought of in terms of corresponding 'bunches' of basis vectors. That again matches the idea of having distinguished direct sum decompositions of the domain and range. It is always particularly significant if a block is the zero matrix; that carries the information that a summand maps into a sub-sum.
Given the interpretation ''via'' linear mappings and direct sums, there is a special type of block matrix that occurs for square matrices (the case ''m'' = ''n''). For those we can assume an interpretation as an endomorphism of an ''n''-dimensional space ''V''; the block structure in which the bunching of rows and columns is the same is of importance because it corresponds to having a single direct sum decomposition on ''V'' (rather than two). In that case, for example, the diagonal blocks in the obvious sense are all square. This type of structure is required to describe the Jordan normal form.
This technique is used to cut down calculations of matrices, column-row expansions, and many computer science applications, including VLSI chip design. An example is the Strassen algorithm for fast matrix multiplication, as well as the Hamming(7,4) encoding for error detection and recovery in data transmissions.
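To illustrate, one level of Strassen's recursion replaces the 8 block products of naive blockwise multiplication with 7 (a minimal sketch; a full implementation would recurse on the block products and handle odd dimensions):
<syntaxhighlight lang="python">
import numpy as np

def strassen_step(A, B):
    """One level of Strassen's algorithm for square matrices of even
    dimension: 7 block multiplications instead of 8."""
    h = A.shape[0] // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])

A = np.arange(16.0).reshape(4, 4)
B = np.ones((4, 4))
assert np.allclose(strassen_step(A, B), A @ B)
</syntaxhighlight>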
The technique can also be used where the elements of the ''A'', ''B'', ''C'', and ''D'' matrices do not all require the same field for their elements. For example, the matrix ''A'' can be over the field of complex numbers, while the matrix ''D'' can be over the field of real numbers. This can lead to valid operations involving the matrices, while simplifying the operations within one of the matrices. For example, if ''D'' has only real elements, finding its inverse takes fewer calculations than if complex elements had to be considered. Since the reals form a subfield of the complex numbers, the operations between the matrices remain well defined.
See also
* Kronecker product (matrix direct product resulting in a block matrix)