In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods (for example, for systems of differential equations) depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.
Regular splittings
We seek to solve the matrix equation

:Ax = k,

where A is a given ''n'' × ''n'' non-singular matrix, and k is a given column vector with ''n'' components. We split the matrix A into

:A = B − C,
where B and C are ''n'' × ''n'' matrices. If, for an arbitrary ''n'' × ''n'' matrix M, M has nonnegative entries, we write M ≥ 0. If M has only positive entries, we write M > 0. Similarly, if the matrix M₁ − M₂ has nonnegative entries, we write M₁ ≥ M₂.

Definition: A = B − C is a regular splitting of A if B⁻¹ ≥ 0 and C ≥ 0.
We assume that matrix equations of the form

:Bx = g,

where g is a given column vector, can be solved directly for the vector x. If A = B − C represents a regular splitting of A, then the iterative method

:Bx^(m+1) = Cx^(m) + k,  m = 0, 1, 2, …,

where x^(0) is an arbitrary vector, can be carried out. Equivalently, we write the iteration in the form

:x^(m+1) = Dx^(m) + B⁻¹k.

The matrix D = B⁻¹C has nonnegative entries if A = B − C represents a regular splitting of A.
It can be shown that if A⁻¹ > 0, then ρ(D) < 1, where ρ(D) denotes the spectral radius of D, and thus D is a convergent matrix. As a consequence, the iterative method is necessarily convergent.
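The iteration above can be sketched in a few lines of NumPy. The 2 × 2 matrices below are illustrative choices of ours, not taken from this article; they are picked so that B⁻¹ ≥ 0 and C ≥ 0, making the splitting regular:

```python
import numpy as np

def splitting_iteration(B, C, k, x0, iters=50):
    """Iterate B x^(m+1) = C x^(m) + k for a splitting A = B - C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # Each step solves a system with B, so B should be cheap to
        # invert in practice (e.g. diagonal or triangular).
        x = np.linalg.solve(B, C @ x + k)
    return x

# Illustrative 2x2 regular splitting (not the article's example).
A = np.array([[ 4.0, -1.0],
              [-2.0,  5.0]])
B = np.diag(np.diag(A))      # positive diagonal, so B^-1 >= 0
C = B - A                    # C >= 0, hence A = B - C is regular
D = np.linalg.solve(B, C)    # iteration matrix D = B^-1 C
rho = np.abs(np.linalg.eigvals(D)).max()   # spectral radius, here < 1
k = np.array([2.0, 3.0])
x = splitting_iteration(B, C, k, np.zeros(2))
```

Since ρ(D) ≈ 0.32 here, the iterates approach the solution of Ax = k geometrically.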
If, in addition, the splitting is chosen so that the matrix B is a diagonal matrix (with the diagonal entries all non-zero, since B must be invertible), then B can be inverted in linear time (see Time complexity).
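The linear-time claim is concrete: applying B⁻¹ to a vector reduces to ''n'' scalar divisions when B is diagonal. A small sketch with made-up numbers:

```python
import numpy as np

# Applying B^-1 to a vector v when B is diagonal: n divisions, O(n)
# time, versus O(n^3) for a general dense solve. Values are
# illustrative, not from the article.
b_diag = np.array([4.0, 5.0, 2.0])    # nonzero diagonal entries of B
v = np.array([8.0, 10.0, 6.0])
x = v / b_diag                        # elementwise: x_i = v_i / B_ii
```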
Matrix iterative methods
Many iterative methods can be described as a matrix splitting. If the diagonal entries of the matrix A are all nonzero, and we express the matrix A as the matrix sum

:A = D − U − L,

where D is the diagonal part of A, and U and L are respectively strictly upper and lower triangular ''n'' × ''n'' matrices, then we have the following.
The Jacobi method can be represented in matrix form as a splitting

:B = D, C = U + L.

The Gauss–Seidel method can be represented in matrix form as a splitting

:B = D − L, C = U.

The method of successive over-relaxation with relaxation factor ''ω'' can be represented in matrix form as a splitting

:B = (1/ω)(D − ωL), C = (1/ω)[(1 − ω)D + ωU].
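The three splittings above can be written out directly in NumPy; the sketch below (function names are ours) forms D, U and L under the sign convention A = D − U − L and returns the pair (B, C) for each method. In every case B − C recombines to A:

```python
import numpy as np

def split_parts(A):
    """Write A = D - U - L: D diagonal, U/L strictly upper/lower."""
    D = np.diag(np.diag(A))
    U = -np.triu(A, 1)    # strictly upper part of A, negated
    L = -np.tril(A, -1)   # strictly lower part of A, negated
    return D, U, L

def jacobi_split(A):
    D, U, L = split_parts(A)
    return D, U + L                     # B = D, C = U + L

def gauss_seidel_split(A):
    D, U, L = split_parts(A)
    return D - L, U                     # B = D - L, C = U

def sor_split(A, w):
    D, U, L = split_parts(A)
    return (D - w * L) / w, ((1 - w) * D + w * U) / w

# Illustrative matrix (ours, not the article's example).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
```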
Example
Regular splitting
In the matrix equation Ax = k, let

Let us apply the splitting used in the Jacobi method: we split A in such a way that B consists of ''all'' of the diagonal elements of A, and C consists of ''all'' of the off-diagonal elements of A, negated. (Of course this is not the only useful way to split a matrix into two matrices.) We have
:
:
Since B⁻¹ ≥ 0 and C ≥ 0, the splitting A = B − C is a regular splitting. Since A⁻¹ > 0, the spectral radius ρ(D) < 1. Hence, the matrix D is convergent and the method necessarily converges for the problem. Note that the diagonal elements of A are all greater than zero, the off-diagonal elements of A are all less than zero and A is strictly diagonally dominant.
The method applied to the problem then takes the form

:x^(m+1) = Dx^(m) + B⁻¹k.

The exact solution to the equation is

The first few iterates for the equation are listed in the table below, beginning with x^(0). From the table one can see that the method is evidently converging to the solution, albeit rather slowly.
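A runnable sketch of this Jacobi regular splitting follows. The matrix and right-hand side are stand-ins of ours (the original values are not reproduced above), chosen to match the stated properties: positive diagonal, negative off-diagonal entries, strict diagonal dominance:

```python
import numpy as np

# Stand-in system, not the article's actual numbers: positive
# diagonal, negative off-diagonal, strictly diagonally dominant.
A = np.array([[ 6.0, -2.0, -3.0],
              [-1.0,  4.0, -2.0],
              [-3.0, -1.0,  5.0]])
k = np.array([1.0, 1.0, 1.0])

B = np.diag(np.diag(A))   # Jacobi: B = diagonal of A, so B^-1 >= 0
C = B - A                 # off-diagonal entries of A, negated; C >= 0

x = np.zeros(3)           # x^(0) = 0
for _ in range(200):      # x^(m+1) = B^-1 (C x^(m) + k)
    x = (C @ x + k) / np.diag(A)
```

Because B is diagonal, each step costs only a matrix–vector product and three divisions.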
Jacobi method
As stated above, the Jacobi method is the same as the specific regular splitting demonstrated above.
Gauss–Seidel method
Since the diagonal entries of the matrix A in the problem are all nonzero, we can express the matrix A as the splitting A = D − U − L, where
We then have
:
:
The Gauss–Seidel method applied to the problem takes the form

:x^(m+1) = (D − L)⁻¹(Ux^(m) + k).

The first few iterates for the equation are listed in the table below, beginning with x^(0). From the table one can see that the method is evidently converging to the solution, somewhat faster than the Jacobi method described above.
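In code, solving (D − L)x^(m+1) = Ux^(m) + k by forward substitution amounts to using each updated component immediately. A sketch on the same stand-in system as in the Jacobi example (ours, not the article's actual numbers):

```python
import numpy as np

A = np.array([[ 6.0, -2.0, -3.0],    # stand-in system, not the
              [-1.0,  4.0, -2.0],    # article's actual numbers
              [-3.0, -1.0,  5.0]])
k = np.array([1.0, 1.0, 1.0])
n = len(k)

x = np.zeros(n)                      # x^(0) = 0
for _ in range(100):
    for i in range(n):
        # Forward substitution with B = D - L: components x[0..i-1]
        # already hold the new iterate when row i is processed.
        s = k[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
        x[i] = s / A[i, i]
```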
Successive over-relaxation method
Let ''ω'' = 1.1. Using the successive over-relaxation splitting of the matrix A in the problem, we have
:
:
:
The successive over-relaxation method applied to the problem takes the form

:x^(m+1) = (D − ωL)⁻¹[(1 − ω)D + ωU]x^(m) + ω(D − ωL)⁻¹k.

The first few iterates for the equation are listed in the table below, beginning with x^(0). From the table one can see that the method is evidently converging to the solution, slightly faster than the Gauss–Seidel method described above.
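Componentwise, SOR extrapolates each Gauss–Seidel update by the factor ''ω''; setting ω = 1 recovers Gauss–Seidel exactly. A sketch with ω = 1.1 on the same stand-in system as above (ours, not the article's actual numbers):

```python
import numpy as np

w = 1.1                              # relaxation factor
A = np.array([[ 6.0, -2.0, -3.0],    # stand-in system, not the
              [-1.0,  4.0, -2.0],    # article's actual numbers
              [-3.0, -1.0,  5.0]])
k = np.array([1.0, 1.0, 1.0])
n = len(k)

x = np.zeros(n)                      # x^(0) = 0
for _ in range(100):
    for i in range(n):
        # One Gauss-Seidel update for component i ...
        gs = (k[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        # ... extrapolated by w (w = 1 would be plain Gauss-Seidel).
        x[i] = (1.0 - w) * x[i] + w * gs
```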
See also

* List of operator splitting topics
* Matrix decomposition
* M-matrix
* Stieltjes matrix