Matrix splitting

In the mathematical discipline of numerical linear algebra, a matrix splitting is an expression which represents a given matrix as a sum or difference of matrices. Many iterative methods (for example, for systems of differential equations) depend upon the direct solution of matrix equations involving matrices more general than tridiagonal matrices. These matrix equations can often be solved directly and efficiently when written as a matrix splitting. The technique was devised by Richard S. Varga in 1960.


Regular splittings

We seek to solve the matrix equation

:\mathbf{A}\mathbf{x} = \mathbf{k}, \quad (1)

where A is a given ''n'' × ''n'' non-singular matrix, and k is a given column vector with ''n'' components. We split the matrix A into

:\mathbf{A} = \mathbf{B} - \mathbf{C}, \quad (2)

where B and C are ''n'' × ''n'' matrices. If, for an arbitrary ''n'' × ''n'' matrix M, M has nonnegative entries, we write M ≥ 0. If M has only positive entries, we write M > 0. Similarly, if the matrix M₁ − M₂ has nonnegative entries, we write M₁ ≥ M₂.

Definition: A = B − C is a regular splitting of A if B⁻¹ ≥ 0 and C ≥ 0.

We assume that matrix equations of the form

:\mathbf{B}\mathbf{x} = \mathbf{g}, \quad (3)

where g is a given column vector, can be solved directly for the vector x. If (2) represents a regular splitting of A, then the iterative method

:\mathbf{B}\mathbf{x}^{(m+1)} = \mathbf{C}\mathbf{x}^{(m)} + \mathbf{k}, \quad m = 0, 1, 2, \ldots, \quad (4)

where x⁽⁰⁾ is an arbitrary vector, can be carried out. Equivalently, we write (4) in the form

:\mathbf{x}^{(m+1)} = \mathbf{B}^{-1}\mathbf{C}\mathbf{x}^{(m)} + \mathbf{B}^{-1}\mathbf{k}, \quad m = 0, 1, 2, \ldots \quad (5)

The matrix D = B⁻¹C has nonnegative entries if (2) represents a regular splitting of A. It can be shown that if A⁻¹ > 0, then \rho(\mathbf{D}) < 1, where \rho(\mathbf{D}) represents the spectral radius of D, and thus D is a convergent matrix. As a consequence, the iterative method (5) is necessarily convergent.

If, in addition, the splitting (2) is chosen so that the matrix B is a diagonal matrix (with the diagonal entries all non-zero, since B must be invertible), then B can be inverted in linear time (see time complexity).
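
For illustration, here is a minimal NumPy sketch of the iteration (4)–(5); the function name and the fixed iteration count are illustrative choices, not part of the method. Rather than forming B⁻¹ explicitly, it solves a system with B at each step, matching the assumption that equations of the form (3) can be solved directly.

```python
import numpy as np

def splitting_iteration(B, C, k, x0, num_iters=25):
    """Carry out x^(m+1) = B^{-1} (C x^(m) + k) for the splitting A = B - C."""
    x = np.asarray(x0, dtype=float)
    for _ in range(num_iters):
        # Solve B x_next = C x + k directly instead of inverting B.
        x = np.linalg.solve(B, C @ x + k)
    return x
```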


Matrix iterative methods

Many iterative methods can be described as a matrix splitting. If the diagonal entries of the matrix A are all nonzero, and we express the matrix A as the matrix sum

:\mathbf{A} = \mathbf{D} - \mathbf{U} - \mathbf{L}, \quad (6)

where D is the diagonal part of A, and U and L are respectively strictly upper and lower triangular ''n'' × ''n'' matrices, then we have the following.

The Jacobi method can be represented in matrix form as a splitting

:\mathbf{x}^{(m+1)} = \mathbf{D}^{-1}(\mathbf{L} + \mathbf{U})\mathbf{x}^{(m)} + \mathbf{D}^{-1}\mathbf{k}, \quad m = 0, 1, 2, \ldots \quad (7)

The Gauss–Seidel method can be represented in matrix form as a splitting

:\mathbf{x}^{(m+1)} = (\mathbf{D} - \mathbf{L})^{-1}\mathbf{U}\mathbf{x}^{(m)} + (\mathbf{D} - \mathbf{L})^{-1}\mathbf{k}, \quad m = 0, 1, 2, \ldots \quad (8)

The method of successive over-relaxation can be represented in matrix form as a splitting

:\mathbf{x}^{(m+1)} = (\mathbf{D} - \omega\mathbf{L})^{-1}\left[(1 - \omega)\mathbf{D} + \omega\mathbf{U}\right]\mathbf{x}^{(m)} + \omega(\mathbf{D} - \omega\mathbf{L})^{-1}\mathbf{k}, \quad m = 0, 1, 2, \ldots \quad (9)
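
A short NumPy sketch of one step of each of these three splittings, assuming only the conventions above (D diagonal, L and U the negated strictly lower and upper parts of A); the helper names are illustrative:

```python
import numpy as np

def split_dlu(A):
    """Return D, L, U with A = D - L - U as in equation (6)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)   # strictly lower part of A, negated
    U = -np.triu(A, 1)    # strictly upper part of A, negated
    return D, L, U

def jacobi_step(A, k, x):
    D, L, U = split_dlu(A)
    return np.linalg.solve(D, (L + U) @ x + k)                      # equation (7)

def gauss_seidel_step(A, k, x):
    D, L, U = split_dlu(A)
    return np.linalg.solve(D - L, U @ x + k)                        # equation (8)

def sor_step(A, k, x, w):
    D, L, U = split_dlu(A)
    return np.linalg.solve(D - w*L, ((1 - w)*D + w*U) @ x + w*k)    # equation (9)
```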


Example


Regular splitting

In equation (1), let

:\mathbf{A} = \begin{pmatrix} 6 & -2 & -3 \\ -1 & 4 & -2 \\ -3 & -1 & 5 \end{pmatrix}, \quad \mathbf{k} = \begin{pmatrix} 5 \\ -12 \\ 10 \end{pmatrix}. \quad (10)

Let us apply the splitting (2) which is used in the Jacobi method: we split A in such a way that B consists of ''all'' of the diagonal elements of A, and C consists of ''all'' of the off-diagonal elements of A, negated. (Of course this is not the only useful way to split a matrix into two matrices.) We have

:\mathbf{B} = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{pmatrix}, \quad \mathbf{C} = \begin{pmatrix} 0 & 2 & 3 \\ 1 & 0 & 2 \\ 3 & 1 & 0 \end{pmatrix}, \quad (11)

:\mathbf{A}^{-1} = \frac{1}{47} \begin{pmatrix} 18 & 13 & 16 \\ 11 & 21 & 15 \\ 13 & 12 & 22 \end{pmatrix}, \quad \mathbf{B}^{-1} = \begin{pmatrix} \frac{1}{6} & 0 & 0 \\ 0 & \frac{1}{4} & 0 \\ 0 & 0 & \frac{1}{5} \end{pmatrix},

:\mathbf{D} = \mathbf{B}^{-1}\mathbf{C} = \begin{pmatrix} 0 & \frac{1}{3} & \frac{1}{2} \\ \frac{1}{4} & 0 & \frac{1}{2} \\ \frac{3}{5} & \frac{1}{5} & 0 \end{pmatrix}, \quad \mathbf{B}^{-1}\mathbf{k} = \begin{pmatrix} \frac{5}{6} \\ -3 \\ 2 \end{pmatrix}.

Since B⁻¹ ≥ 0 and C ≥ 0, the splitting (11) is a regular splitting. Since A⁻¹ > 0, the spectral radius \rho(\mathbf{D}) < 1. (The approximate eigenvalues of D are \lambda_i \approx -0.4599820, -0.3397859, 0.7997679.) Hence, the matrix D is convergent and the method (5) necessarily converges for the problem (10). Note that the diagonal elements of A are all greater than zero, the off-diagonal elements of A are all less than zero, and A is strictly diagonally dominant.

The method (5) applied to the problem (10) then takes the form

:\mathbf{x}^{(m+1)} = \begin{pmatrix} 0 & \frac{1}{3} & \frac{1}{2} \\ \frac{1}{4} & 0 & \frac{1}{2} \\ \frac{3}{5} & \frac{1}{5} & 0 \end{pmatrix} \mathbf{x}^{(m)} + \begin{pmatrix} \frac{5}{6} \\ -3 \\ 2 \end{pmatrix}, \quad m = 0, 1, 2, \ldots \quad (12)

The exact solution to equation (10) is

:\mathbf{x} = \begin{pmatrix} 2 \\ -1 \\ 3 \end{pmatrix}. \quad (13)

The first few iterates for equation (12) are listed in the table below, beginning with x⁽⁰⁾ = (0.0, 0.0, 0.0)ᵀ. From the table one can see that the method is evidently converging to the solution (13), albeit rather slowly.

  m    x1^(m)    x2^(m)    x3^(m)
  0    0.0000    0.0000    0.0000
  1    0.8333   −3.0000    2.0000
  2    0.8333   −1.7917    1.9000
  3    1.1861   −1.8417    2.1417
  4    1.2903   −1.6326    2.3433
  5    1.4608   −1.5058    2.4476
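
The iterates above can be reproduced with a few lines of NumPy (a sketch under the same starting vector; the variable names are ours):

```python
import numpy as np

A = np.array([[ 6., -2., -3.],
              [-1.,  4., -2.],
              [-3., -1.,  5.]])
k = np.array([5., -12., 10.])

B = np.diag(np.diag(A))     # diagonal elements of A
C = B - A                   # off-diagonal elements of A, negated
D = np.linalg.solve(B, C)   # iteration matrix B^{-1} C

print(max(abs(np.linalg.eigvals(D))))   # spectral radius, ~0.7998 < 1

x = np.zeros(3)
for m in range(1, 6):
    x = np.linalg.solve(B, C @ x + k)   # one step of equation (12)
    print(m, x)                         # converges slowly toward (2, -1, 3)
```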


Jacobi method

As stated above, the Jacobi method (7) is the same as the specific regular splitting (11) demonstrated above.


Gauss–Seidel method

Since the diagonal entries of the matrix A in problem (10) are all nonzero, we can express the matrix A as the splitting (6), where

:\mathbf{D} = \begin{pmatrix} 6 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 5 \end{pmatrix}, \quad \mathbf{U} = \begin{pmatrix} 0 & 2 & 3 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{pmatrix}, \quad \mathbf{L} = \begin{pmatrix} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 3 & 1 & 0 \end{pmatrix}. \quad (14)

We then have

:(\mathbf{D} - \mathbf{L})^{-1} = \frac{1}{120} \begin{pmatrix} 20 & 0 & 0 \\ 5 & 30 & 0 \\ 13 & 6 & 24 \end{pmatrix},

:(\mathbf{D} - \mathbf{L})^{-1}\mathbf{U} = \frac{1}{120} \begin{pmatrix} 0 & 40 & 60 \\ 0 & 10 & 75 \\ 0 & 26 & 51 \end{pmatrix}, \quad (\mathbf{D} - \mathbf{L})^{-1}\mathbf{k} = \frac{1}{120} \begin{pmatrix} 100 \\ -335 \\ 233 \end{pmatrix}.

The Gauss–Seidel method (8) applied to the problem (10) takes the form

:\mathbf{x}^{(m+1)} = \frac{1}{120} \begin{pmatrix} 0 & 40 & 60 \\ 0 & 10 & 75 \\ 0 & 26 & 51 \end{pmatrix} \mathbf{x}^{(m)} + \frac{1}{120} \begin{pmatrix} 100 \\ -335 \\ 233 \end{pmatrix}, \quad m = 0, 1, 2, \ldots \quad (15)

The first few iterates for equation (15) are listed in the table below, beginning with x⁽⁰⁾ = (0.0, 0.0, 0.0)ᵀ. From the table one can see that the method is evidently converging to the solution (13), somewhat faster than the Jacobi method described above.

  m    x1^(m)    x2^(m)    x3^(m)
  0    0.0000    0.0000    0.0000
  1    0.8333   −2.7917    1.9417
  2    0.8736   −1.8108    2.1620
  3    1.3108   −1.5913    2.4682
  4    1.5370   −1.3817    2.6459
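
The same iterates follow from a direct NumPy transcription of (15) (a sketch; the variable names are ours):

```python
import numpy as np

A = np.array([[ 6., -2., -3.],
              [-1.,  4., -2.],
              [-3., -1.,  5.]])
k = np.array([5., -12., 10.])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)   # strictly lower part of A, negated
U = -np.triu(A, 1)    # strictly upper part of A, negated

x = np.zeros(3)
for m in range(1, 5):
    x = np.linalg.solve(D - L, U @ x + k)   # one step of equation (15)
    print(m, x)   # approaches (2, -1, 3) faster than the Jacobi iterates
```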


Successive over-relaxation method

Let ''ω'' = 1.1. Using the splitting (14) of the matrix A in problem (10) for the successive over-relaxation method, we have

:(\mathbf{D} - \omega\mathbf{L})^{-1} = \frac{1}{12} \begin{pmatrix} 2 & 0 & 0 \\ 0.55 & 3 & 0 \\ 1.441 & 0.66 & 2.4 \end{pmatrix},

:(\mathbf{D} - \omega\mathbf{L})^{-1}\left[(1 - \omega)\mathbf{D} + \omega\mathbf{U}\right] = \frac{1}{12} \begin{pmatrix} -1.2 & 4.4 & 6.6 \\ -0.33 & 0.01 & 8.415 \\ -0.8646 & 2.9062 & 5.0073 \end{pmatrix},

:\omega(\mathbf{D} - \omega\mathbf{L})^{-1}\mathbf{k} = \frac{1}{12} \begin{pmatrix} 11 \\ -36.575 \\ 25.6135 \end{pmatrix}.

The successive over-relaxation method (9) applied to the problem (10) takes the form

:\mathbf{x}^{(m+1)} = \frac{1}{12} \begin{pmatrix} -1.2 & 4.4 & 6.6 \\ -0.33 & 0.01 & 8.415 \\ -0.8646 & 2.9062 & 5.0073 \end{pmatrix} \mathbf{x}^{(m)} + \frac{1}{12} \begin{pmatrix} 11 \\ -36.575 \\ 25.6135 \end{pmatrix}, \quad m = 0, 1, 2, \ldots \quad (16)

The first few iterates for equation (16) are listed in the table below, beginning with x⁽⁰⁾ = (0.0, 0.0, 0.0)ᵀ. From the table one can see that the method is evidently converging to the solution (13), slightly faster than the Gauss–Seidel method described above.

  m    x1^(m)    x2^(m)    x3^(m)
  0    0.0000    0.0000    0.0000
  1    0.9167   −3.0479    2.1345
  2    0.8814   −1.5789    2.2209
  3    1.4711   −1.5161    2.6153
  4    1.6521   −1.2556    2.7526
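
Again, the iterates can be reproduced by transcribing (16) into NumPy (a sketch; the variable names are ours):

```python
import numpy as np

A = np.array([[ 6., -2., -3.],
              [-1.,  4., -2.],
              [-3., -1.,  5.]])
k = np.array([5., -12., 10.])
w = 1.1   # relaxation factor

D = np.diag(np.diag(A))
L = -np.tril(A, -1)   # strictly lower part of A, negated
U = -np.triu(A, 1)    # strictly upper part of A, negated

x = np.zeros(3)
for m in range(1, 5):
    x = np.linalg.solve(D - w*L, ((1 - w)*D + w*U) @ x + w*k)   # equation (16)
    print(m, x)   # slightly faster convergence than Gauss-Seidel for this w
```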


See also

* List of operator splitting topics
* Matrix decomposition
* M-matrix
* Stieltjes matrix

