Definitions and example of algorithm
The process of row reduction makes use of elementary row operations, and can be divided into two parts. The first part (sometimes called forward elimination) reduces a given system to row echelon form, from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called back substitution) continues to use row operations until the solution is found; in other words, it puts the matrix into reduced row echelon form. Another point of view, which turns out to be very useful for analyzing the algorithm, is that row reduction produces a matrix decomposition of the original matrix. The elementary row operations may be viewed as the multiplication on the left of the original matrix by elementary matrices. Alternatively, the sequence of elementary operations that zeroes out a single column below its pivot may be viewed as multiplication by a Frobenius matrix. Then the first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.
Row operations

There are three types of elementary row operations which may be performed on the rows of a matrix:
# Interchanging two rows.
# Multiplying a row by a non-zero scalar.
# Adding a scalar multiple of one row to another.
If the matrix is associated to a system of linear equations, then these operations do not change the solution set. Therefore, if one's goal is to solve a system of linear equations, then using these row operations could make the problem easier, as the short sketch below illustrates.
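As an illustration only, the three operations can be written as one-line updates on a NumPy array; this is a minimal sketch, and the matrix used here is an arbitrary example:

import numpy as np

A = np.array([[ 2.,  1., -1.],
              [-3., -1.,  2.],
              [-2.,  1.,  2.]])

A[[0, 1]] = A[[1, 0]]   # type 1: interchange rows 0 and 1
A[2] *= 0.5             # type 2: multiply a row by a non-zero scalar
A[1] += 4 * A[0]        # type 3: add a scalar multiple of one row to another
print(A)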
Echelon form

For each row in a matrix, if the row does not consist of only zeros, then the leftmost nonzero entry is called the ''leading coefficient'' (or ''pivot'') of that row. If two leading coefficients are in the same column, then a row operation of type 3 can be used to make one of those coefficients zero. Then by using the row swapping operation, one can always order the rows so that for every non-zero row, the leading coefficient is to the right of the leading coefficient of the row above. If this is the case, then the matrix is said to be in row echelon form. Consequently, the lower left part of the matrix contains only zeros, and all of the zero rows are below the non-zero rows. The word "echelon" is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom. For example, the following matrix is in row echelon form, and its leading coefficients, 2 and 3, are shown in bold:

$$\begin{bmatrix} 0 & \mathbf{2} & 1 & -1 \\ 0 & 0 & \mathbf{3} & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

It is in echelon form because the zero row is at the bottom, and the leading coefficient of the second row (in the third column) is to the right of the leading coefficient of the first row (in the second column). A matrix is said to be in reduced row echelon form if furthermore all of the leading coefficients are equal to 1 (which can be achieved by using the elementary row operation of type 2), and in every column containing a leading coefficient, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3). A checker for the row echelon conditions is sketched below.
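To make the two conditions concrete, here is a minimal sketch (using NumPy; the helper name is_row_echelon is my own) that tests whether a matrix satisfies the row echelon conditions just described:

import numpy as np

def is_row_echelon(A, tol=1e-12):
    """Check the row echelon conditions described above."""
    prev_pivot_col = -1          # column of the pivot in the previous row
    seen_zero_row = False
    for row in A:
        nonzero = np.flatnonzero(np.abs(row) > tol)
        if nonzero.size == 0:    # zero row: all later rows must also be zero
            seen_zero_row = True
            continue
        pivot_col = nonzero[0]   # leftmost nonzero entry = leading coefficient
        if seen_zero_row or pivot_col <= prev_pivot_col:
            return False         # pivot not strictly to the right, or a nonzero row below a zero row
        prev_pivot_col = pivot_col
    return True

A = np.array([[0, 2, 1, -1],
              [0, 0, 3,  1],
              [0, 0, 0,  0]], dtype=float)
print(is_row_echelon(A))  # True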
Example of the algorithm

Suppose the goal is to find and describe the set of solutions to the following system of linear equations:

$$\begin{alignedat}{4} 2x & {}+{} & y & {}-{} & z & {}={} & 8 & \qquad (L_1) \\ -3x & {}-{} & y & {}+{} & 2z & {}={} & -11 & \qquad (L_2) \\ -2x & {}+{} & y & {}+{} & 2z & {}={} & -3 & \qquad (L_3) \end{alignedat}$$

The display below shows the row reduction process applied to the augmented matrix associated with the system. In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate $x$ from all equations below $L_1$, and then eliminate $y$ from all equations below $L_2$. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

$$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array}\right] \xrightarrow{\substack{L_2 + \frac{3}{2}L_1 \to L_2 \\ L_3 + L_1 \to L_3}} \left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & \frac12 & \frac12 & 1 \\ 0 & 2 & 1 & 5 \end{array}\right] \xrightarrow{L_3 - 4L_2 \to L_3} \left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & \frac12 & \frac12 & 1 \\ 0 & 0 & -1 & 1 \end{array}\right]$$

The labels above the arrows describe which row operations have just been performed. So for the first step, $x$ is eliminated from $L_2$ by adding $\tfrac32 L_1$ to $L_2$. Next, $x$ is eliminated from $L_3$ by adding $L_1$ to $L_3$. Once $y$ is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve the variables in reverse order, a process known as back-substitution. One sees the solution is $z = -1$, $y = 3$, and $x = 2$. So there is a unique solution to the original system of equations. Instead of stopping once the matrix is in echelon form, one could continue until the matrix is in ''reduced'' row echelon form:

$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{array}\right]$$

The process of row reducing until the matrix is reduced is sometimes referred to as Gauss–Jordan elimination, to distinguish it from stopping after reaching echelon form. A short implementation of this worked example is sketched below.
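The following minimal sketch (plain Python, no libraries; variable names are my own) carries out forward elimination and back-substitution for this system. No pivoting is performed, which suffices here since all pivots are nonzero:

# Augmented matrix [A | b] of the system above.
M = [[ 2.0,  1.0, -1.0,   8.0],
     [-3.0, -1.0,  2.0, -11.0],
     [-2.0,  1.0,  2.0,  -3.0]]
n = 3

# Forward elimination: eliminate x, then y, below the pivots.
for k in range(n):
    for i in range(k + 1, n):
        f = M[i][k] / M[k][k]
        for j in range(k, n + 1):
            M[i][j] -= f * M[k][j]

# Back-substitution: solve for the unknowns in reverse order.
x = [0.0] * n
for i in reversed(range(n)):
    s = sum(M[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (M[i][n] - s) / M[i][i]

print(x)  # [2.0, 3.0, -1.0]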
History

The method of Gaussian elimination appears – albeit without proof – in the Chinese mathematical text Chapter Eight: ''Rectangular Arrays'' of ''The Nine Chapters on the Mathematical Art''. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC. It was commented on by Liu Hui in the 3rd century.
Applications

Historically, the first application of the row reduction method is for solving systems of linear equations. Below are some other important applications of the algorithm.
Computing determinants

To explain how Gaussian elimination allows the computation of the determinant of a square matrix, we have to recall how the elementary row operations change the determinant:
* Swapping two rows multiplies the determinant by −1.
* Multiplying a row by a nonzero scalar multiplies the determinant by the same scalar.
* Adding to one row a scalar multiple of another does not change the determinant.
If Gaussian elimination applied to a square matrix $A$ produces a row echelon matrix $B$, let $d$ be the product of the scalars by which the determinant has been multiplied, using the above rules. Then the determinant of $A$ is the quotient by $d$ of the product of the elements of the diagonal of $B$:

$$\det(A) = \frac{\prod \operatorname{diag}(B)}{d}.$$

Computationally, for an $n \times n$ matrix, this method needs only $O(n^3)$ arithmetic operations, while using the Leibniz formula for determinants requires $O(n \cdot n!)$ operations (number of summands in the formula times the number of multiplications in each summand), and recursive Laplace expansion requires $O(n \, 2^n)$ operations if the sub-determinants are memorized for being computed only once (number of operations in a linear combination times the number of sub-determinants to compute, which are determined by their columns). Even on the fastest computers, these two methods are impractical or almost impracticable for $n$ above 20. A sketch of the determinant computation is given below.
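As a minimal sketch (plain Python; the function name det_by_elimination is my own), the determinant can be accumulated during elimination with partial pivoting. Only operations of types 1 and 3 are used, so $d = (-1)^{\text{number of swaps}}$, which the code accounts for by flipping the sign at each swap:

def det_by_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    det = 1.0
    for k in range(n):
        # Partial pivoting: pick the largest entry in column k; a swap flips the sign.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0                 # singular matrix
        if p != k:
            A[k], A[p] = A[p], A[k]
            det = -det
        # Type-3 operations below the pivot leave the determinant unchanged.
        for i in range(k + 1, n):
            f = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= f * A[k][j]
        det *= A[k][k]                 # accumulate the diagonal of B
    return det

print(det_by_elimination([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]))  # -1.0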
Finding the inverse of a matrix

A variant of Gaussian elimination called Gauss–Jordan elimination can be used for finding the inverse of a matrix, if it exists. If $A$ is an $n \times n$ square matrix, then one can use row reduction to compute its inverse matrix, if it exists. First, the $n \times n$ identity matrix is augmented to the right of $A$, forming an $n \times 2n$ block matrix $[A \mid I]$. Now through application of elementary row operations, find the reduced echelon form of this $n \times 2n$ matrix. The matrix $A$ is invertible if and only if the left block can be reduced to the identity matrix $I$; in this case the right block of the final matrix is $A^{-1}$. If the algorithm is unable to reduce the left block to $I$, then $A$ is not invertible. For example, consider the following matrix:

$$A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}.$$

To find the inverse of this matrix, one takes the following matrix augmented by the identity and row-reduces it as a 3 × 6 matrix:

$$[A \mid I] = \left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right].$$

By performing row operations, one can check that the reduced row echelon form of this augmented matrix is

$$[I \mid B] = \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \frac34 & \frac12 & \frac14 \\ 0 & 1 & 0 & \frac12 & 1 & \frac12 \\ 0 & 0 & 1 & \frac14 & \frac12 & \frac34 \end{array}\right].$$

One can think of each row operation as the left product by an elementary matrix. Denoting by $B$ the product of these elementary matrices, we showed, on the left, that $BA = I$, and therefore $B = A^{-1}$. On the right, we kept a record of $BI = B$, which we know is the inverse desired. This procedure for finding the inverse works for square matrices of any size; a sketch is given below.
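The following minimal sketch (plain Python; the function name invert is my own) runs Gauss–Jordan elimination on the block matrix $[A \mid I]$ and reads off the inverse from the right block:

def invert(A):
    """Invert A by Gauss-Jordan elimination on the block matrix [A | I]."""
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [list(map(float, A[i])) + [float(i == j) for j in range(n)]
         for i in range(n)]
    for k in range(n):
        # Partial pivoting to avoid a zero (or tiny) pivot.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0:
            raise ValueError("matrix is not invertible")
        M[k], M[p] = M[p], M[k]
        # Scale the pivot row so the pivot becomes 1 (type-2 operation).
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        # Clear the rest of the pivot column (type-3 operations).
        for i in range(n):
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    # The left block is now I, and the right block is the inverse.
    return [row[n:] for row in M]

print(invert([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]))
# [[0.75, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.75]]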
Computing ranks and bases

The Gaussian elimination algorithm can be applied to any $m \times n$ matrix $A$. In this way, for example, some 6 × 9 matrices can be transformed to a matrix that has a row echelon form like

$$T = \begin{bmatrix} a & * & * & * & * & * & * & * & * \\ 0 & 0 & b & * & * & * & * & * & * \\ 0 & 0 & 0 & c & * & * & * & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 & d & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix},$$

where the stars are arbitrary entries, and $a, b, c, d, e$ are nonzero entries. This echelon matrix $T$ contains a wealth of information about $A$: the rank of $A$ is 5, since there are 5 nonzero rows in $T$; the vector space spanned by the columns of $A$ has a basis consisting of the columns of $A$ corresponding to the columns of $T$ containing $a, b, c, d, e$, and the starred entries show how the remaining columns can be written as linear combinations of the basis columns. A sketch using a computer algebra system is given below.
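A minimal sketch using SymPy's exact rref (the small example matrix here is arbitrary, chosen only to have rank 2):

from sympy import Matrix

A = Matrix([[1, 2, 0],
            [2, 4, 1],
            [3, 6, 1]])

R, pivot_cols = A.rref()   # reduced row echelon form and pivot column indices
print(R)                   # Matrix([[1, 2, 0], [0, 0, 1], [0, 0, 0]])
print(pivot_cols)          # (0, 2): the rank is 2
print([A.col(j).T for j in pivot_cols])  # these columns of A form a basis of the column space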
Computational efficiency

The number of arithmetic operations required to perform row reduction is one way of measuring the algorithm's computational efficiency. For example, to solve a system of $n$ equations for $n$ unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires $n(n+1)/2$ divisions, $(2n^3 + 3n^2 - 5n)/6$ multiplications, and $(2n^3 + 3n^2 - 5n)/6$ subtractions, for a total of approximately $2n^3/3$ operations. Thus the algorithm has an arithmetic complexity of $O(n^3)$; the sketch below tallies these counts empirically.
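As a quick check of the $2n^3/3$ estimate, the following sketch (plain Python, random matrices, counter names my own) counts multiplications/divisions and subtractions during forward elimination and back-substitution; no pivoting is done, which is safe here since random floating-point pivots are almost never zero:

import random

def count_ops(n):
    """Count arithmetic operations for one solve of a random n x n system."""
    M = [[random.random() for _ in range(n + 1)] for _ in range(n)]
    muls = subs = 0
    for k in range(n):                     # forward elimination
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]; muls += 1
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]; muls += 1; subs += 1
    for i in reversed(range(n)):           # back-substitution
        for j in range(i + 1, n):
            M[i][n] -= M[i][j] * M[j][n]; muls += 1; subs += 1
        M[i][n] /= M[i][i]; muls += 1
    return muls + subs

for n in (10, 50, 100):
    print(n, count_ops(n), round(2 * n**3 / 3))  # counts agree to leading order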
Bareiss algorithm

The first strongly-polynomial time algorithm for Gaussian elimination was published by Jack Edmonds in 1967. Independently, and almost simultaneously, Erwin Bareiss discovered another algorithm, based on the following remark, which applies to a division-free variant of Gaussian elimination. In standard Gaussian elimination, one subtracts from each row $R_i$ below the pivot row $R_k$ a multiple of $R_k$ by $r_{i,k}/r_{k,k}$, where $r_{i,k}$ and $r_{k,k}$ are the entries in the pivot column of $R_i$ and $R_k$, respectively. Bareiss' variant consists, instead, of replacing $R_i$ with

$$\frac{r_{k,k} R_i - r_{i,k} R_k}{r_{k-1,k-1}},$$

where $r_{k-1,k-1}$ is the previous pivot (taken to be 1 at the first step). This produces a row echelon form that has the same zero entries as with the standard Gaussian elimination. Bareiss' main remark is that each matrix entry generated by this variant is the determinant of a submatrix of the original matrix. In particular, if one starts with integer entries, the divisions occurring in the algorithm are exact divisions resulting in integers. So, all intermediate and final entries are integers (see the sketch below). Moreover, Hadamard's inequality provides an upper bound on the absolute values of the intermediate and final entries, and thus a bit complexity of $\tilde{O}(n^5)$, using soft O notation. Moreover, as an upper bound on the size of final entries is known, a complexity $\tilde{O}(n^4)$ can be obtained with modular computation followed either by Chinese remaindering or Hensel lifting. As a corollary, the following problems can be solved in strongly polynomial time with the same bit complexity:
* Testing whether $m$ given rational vectors are linearly independent
* Computing the determinant, the rank, or the inverse of a matrix with rational entries
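A minimal sketch of the fraction-free variant in plain Python, for integer input (the function name bareiss is my own; for simplicity it assumes no pivoting is needed). Note that every division is exact, so the arithmetic stays in the integers:

def bareiss(A):
    """Fraction-free elimination; returns an integer row echelon form."""
    A = [row[:] for row in A]
    n = len(A)
    prev = 1                      # previous pivot r_{k-1,k-1}; 1 at the first step
    for k in range(n - 1):
        assert A[k][k] != 0       # sketch assumes nonzero pivots throughout
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Exact integer division: each new entry is a minor of A.
                A[i][j] = (A[k][k] * A[i][j] - A[i][k] * A[k][j]) // prev
            A[i][k] = 0
        prev = A[k][k]
    return A

B = bareiss([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])
print(B)            # all intermediate and final entries are integers
print(B[-1][-1])    # the last pivot equals det(A) = -1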
Numeric instability

One possible problem is numerical instability, caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix, one would need to divide by that number. This means that any error which existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable when using partial pivoting, even though there are examples of stable matrices for which it is unstable. The small demonstration below illustrates the effect.
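The classic two-equation example below (a sketch, with $\varepsilon = 10^{-20}$ as the tiny leading coefficient) shows the amplification: eliminating with the tiny pivot destroys the answer, while swapping the rows first gives the correct result.

def solve2(a11, a12, b1, a21, a22, b2):
    """Solve a 2x2 system by elimination, with no pivoting."""
    f = a21 / a11                  # dividing by a tiny pivot amplifies error
    a22, b2 = a22 - f * a12, b2 - f * b1
    y = b2 / a22
    x = (b1 - a12 * y) / a11
    return x, y

eps = 1e-20
# System: eps*x + y = 1,  x + y = 2; the exact solution is x ~ 1, y ~ 1.
print(solve2(eps, 1, 1, 1, 1, 2))  # (0.0, 1.0)  -- x is completely wrong
print(solve2(1, 1, 2, eps, 1, 1))  # rows swapped first: (1.0, 1.0), correct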
Generalizations

Gaussian elimination can be performed over any field, not just the real numbers. Buchberger's algorithm is a generalization of Gaussian elimination to systems of polynomial equations. This generalization depends heavily on the notion of a monomial order. The choice of an ordering on the variables is already implicit in Gaussian elimination, manifesting as the choice to work from left to right when selecting pivot positions. Computing the rank of a tensor of order greater than 2 is NP-hard. Therefore, if $P \neq NP$, there cannot be a polynomial time analog of Gaussian elimination for higher-order tensors (matrices are array representations of order-2 tensors).
Pseudocode

As explained above, Gaussian elimination transforms a given $m \times n$ matrix $A$ into a matrix in row-echelon form. In the following pseudocode, A[i, j] denotes the entry of the matrix $A$ in row $i$ and column $j$, with the indices starting from 1. The transformation is performed ''in place'', meaning that the original matrix is lost, being eventually replaced by its row-echelon form.
h := 1 /* ''Initialization of the pivot row'' */
k := 1 /* ''Initialization of the pivot column'' */

while h ≤ m and k ≤ n
    /* ''Find the k-th pivot:'' */
    i_max := argmax (i = h ... m, abs(A[i, k]))
    if A[i_max, k] = 0
        /* ''No pivot in this column, pass to next column'' */
        k := k + 1
    else
        swap rows(h, i_max)
        /* ''Do for all rows below pivot:'' */
        for i = h + 1 ... m:
            f := A[i, k] / A[h, k]
            /* ''Fill with zeros the lower part of pivot column:'' */
            A[i, k] := 0
            /* ''Do for all remaining elements in current row:'' */
            for j = k + 1 ... n:
                A[i, j] := A[i, j] - A[h, j] * f
        /* ''Increase pivot row and column'' */
        h := h + 1
        k := k + 1
This algorithm differs slightly from the one discussed earlier, by choosing a pivot with the largest absolute value. Such a ''partial pivoting'' may be required if, at the pivot place, the entry of the matrix is zero. In any case, choosing the largest possible absolute value of the pivot improves the numerical stability of the algorithm when floating point is used for representing numbers.
Upon completion of this procedure the matrix will be in row echelon form and the corresponding system may be solved by back substitution.
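For concreteness, here is a direct Python transcription of the pseudocode above (using NumPy and 0-based indices; a sketch rather than production code):

import numpy as np

def row_echelon(A):
    """In-place Gaussian elimination with partial pivoting (0-based indices)."""
    A = A.astype(float)
    m, n = A.shape
    h, k = 0, 0                                   # pivot row and pivot column
    while h < m and k < n:
        i_max = h + np.argmax(np.abs(A[h:, k]))   # find the k-th pivot
        if A[i_max, k] == 0:
            k += 1                                # no pivot in this column, pass to next column
        else:
            A[[h, i_max]] = A[[i_max, h]]         # swap rows h and i_max
            for i in range(h + 1, m):             # for all rows below the pivot
                f = A[i, k] / A[h, k]
                A[i, k] = 0.0                     # zero the lower part of the pivot column
                A[i, k + 1:] -= f * A[h, k + 1:]  # update the remaining entries of the row
            h += 1
            k += 1
    return A

A = np.array([[2, 1, -1, 8], [-3, -1, 2, -11], [-2, 1, 2, -3]])
print(row_echelon(A))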
See also
* Fangcheng (mathematics)
* Gram–Schmidt process – another process for bringing a matrix into some canonical form.
* Fourier–Motzkin elimination – an algorithm for eliminating variables of a system of linear inequalities, rather than equations.