
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations:
* Swapping two rows,
* Multiplying a row by a nonzero number,
* Adding a multiple of one row to another row.
Using these operations, a matrix can always be transformed into an
upper triangular matrix (possibly bordered by rows or columns of zeros), and in fact one that is in
row echelon form. Once all of the leading coefficients (the leftmost nonzero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in
reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where two elementary operations on different rows are done at the first and third steps), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form.
:$\begin{bmatrix} 1 & 3 & 1 & 9 \\ 1 & 1 & -1 & 1 \\ 3 & 11 & 5 & 35 \end{bmatrix} \to \begin{bmatrix} 1 & 3 & 1 & 9 \\ 0 & -2 & -2 & -8 \\ 0 & 2 & 2 & 8 \end{bmatrix} \to \begin{bmatrix} 1 & 3 & 1 & 9 \\ 0 & -2 & -2 & -8 \\ 0 & 0 & 0 & 0 \end{bmatrix} \to \begin{bmatrix} 1 & 0 & -2 & -3 \\ 0 & 1 & 1 & 4 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
Using row operations to convert a matrix into reduced row echelon form is sometimes called Gauss–Jordan elimination. In this case, the term ''Gaussian elimination'' refers to the process until it has reached its upper triangular, or (unreduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced.
Definitions and example of algorithm
The process of row reduction makes use of elementary row operations, and can be divided into two parts. The first part (sometimes called forward elimination) reduces a given system to row echelon form, from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called back substitution) continues to use row operations until the solution is found; in other words, it puts the matrix into reduced row echelon form.
Another point of view, which turns out to be very useful to analyze the algorithm, is that row reduction produces a
matrix decomposition of the original matrix. The elementary row operations may be viewed as the multiplication on the left of the original matrix by
elementary matrices. Alternatively, a sequence of elementary operations that reduces a single row may be viewed as multiplication by a
Frobenius matrix. Then the first part of the algorithm computes an
LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix.
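This decomposition point of view can be made concrete in a few lines of code. The following is a minimal Python sketch (an illustration, not part of the original text; the function name is hypothetical) that records the multiplier of each type-3 row operation during forward elimination; the recorded multipliers form a unit lower triangular matrix L with LU = A. It assumes every pivot is nonzero, so no row swaps are needed:

 from fractions import Fraction
 
 def lu_no_pivot(A):
     # Forward elimination without row swaps; returns (L, U) with L*U == A.
     # Assumes every pivot is nonzero, so no row interchanges are needed.
     n = len(A)
     U = [[Fraction(x) for x in row] for row in A]   # working copy, becomes U
     L = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
     for k in range(n):
         for i in range(k + 1, n):
             m = U[i][k] / U[k][k]          # elimination multiplier
             L[i][k] = m                    # record it as the (i, k) entry of L
             for j in range(k, n):
                 U[i][j] -= m * U[k][j]     # row operation R_i := R_i - m R_k
     return L, U
 
 A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
 L, U = lu_no_pivot(A)
 for i in range(3):
     for j in range(3):
         assert sum(L[i][k] * U[k][j] for k in range(3)) == A[i][j]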
Row operations
There are three types of elementary row operations which may be performed on the rows of a matrix:
# Interchanging two rows.
# Multiplying a row by a non-zero
scalar.
# Adding a scalar multiple of one row to another.
If the matrix is associated to a system of linear equations, then these operations do not change the solution set. Therefore, if one's goal is to solve a system of linear equations, then using these row operations could make the problem easier.
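As a concrete illustration, the three operations are easy to express on a matrix stored as a list of rows; the following Python helpers (hypothetical names, for illustration only) implement them:

 def swap_rows(A, r, s):
     # Type 1: interchange rows r and s
     A[r], A[s] = A[s], A[r]
 
 def scale_row(A, r, c):
     # Type 2: multiply row r by a nonzero scalar c
     assert c != 0
     A[r] = [c * x for x in A[r]]
 
 def add_multiple(A, r, s, c):
     # Type 3: add c times row s to row r
     A[r] = [x + c * y for x, y in zip(A[r], A[s])]

Applied to the augmented matrix of a linear system, each of these calls leaves the solution set unchanged.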
Echelon form
For each row in a matrix, if the row does not consist of only zeros, then the leftmost nonzero entry is called the ''
leading coefficient'' (or ''pivot'') of that row. So if two leading coefficients are in the same column, then a row operation of
type 3 could be used to make one of those coefficients zero. Then by using the row swapping operation, one can always order the rows so that for every non-zero row, the leading coefficient is to the right of the leading coefficient of the row above. If this is the case, then the matrix is said to be in row echelon form. So the lower left part of the matrix contains only zeros, and all of the zero rows are below the non-zero rows. The word "echelon" is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom.
For example, the following matrix is in row echelon form, and its leading coefficients are shown in bold:
:$\begin{bmatrix} 0 & \mathbf{2} & 1 & -1 \\ 0 & 0 & \mathbf{3} & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}$
It is in echelon form because the zero row is at the bottom, and the leading coefficient of the second row (in the third column), is to the right of the leading coefficient of the first row (in the second column).
A matrix is said to be in reduced row echelon form if furthermore all of the leading coefficients are equal to 1 (which can be achieved by using the elementary row operation of type 2), and in every column containing a leading coefficient, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3).
Example of the algorithm
Suppose the goal is to find and describe the set of solutions to the following system of linear equations:
:$\begin{align} 2x + y - z &= 8 \\ -3x - y + 2z &= -11 \\ -2x + y + 2z &= -3 \end{align}$
The following shows the row reduction process applied to the associated augmented matrix of the system. In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate x from all equations below L_1, and then eliminate y from all equations below L_2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.
:$\left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ -3 & -1 & 2 & -11 \\ -2 & 1 & 2 & -3 \end{array}\right] \to \left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & \tfrac12 & \tfrac12 & 1 \\ 0 & 2 & 1 & 5 \end{array}\right] \to \left[\begin{array}{ccc|c} 2 & 1 & -1 & 8 \\ 0 & \tfrac12 & \tfrac12 & 1 \\ 0 & 0 & -1 & 1 \end{array}\right] \to \left[\begin{array}{ccc|c} 1 & 0 & 0 & 2 \\ 0 & 1 & 0 & 3 \\ 0 & 0 & 1 & -1 \end{array}\right]$
For the first step, x is eliminated from L_2 by adding \tfrac32 L_1 to L_2; next, x is eliminated from L_3 by adding L_1 to L_3. These row operations are written as
:$L_2 + \tfrac32 L_1 \to L_2, \qquad L_3 + L_1 \to L_3.$
Once y is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve the variables in reverse order, a process known as back-substitution. One sees the solution is z = −1, y = 3, and x = 2. So there is a unique solution to the original system of equations.
Instead of stopping once the matrix is in echelon form, one could continue until the matrix is in ''reduced'' row echelon form, as it is done in the table. The process of row reducing until the matrix is reduced is sometimes referred to as Gauss–Jordan elimination, to distinguish it from stopping after reaching echelon form.
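The computation above can be replayed in a few lines of Python (a sketch assuming the example system shown above; exact rational arithmetic is used to avoid rounding, and the helper name is illustrative):

 from fractions import Fraction
 
 # Augmented matrix of the example system
 M = [[Fraction(v) for v in row] for row in
      [[2, 1, -1, 8], [-3, -1, 2, -11], [-2, 1, 2, -3]]]
 
 def add_multiple(A, r, s, c):           # type-3 row operation: R_r := R_r + c R_s
     A[r] = [x + c * y for x, y in zip(A[r], A[s])]
 
 # Forward elimination, mirroring the steps above
 add_multiple(M, 1, 0, Fraction(3, 2))   # L2 + (3/2) L1 -> L2
 add_multiple(M, 2, 0, Fraction(1))      # L3 + L1 -> L3
 add_multiple(M, 2, 1, Fraction(-4))     # L3 - 4 L2 -> L3
 
 # Back-substitution on the triangular system
 z = M[2][3] / M[2][2]
 y = (M[1][3] - M[1][2] * z) / M[1][1]
 x = (M[0][3] - M[0][1] * y - M[0][2] * z) / M[0][0]
 assert (x, y, z) == (2, 3, -1)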
History
The method of Gaussian elimination appears – albeit without proof – in the Chinese mathematical text Chapter Eight: ''Rectangular Arrays'' of ''The Nine Chapters on the Mathematical Art''. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC. It was commented on by Liu Hui in the 3rd century.
According to Grcar, the solution of linear equations by elimination was invented independently in several cultures in Eurasia, starting in antiquity, and in Europe definite examples of the procedure were published already by the late Renaissance (in the 1550s). It is quite possible that even then the procedure was considered elementary by mathematicians and in no need of explanation for professionals, so we may never learn its detailed history, except that by then it was practiced in at least several places in Europe.
The method in Europe stems from the notes of Isaac Newton. In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as ''Arithmetica Universalis'' in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century.
Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s as a result of confusion over the history of the subject.
Some authors use the term ''Gaussian elimination'' to refer only to the procedure until the matrix is in echelon form, and use the term Gauss–Jordan elimination to refer to the procedure which ends in reduced echelon form. The latter name is used because it is a variation of Gaussian elimination as described by Wilhelm Jordan in 1888. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently.
Applications
Historically, the first application of the row reduction method is for solving systems of linear equations. Below are some other important applications of the algorithm.
Computing determinants
To explain how Gaussian elimination allows the computation of the determinant of a square matrix, we have to recall how the elementary row operations change the determinant:
* Swapping two rows multiplies the determinant by −1,
* Multiplying a row by a nonzero scalar multiplies the determinant by the same scalar,
* Adding to one row a scalar multiple of another does not change the determinant.
If Gaussian elimination applied to a square matrix A produces a row echelon matrix B, let d be the product of the scalars by which the determinant has been multiplied, using the above rules. Then the determinant of A is the quotient by d of the product of the elements of the diagonal of B:
:$\det(A) = \frac{\prod \operatorname{diag}(B)}{d}.$
Computationally, for an n × n matrix, this method needs only O(n^3) arithmetic operations, while using the Leibniz formula for determinants requires O(n · n!) operations (number of summands in the formula times the number of multiplications in each summand), and recursive Laplace expansion requires O(n 2^n) operations if the sub-determinants are memorized for being computed only once (number of operations in a linear combination times the number of sub-determinants to compute, which are determined by their columns). Even on the fastest computers, these two methods are impractical for n above 20.
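As a sketch of this method (illustrative code, not from the original article; exact rational arithmetic, with row swaps as the only determinant-changing operations used):

 from fractions import Fraction
 
 def det_by_elimination(A):
     # Reduce a copy of A to row echelon form, tracking in d the factor
     # by which the row operations have multiplied the determinant.
     n = len(A)
     M = [[Fraction(x) for x in row] for row in A]
     d = Fraction(1)
     for k in range(n):
         p = next((i for i in range(k, n) if M[i][k] != 0), None)
         if p is None:
             return Fraction(0)          # no pivot available: determinant is 0
         if p != k:
             M[k], M[p] = M[p], M[k]
             d = -d                      # a swap multiplies the determinant by -1
         for i in range(k + 1, n):
             f = M[i][k] / M[k][k]
             for j in range(k, n):
                 M[i][j] -= f * M[k][j]  # type-3 operations leave it unchanged
     prod = Fraction(1)
     for k in range(n):
         prod *= M[k][k]
     return prod / d                     # det(A) = (product of diagonal of B) / d
 
 assert det_by_elimination([[2, -1, 0], [-1, 2, -1], [0, -1, 2]]) == 4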
Finding the inverse of a matrix
A variant of Gaussian elimination called Gauss–Jordan elimination can be used for finding the inverse of a matrix, if it exists. If A is an n × n square matrix, then one can use row reduction to compute its inverse matrix as follows. First, the n × n identity matrix is augmented to the right of A, forming an n × 2n block matrix [A | I]. Now through application of elementary row operations, find the reduced echelon form of this n × 2n matrix. The matrix A is invertible if and only if the left block can be reduced to the identity matrix I; in this case the right block of the final matrix is A^−1. If the algorithm is unable to reduce the left block to I, then A is not invertible.
For example, consider the following matrix:
:$A = \begin{bmatrix} 2 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 2 \end{bmatrix}.$
To find the inverse of this matrix, one takes the following matrix augmented by the identity and row-reduces it as a 3 × 6 matrix:
:$[A \mid I] = \left[\begin{array}{ccc|ccc} 2 & -1 & 0 & 1 & 0 & 0 \\ -1 & 2 & -1 & 0 & 1 & 0 \\ 0 & -1 & 2 & 0 & 0 & 1 \end{array}\right].$
By performing row operations, one can check that the reduced row echelon form of this augmented matrix is
:$[I \mid A^{-1}] = \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac34 & \tfrac12 & \tfrac14 \\ 0 & 1 & 0 & \tfrac12 & 1 & \tfrac12 \\ 0 & 0 & 1 & \tfrac14 & \tfrac12 & \tfrac34 \end{array}\right].$
One can think of each row operation as the left product by an elementary matrix. Denoting by B the product of these elementary matrices, we showed, on the left, that BA = I, and therefore B = A^−1. On the right, we kept a record of BI = B, which we know is the inverse desired. This procedure for finding the inverse works for square matrices of any size.
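A compact sketch of this procedure in Python (illustrative code with a hypothetical function name, using exact rational arithmetic):

 from fractions import Fraction
 
 def invert(A):
     # Gauss-Jordan elimination on the block matrix [A | I].
     # Returns the right block A^-1, or None when A is singular.
     n = len(A)
     M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
          for i, row in enumerate(A)]
     for k in range(n):
         p = next((i for i in range(k, n) if M[i][k] != 0), None)
         if p is None:
             return None                   # left block cannot be reduced to I
         M[k], M[p] = M[p], M[k]
         pivot = M[k][k]
         M[k] = [x / pivot for x in M[k]]  # make the leading coefficient 1
         for i in range(n):
             if i != k and M[i][k] != 0:
                 f = M[i][k]
                 M[i] = [x - f * y for x, y in zip(M[i], M[k])]
     return [row[n:] for row in M]         # right block of the final matrix
 
 Ainv = invert([[2, -1, 0], [-1, 2, -1], [0, -1, 2]])
 assert Ainv[0] == [Fraction(3, 4), Fraction(1, 2), Fraction(1, 4)]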
Computing ranks and bases
The Gaussian elimination algorithm can be applied to any m × n matrix A. In this way, for example, some 6 × 9 matrices can be transformed to a matrix that has a row echelon form like
:$T = \begin{bmatrix} a & * & * & * & * & * & * & * & * \\ 0 & 0 & b & * & * & * & * & * & * \\ 0 & 0 & 0 & c & * & * & * & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 & d & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix},$
where the stars are arbitrary entries, and a, b, c, d, e are nonzero entries. This echelon matrix T contains a wealth of information about A: the rank of A is 5, since there are 5 nonzero rows in T; the vector space spanned by the columns of A has a basis consisting of its columns 1, 3, 4, 7 and 9 (the columns with a, b, c, d, e in T), and the stars show how the other columns of A can be written as linear combinations of the basis columns.
All of this applies also to the reduced row echelon form, which is a particular row echelon format.
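The following sketch (illustrative code, with a hypothetical function name) row-reduces a copy of the matrix and reports the pivot columns; their number is the rank, and the corresponding columns of the original matrix form a basis of its column space:

 from fractions import Fraction
 
 def pivot_columns(A):
     m, n = len(A), len(A[0])
     M = [[Fraction(x) for x in row] for row in A]
     pivots, h = [], 0
     for k in range(n):
         p = next((i for i in range(h, m) if M[i][k] != 0), None)
         if p is None:
             continue                      # no pivot in this column
         M[h], M[p] = M[p], M[h]
         for i in range(h + 1, m):
             f = M[i][k] / M[h][k]
             M[i] = [x - f * y for x, y in zip(M[i], M[h])]
         pivots.append(k)
         h += 1
     return pivots
 
 assert pivot_columns([[1, 2, 3], [2, 4, 6], [1, 0, 1]]) == [0, 1]  # rank 2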
Computational efficiency
The number of arithmetic operations required to perform row reduction is one way of measuring the algorithm's computational efficiency. For example, to solve a system of n equations for n unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires n(n+1)/2 divisions, (2n^3+3n^2−5n)/6 multiplications, and (2n^3+3n^2−5n)/6 subtractions, for a total of approximately 2n^3/3 operations. Thus it has an ''arithmetic complexity'' (time complexity, where each arithmetic operation takes a unit of time, independently of the size of the inputs) of O(n^3).
This complexity is a good measure of the time needed for the whole computation when the time for each arithmetic operation is approximately constant. This is the case when the coefficients are represented by floating-point numbers or when they belong to a finite field. If the coefficients are integers or rational numbers exactly represented, the intermediate entries can grow exponentially large, so the bit complexity is exponential.
However, Bareiss' algorithm is a variant of Gaussian elimination that avoids this exponential growth of the intermediate entries; with the same arithmetic complexity of O(n^3), it has a bit complexity of O(n^5), and therefore has a strongly-polynomial time complexity.
Gaussian elimination and its variants can be used on computers for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods. Specific methods exist for systems whose coefficients follow a regular pattern (see system of linear equations).
Bareiss algorithm
The first
strongly-polynomial time algorithm for Gaussian elimination was published by
Jack Edmonds in 1967.
Independently, and almost simultaneously, Erwin Bareiss discovered another algorithm, based on the following remark, which applies to a division-free variant of Gaussian elimination.
In standard Gaussian elimination, one subtracts from each row R_i below the pivot row R_k a multiple of R_k by r_{i,k}/r_{k,k}, where r_{i,k} and r_{k,k} are the entries in the pivot column of R_i and R_k, respectively. Bareiss' variant consists, instead, of replacing R_i with
:$\frac{r_{k,k}\,R_i - r_{i,k}\,R_k}{r_{k-1,k-1}},$
where r_{k-1,k-1} denotes the pivot of the preceding step (and r_{0,0} = 1). This produces a row echelon form that has the same zero entries as with the standard Gaussian elimination.
Bareiss' main remark is that each matrix entry generated by this variant is the determinant of a submatrix of the original matrix.
In particular, if one starts with integer entries, the divisions occurring in the algorithm are exact divisions resulting in integers. So, all intermediate entries and final entries are integers. Moreover, Hadamard's inequality provides an upper bound on the absolute values of the intermediate and final entries, and thus a bit complexity of Õ(n^5), using soft O notation.
Moreover, as an upper bound on the size of final entries is known, a complexity Õ(n^4) can be obtained with modular computation followed either by Chinese remaindering or Hensel lifting.
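A minimal sketch of Bareiss elimination on an integer matrix (illustrative code; it does not handle a zero pivot, which would require row exchanges):

 def bareiss(A):
     # All divisions below are exact, so every intermediate entry stays an
     # integer (each one is a minor of the original matrix); for a square
     # matrix the last pivot equals the determinant.
     M = [row[:] for row in A]
     n = len(M)
     prev = 1                               # r_{k-1,k-1}, with r_{0,0} = 1
     for k in range(n - 1):
         assert M[k][k] != 0                # sketch: pivoting not handled
         for i in range(k + 1, n):
             for j in range(k + 1, n):
                 M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
             M[i][k] = 0
         prev = M[k][k]
     return M
 
 assert bareiss([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])[2][2] == -1   # det = -1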
As a corollary, the following problems can be solved in strongly polynomial time with the same bit complexity:
* Testing whether ''m'' given rational vectors are linearly independent
* Computing the determinant of a rational matrix
* Computing a solution of a rational equation system ''Ax'' = ''b''
* Computing the inverse matrix of a nonsingular rational matrix
* Computing the rank of a rational matrix
Numeric instability
One possible problem is
numerical instability, caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix, one would need to divide by that number. This means that any error which existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for
diagonally dominant or
positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable, when using
partial pivoting, even though there are examples of stable matrices for which it is unstable.
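The effect is easy to reproduce in double-precision floating point; the following sketch (illustrative, not from the original article) eliminates with a tiny pivot instead of swapping rows first:

 # Solve  eps*x + y = 1,  x + y = 2  with eps = 1e-20, without pivoting:
 eps = 1e-20
 f = 1.0 / eps                  # huge multiplier caused by the tiny pivot
 y = (2 - f * 1) / (1 - f * 1)  # rounds to exactly 1.0 in floating point
 x = (1 - y) / eps              # catastrophic cancellation: yields 0.0
 # The true solution is x ~ 1, y ~ 1; swapping the two rows first
 # (partial pivoting) recovers it: y ~ 1, then x = 2 - y ~ 1.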
Generalizations
Gaussian elimination can be performed over any
field, not just the real numbers.
Buchberger's algorithm is a generalization of Gaussian elimination to systems of polynomial equations. This generalization depends heavily on the notion of a monomial order. The choice of an ordering on the variables is already implicit in Gaussian elimination, manifesting as the choice to work from left to right when selecting pivot positions.
Computing the rank of a tensor of order greater than 2 is NP-hard. Therefore, if P ≠ NP, there cannot be a polynomial time analog of Gaussian elimination for higher-order tensors (matrices are array representations of order-2 tensors).
Pseudocode
As explained above, Gaussian elimination transforms a given m × n matrix A into a matrix in row-echelon form.
In the following pseudocode, A[i, j] denotes the entry of the matrix A in row i and column j, with the indices starting from 1. The transformation is performed ''in place'', meaning that the original matrix is lost, being eventually replaced by its row-echelon form.

 h := 1 /* ''Initialization of the pivot row'' */
 k := 1 /* ''Initialization of the pivot column'' */
 
 while h ≤ m and k ≤ n
     /* ''Find the k-th pivot:'' */
     i_max := argmax (i = h ... m, abs(A[i, k]))
     if A[i_max, k] = 0
         /* ''No pivot in this column, pass to next column'' */
         k := k + 1
     else
         swap rows(h, i_max)
         /* ''Do for all rows below pivot:'' */
         for i = h + 1 ... m:
             f := A[i, k] / A[h, k]
             /* ''Fill with zeros the lower part of pivot column:'' */
             A[i, k] := 0
             /* ''Do for all remaining elements in current row:'' */
             for j = k + 1 ... n:
                 A[i, j] := A[i, j] - A[h, j] * f
         /* ''Increase pivot row and column'' */
         h := h + 1
         k := k + 1
This algorithm differs slightly from the one discussed earlier, by choosing a pivot with the largest absolute value. Such ''partial pivoting'' may be required if, at the pivot place, the entry of the matrix is zero. In any case, choosing the largest possible absolute value of the pivot improves the numerical stability of the algorithm, when floating point is used for representing numbers.
Upon completion of this procedure the matrix will be in row echelon form and the corresponding system may be solved by back substitution.
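For reference, here is a direct Python translation of the pseudocode above (a sketch with a hypothetical function name, using 0-based indices; A is modified in place):

 def row_echelon(A, m, n):
     h, k = 0, 0                             # pivot row and pivot column
     while h < m and k < n:
         # Find the k-th pivot: the row with the largest absolute value in column k
         i_max = max(range(h, m), key=lambda i: abs(A[i][k]))
         if A[i_max][k] == 0:
             k += 1                          # no pivot in this column
         else:
             A[h], A[i_max] = A[i_max], A[h] # swap rows h and i_max
             for i in range(h + 1, m):
                 f = A[i][k] / A[h][k]
                 A[i][k] = 0                 # fill lower part of pivot column
                 for j in range(k + 1, n):
                     A[i][j] -= A[h][j] * f
             h += 1
             k += 1
 
 A = [[2.0, 1.0, -1.0, 8.0], [-3.0, -1.0, 2.0, -11.0], [-2.0, 1.0, 2.0, -3.0]]
 row_echelon(A, 3, 4)                        # A is now in row echelon form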
See also
* Fangcheng (mathematics)
* Gram–Schmidt process - another process for bringing a matrix into some canonical form.
* Fourier–Motzkin elimination - an algorithm for eliminating variables of a system of linear inequalities, rather than equations.