A linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$ is any vector of the form
$$c_1 \begin{bmatrix} 1 \\ 0 \\ 2 \end{bmatrix} + c_2 \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} c_1 \\ c_2 \\ 2c_1 \end{bmatrix}$$
The set of all such vectors is the column space of $A$. In this case, the column space is precisely the set of vectors $(x, y, z) \in \R^3$ satisfying the equation $z = 2x$ (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).

The rank of a matrix is the same as the dimension of the image of the corresponding matrix transformation. For example, a transformation $\R^4 \to \R^4$ described by a rank-three matrix maps all of $\R^4$ to some three-dimensional subspace.
The nullity of a matrix is the dimension of the null space, and is equal to the number of columns in the reduced row echelon form that do not have pivots. The rank and nullity of a matrix $A$ with $n$ columns are related by the equation:
:$\operatorname{rank}(A) + \operatorname{nullity}(A) = n.$
This is known as the rank–nullity theorem.

When $V$ is not an inner product space, the coimage of a linear transformation $T : V \to W$ can be defined as the quotient space $V/\ker(T)$.


In linear algebra, the column space (also called the range or image) of a matrix ''A'' is the span (set of all possible linear combinations) of its column vectors. The column space of a matrix is the image or range of the corresponding matrix transformation.
Let $\mathbb{F}$ be a field. The column space of an $m \times n$ matrix with components from $\mathbb{F}$ is a linear subspace of the $m$-space $\mathbb{F}^m$. The dimension of the column space is called the rank of the matrix and is at most $\min(m, n)$. (Linear algebra, as discussed in this article, is a very well-established mathematical discipline for which there are many sources; almost all of the material in this article can be found in Lay 2005, Meyer 2001, and Strang 2005.) A definition for matrices over a ring is also possible (see § For matrices over a ring below).
The row space is defined similarly.
The row space and the column space of a matrix $A$ are sometimes denoted as $C(A^\mathsf{T})$ and $C(A)$ respectively.
This article considers matrices of real numbers; the row and column spaces are subspaces of the real coordinate spaces $\R^n$ and $\R^m$ respectively.
Overview

Let $A$ be an $m$-by-$n$ matrix. Then:
# $\operatorname{rank}(A) = \dim(\operatorname{rowsp}(A)) = \dim(\operatorname{colsp}(A))$,
# $\operatorname{rank}(A)$ = number of pivots in any echelon form of $A$,
# $\operatorname{rank}(A)$ = the maximum number of linearly independent rows or columns of $A$.
If one considers the matrix as a linear transformation from $\mathbb{F}^n$ to $\mathbb{F}^m$, then the column space of the matrix equals the image of this linear transformation.
The column space of a matrix $A$ is the set of all linear combinations of the columns in $A$. If $A = [\mathbf{a}_1 \cdots \mathbf{a}_n]$, then $\operatorname{colsp}(A) = \operatorname{span}\{\mathbf{a}_1, \ldots, \mathbf{a}_n\}$.
The concept of row space generalizes to matrices over the field of complex numbers, or over any field.
Intuitively, given a matrix $A$, the action of the matrix $A$ on a vector $\mathbf{x}$ will return a linear combination of the columns of $A$ weighted by the coordinates of $\mathbf{x}$ as coefficients. Another way to look at this is that it will (1) first project $\mathbf{x}$ into the row space of $A$, (2) perform an invertible transformation, and (3) place the resulting vector in the column space of $A$. Thus the result $A\mathbf{x}$ must reside in the column space of $A$. See singular value decomposition for more details on this second interpretation.
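This column-combination view of the matrix–vector product can be sketched in a few lines of dependency-free Python (the matrix and vector here are illustrative choices, not taken from the article):

```python
def mat_vec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 0],
     [0, 1],
     [2, 0]]
x = [3, 5]

cols = list(zip(*A))                       # columns of A
combo = [x[0] * c0 + x[1] * c1             # x-weighted combination of columns
         for c0, c1 in zip(cols[0], cols[1])]

# The product A x equals the combination of A's columns weighted by x.
assert mat_vec(A, x) == combo == [3, 5, 6]
```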
Example

Given a matrix
:$J = \begin{bmatrix} 2 & 4 & 1 & 3 & 2 \\ -1 & -2 & 1 & 0 & 5 \\ 1 & 6 & 2 & 2 & 2 \\ 3 & 6 & 2 & 5 & 1 \end{bmatrix},$
the rows are $\mathbf{r}_1 = \begin{bmatrix} 2 & 4 & 1 & 3 & 2 \end{bmatrix}$, $\mathbf{r}_2 = \begin{bmatrix} -1 & -2 & 1 & 0 & 5 \end{bmatrix}$, $\mathbf{r}_3 = \begin{bmatrix} 1 & 6 & 2 & 2 & 2 \end{bmatrix}$, and $\mathbf{r}_4 = \begin{bmatrix} 3 & 6 & 2 & 5 & 1 \end{bmatrix}$. Consequently, the row space of $J$ is the subspace of $\R^5$ spanned by $\{\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \mathbf{r}_4\}$. Since these four row vectors are linearly independent, the row space is 4-dimensional. Moreover, in this case it can be seen that they are all orthogonal to the vector $\mathbf{n} = \begin{bmatrix} 6 & -1 & 4 & -4 & 0 \end{bmatrix}$, so it can be deduced that the row space consists of all vectors in $\R^5$ that are orthogonal to $\mathbf{n}$.
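The claims in this example can be checked numerically; the sketch below uses exact rational arithmetic with a hand-rolled `rank` helper, and takes $\mathbf{n} = (6, -1, 4, -4, 0)$ as the normal vector the orthogonality claim refers to:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

J = [[ 2,  4, 1, 3, 2],
     [-1, -2, 1, 0, 5],
     [ 1,  6, 2, 2, 2],
     [ 3,  6, 2, 5, 1]]
n = [6, -1, 4, -4, 0]      # assumed normal vector (verified below)

# Every row of J is orthogonal to n ...
assert all(sum(a * b for a, b in zip(row, n)) == 0 for row in J)
# ... and the four rows are linearly independent, so the row space is
# the full orthogonal complement of n in R^5.
assert rank(J) == 4
```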

Column space

Definition

Let $\mathbb{F}$ be a field of scalars. Let $A$ be an $m \times n$ matrix, with column vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n$. A linear combination of these vectors is any vector of the form
:$c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_n \mathbf{v}_n,$
where $c_1, c_2, \ldots, c_n$ are scalars. The set of all possible linear combinations of $\mathbf{v}_1, \ldots, \mathbf{v}_n$ is called the column space of $A$. That is, the column space of $A$ is the span of the vectors $\mathbf{v}_1, \ldots, \mathbf{v}_n$.
Any linear combination of the column vectors of a matrix $A$ can be written as the product of $A$ with a column vector:
:$\begin{align} A \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} &= \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} c_1 a_{11} + \cdots + c_n a_{1n} \\ \vdots \\ c_1 a_{m1} + \cdots + c_n a_{mn} \end{bmatrix} = c_1 \begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + \cdots + c_n \begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix} \\ &= c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n \end{align}$
Therefore, the column space of $A$ consists of all possible products $A\mathbf{x}$, for $\mathbf{x} \in \mathbb{F}^n$. This is the same as the image (or range) of the corresponding matrix transformation.
Example

If $A = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 2 & 0 \end{bmatrix}$, then the column vectors are $\mathbf{v}_1 = (1, 0, 2)^\mathsf{T}$ and $\mathbf{v}_2 = (0, 1, 0)^\mathsf{T}$. A linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$ is any vector of the form $c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2$; the set of all such vectors is the column space of $A$.

Basis

The columns of $A$ span the column space, but they may not form a basis if the column vectors are not linearly independent. Fortunately, elementary row operations do not affect the dependence relations between the column vectors. This makes it possible to use row reduction to find a basis for the column space. For example, consider the matrix
:$A = \begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix}.$
The columns of this matrix span the column space, but they may not be linearly independent, in which case some subset of them will form a basis. To find this basis, we reduce $A$ to reduced row echelon form:
:$\begin{bmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{bmatrix} \sim \begin{bmatrix} 1 & 3 & 1 & 4 \\ 0 & 1 & 1 & 1 \\ 0 & 2 & 2 & -3 \\ 0 & -1 & -1 & 4 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 1 \\ 0 & 1 & 1 & 1 \\ 0 & 0 & 0 & -5 \\ 0 & 0 & 0 & 5 \end{bmatrix} \sim \begin{bmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}.$
At this point, it is clear that the first, second, and fourth columns are linearly independent, while the third column is a linear combination of the first two.
(Specifically, $\mathbf{v}_3 = -2\mathbf{v}_1 + \mathbf{v}_2$.) Therefore, the first, second, and fourth columns of the original matrix are a basis for the column space:
:$\begin{bmatrix} 1 \\ 2 \\ 1 \\ 1 \end{bmatrix},\;\; \begin{bmatrix} 3 \\ 7 \\ 5 \\ 2 \end{bmatrix},\;\; \begin{bmatrix} 4 \\ 9 \\ 1 \\ 8 \end{bmatrix}.$
Note that the independent columns of the reduced row echelon form are precisely the columns with pivots. This makes it possible to determine which columns are linearly independent by reducing only to echelon form. The above algorithm can be used in general to find the dependence relations between any set of vectors, and to pick out a basis from any spanning set. Finding a basis for the column space of $A$ is also equivalent to finding a basis for the row space of the transpose matrix $A^\mathsf{T}$. To find the basis in a practical setting (e.g., for large matrices), the singular-value decomposition is typically used.
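The pivot-column selection above can be sketched with a short exact-arithmetic routine (`pivot_columns` is a hypothetical helper written for this sketch, not a library function):

```python
from fractions import Fraction

def pivot_columns(M):
    """Indices of pivot columns of M, found by row reduction over Q.
    (Hypothetical helper, not a library function.)"""
    M = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return pivots

A = [[1, 3, 1, 4],
     [2, 7, 3, 9],
     [1, 5, 3, 1],
     [1, 2, 0, 8]]

# Pivots sit in columns 1, 2 and 4 (0-based indices 0, 1, 3), so those
# columns of the *original* A form a basis for its column space.
assert pivot_columns(A) == [0, 1, 3]
```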

Dimension

The dimension of the column space is called the rank of the matrix. The rank is equal to the number of pivots in the reduced row echelon form, and is the maximum number of linearly independent columns that can be chosen from the matrix. For example, the 4 × 4 matrix in the example above has rank three. Because the column space is the image of the corresponding matrix transformation, the rank of a matrix is the same as the dimension of the image.

Relation to the left null space

The left null space of $A$ is the set of all vectors $\mathbf{x}$ such that $\mathbf{x}^\mathsf{T} A = \mathbf{0}^\mathsf{T}$. It is the same as the null space of the transpose of $A$. The product of the matrix $A^\mathsf{T}$ and the vector $\mathbf{x}$ can be written in terms of the dot product of vectors:
:$A^\mathsf{T}\mathbf{x} = \begin{bmatrix} \mathbf{v}_1 \cdot \mathbf{x} \\ \mathbf{v}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{v}_n \cdot \mathbf{x} \end{bmatrix},$
because row vectors of $A^\mathsf{T}$ are transposes of column vectors $\mathbf{v}_k$ of $A$. Thus $A^\mathsf{T}\mathbf{x} = \mathbf{0}$ if and only if $\mathbf{x}$ is orthogonal (perpendicular) to each of the column vectors of $A$. It follows that the left null space (the null space of $A^\mathsf{T}$) is the orthogonal complement to the column space of $A$. For a matrix $A$, the column space, row space, null space, and left null space are sometimes referred to as the ''four fundamental subspaces''.
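A small sketch of this orthogonality relation, using an illustrative 3 × 2 matrix (the matrix and the left-null vector below are assumed examples, not taken from the article):

```python
# For the illustrative matrix A, y = (-2, 0, 1) satisfies A^T y = 0,
# so y lies in the left null space and is orthogonal to every column of A.
A = [[1, 0],
     [0, 1],
     [2, 0]]
y = [-2, 0, 1]

cols = list(zip(*A))   # columns of A: (1, 0, 2) and (0, 1, 0)
assert [sum(yi * ci for yi, ci in zip(y, col)) for col in cols] == [0, 0]
```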

Similarly the column space (sometimes disambiguated as ''right'' column space) can be defined for matrices over a ring $R$ as
:$\sum_{k=1}^n \mathbf{v}_k c_k$
for any $c_1, \ldots, c_n$, with replacement of the vector $m$-space with a right free module, which changes the order of scalar multiplication of the vector $\mathbf{v}_k$ to the scalar $c_k$ such that it is written in the unusual order ''vector''–''scalar''. (This matters only if $R$ is not commutative.) Actually, this form is merely a matrix product $A\mathbf{c}$ of the matrix $A$ with the column vector $\mathbf{c}$, where the order of factors is ''preserved'', unlike the formula above.

Row space

Definition

Let $\mathbb{F}$ be a field of scalars. Let $A$ be an $m \times n$ matrix, with row vectors $\mathbf{r}_1, \mathbf{r}_2, \ldots, \mathbf{r}_m$. A linear combination of these vectors is any vector of the form
:$c_1 \mathbf{r}_1 + c_2 \mathbf{r}_2 + \cdots + c_m \mathbf{r}_m,$
where $c_1, c_2, \ldots, c_m$ are scalars. The set of all possible linear combinations of $\mathbf{r}_1, \ldots, \mathbf{r}_m$ is called the row space of $A$. That is, the row space of $A$ is the span of the vectors $\mathbf{r}_1, \ldots, \mathbf{r}_m$.
For example, if
:$A = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \end{bmatrix},$
then the row vectors are $\mathbf{r}_1 = \begin{bmatrix} 1 & 0 & 2 \end{bmatrix}$ and $\mathbf{r}_2 = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}$. A linear combination of $\mathbf{r}_1$ and $\mathbf{r}_2$ is any vector of the form
:$c_1 \begin{bmatrix} 1 & 0 & 2 \end{bmatrix} + c_2 \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} = \begin{bmatrix} c_1 & c_2 & 2c_1 \end{bmatrix}.$
The set of all such vectors is the row space of $A$. In this case, the row space is precisely the set of vectors $(x, y, z) \in \R^3$ satisfying the equation $z = 2x$ (using Cartesian coordinates, this set is a plane through the origin in three-dimensional space).
For a matrix that represents a homogeneous system of linear equations, the row space consists of all linear equations that follow from those in the system.
The column space of $A$ is equal to the row space of $A^\mathsf{T}$.
Basis

The row space is not affected by elementary row operations. This makes it possible to use row reduction to find a basis for the row space. For example, consider the matrix
:$A = \begin{bmatrix} 1 & 3 & 2 \\ 2 & 7 & 4 \\ 1 & 5 & 2 \end{bmatrix}.$
The rows of this matrix span the row space, but they may not be linearly independent, in which case the rows will not be a basis. To find a basis, we reduce $A$ to row echelon form, where $\mathbf{r}_1$, $\mathbf{r}_2$, $\mathbf{r}_3$ represent the rows:
:$\begin{align} \begin{bmatrix} 1 & 3 & 2 \\ 2 & 7 & 4 \\ 1 & 5 & 2 \end{bmatrix} &\xrightarrow{\mathbf{r}_2 \to \mathbf{r}_2 - 2\mathbf{r}_1} \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 0 \\ 1 & 5 & 2 \end{bmatrix} \xrightarrow{\mathbf{r}_3 \to \mathbf{r}_3 - \mathbf{r}_1} \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 0 \\ 0 & 2 & 0 \end{bmatrix} \\ &\xrightarrow{\mathbf{r}_3 \to \mathbf{r}_3 - 2\mathbf{r}_2} \begin{bmatrix} 1 & 3 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} \xrightarrow{\mathbf{r}_1 \to \mathbf{r}_1 - 3\mathbf{r}_2} \begin{bmatrix} 1 & 0 & 2 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}. \end{align}$
Once the matrix is in echelon form, the nonzero rows are a basis for the row space. In this case, the basis is $\left\{\begin{bmatrix} 1 & 3 & 2 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\right\}$. Another possible basis, $\left\{\begin{bmatrix} 1 & 0 & 2 \end{bmatrix}, \begin{bmatrix} 0 & 1 & 0 \end{bmatrix}\right\}$, comes from a further reduction. (The example is valid over the real numbers, the rational numbers, and other number fields; it is not necessarily correct over fields and rings with non-zero characteristic.) This algorithm can be used in general to find a basis for the span of a set of vectors.
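The reduction above can be reproduced programmatically; the sketch below computes the fully reduced row echelon form over exact rationals, so it produces the further-reduced basis $\{(1, 0, 2), (0, 1, 0)\}$:

```python
from fractions import Fraction

def rref(M):
    """Reduced row echelon form of M (list of rows) over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [a / M[r][c] for a in M[r]]          # normalize pivot row
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

A = [[1, 3, 2],
     [2, 7, 4],
     [1, 5, 2]]

R = rref(A)
nonzero = [row for row in R if any(row)]
# The nonzero rows of the reduced form are a basis for the row space.
assert nonzero == [[1, 0, 2], [0, 1, 0]]
```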
If the matrix is further simplified to reduced row echelon form, then the resulting basis is uniquely determined by the row space. It is sometimes convenient to find a basis for the row space from among the rows of the original matrix instead (for example, this result is useful in giving an elementary proof that the determinantal rank of a matrix is equal to its rank). Since row operations can affect linear dependence relations of the row vectors, such a basis is instead found indirectly using the fact that the column space of $A^\mathsf{T}$ is equal to the row space of $A$. Using the example matrix $A$ above, find $A^\mathsf{T}$ and reduce it to row echelon form:
:$A^\mathsf{T} = \begin{bmatrix} 1 & 2 & 1 \\ 3 & 7 & 5 \\ 2 & 4 & 2 \end{bmatrix} \sim \begin{bmatrix} 1 & 2 & 1 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}.$
The pivots indicate that the first two columns of $A^\mathsf{T}$ form a basis of the column space of $A^\mathsf{T}$. Therefore, the first two rows of $A$ (before any row reductions) also form a basis of the row space of $A$.
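The conclusion that the first two rows of $A$ form a basis can be double-checked directly: the third row is a linear combination of the first two, with coefficients $-3$ and $2$ (found by solving a small linear system by hand):

```python
# Rows of the example matrix A; the third is a combination of the first two.
r1, r2, r3 = [1, 3, 2], [2, 7, 4], [1, 5, 2]

combo = [-3 * a + 2 * b for a, b in zip(r1, r2)]
assert combo == r3   # r3 = -3*r1 + 2*r2, so {r1, r2} spans the row space
```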

Dimension

The dimension of the row space is called the rank of the matrix. This is the same as the maximum number of linearly independent rows that can be chosen from the matrix, or equivalently the number of pivots. For example, the 3 × 3 matrix in the example above has rank two. The rank of a matrix is also equal to the dimension of the column space. The dimension of the null space is called the nullity of the matrix, and is related to the rank by the following equation:
:$\operatorname{rank}(A) + \operatorname{nullity}(A) = n,$
where $n$ is the number of columns of the matrix $A$. The equation above is known as the rank–nullity theorem.
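A numeric check of the rank–nullity theorem on the 3 × 3 example matrix; the `rank` helper is a hand-rolled Gaussian elimination over exact rationals, and the null-space vector $(-2, 0, 1)$ is read off the reduced form:

```python
from fractions import Fraction

def rank(M):
    """Rank via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 3, 2],
     [2, 7, 4],
     [1, 5, 2]]
v = [-2, 0, 1]   # null-space vector read off the reduced form of A

assert all(sum(a * b for a, b in zip(row, v)) == 0 for row in A)  # A v = 0
assert rank(A) == 2          # rank
assert rank(A) + 1 == 3      # rank + nullity = n (nullity 1: span of v)
```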

Relation to the null space

The null space of matrix $A$ is the set of all vectors $\mathbf{x}$ for which $A\mathbf{x} = \mathbf{0}$. The product of the matrix $A$ and the vector $\mathbf{x}$ can be written in terms of the dot product of vectors:
:$A\mathbf{x} = \begin{bmatrix} \mathbf{r}_1 \cdot \mathbf{x} \\ \mathbf{r}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{r}_m \cdot \mathbf{x} \end{bmatrix},$
where $\mathbf{r}_1, \ldots, \mathbf{r}_m$ are the row vectors of $A$. Thus $A\mathbf{x} = \mathbf{0}$ if and only if $\mathbf{x}$ is orthogonal (perpendicular) to each of the row vectors of $A$. It follows that the null space of $A$ is the orthogonal complement to the row space. For example, if the row space is a plane through the origin in three dimensions, then the null space will be the perpendicular line through the origin. This provides a proof of the rank–nullity theorem (see dimension above). The row space and null space are two of the four fundamental subspaces associated with a matrix $A$ (the other two being the column space and left null space).

Relation to coimage

If $V$ and $W$ are vector spaces, then the kernel of a linear transformation $T : V \to W$ is the set of vectors $\mathbf{v} \in V$ for which $T(\mathbf{v}) = \mathbf{0}$. The kernel of a linear transformation is analogous to the null space of a matrix. If $V$ is an inner product space, then the orthogonal complement to the kernel can be thought of as a generalization of the row space. This is sometimes called the coimage of $T$. The transformation $T$ is one-to-one on its coimage, and the coimage maps isomorphically onto the image of $T$.

See also

* Euclidean subspace

References & Notes

Further reading

External links

* MIT Linear Algebra Lecture on the Four Fundamental Subspaces at Google Video, from MIT OpenCourseWare
* Khan Academy video tutorial
* Lecture on column space and nullspace by Gilbert Strang of MIT
