Rank factorization

In mathematics, given a field \mathbb F, nonnegative integers m,n, and a matrix A\in\mathbb F^{m\times n}, a rank decomposition or rank factorization of A is a factorization of A of the form A = CF, where C\in\mathbb F^{m\times r} and F\in\mathbb F^{r\times n}, and r = \operatorname{rank} A is the rank of A.


Existence

Every finite-dimensional matrix has a rank decomposition: Let A be an m\times n matrix whose column rank is r. Therefore, there are r linearly independent columns in A; equivalently, the dimension of the column space of A is r. Let \mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_r be any basis for the column space of A and place them as column vectors to form the m\times r matrix C = \begin{pmatrix}\mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_r\end{pmatrix}. Therefore, every column vector of A is a linear combination of the columns of C. To be precise, if A = \begin{pmatrix}\mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n\end{pmatrix} is an m\times n matrix with \mathbf{a}_j as the j-th column, then

:\mathbf{a}_j = f_{1j} \mathbf{c}_1 + f_{2j} \mathbf{c}_2 + \cdots + f_{rj} \mathbf{c}_r,

where the f_{ij} are the scalar coefficients of \mathbf{a}_j in terms of the basis \mathbf{c}_1, \mathbf{c}_2, \ldots, \mathbf{c}_r. This implies that A = CF, where f_{ij} is the (i,j)-th element of F.
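
The argument above is constructive and can be illustrated numerically. The following Python sketch (not part of the original article; it assumes NumPy and SymPy are available and, purely for concreteness, reuses the 4×4 example matrix that appears in the Construction section below) takes a basis of the column space of A as the columns of C and recovers the coefficients f_{ij} as the columns of F:

import numpy as np
from sympy import Matrix

# Example matrix (the same one used in the Construction section below).
A_rows = [[1, 3, 1, 4],
          [2, 7, 3, 9],
          [1, 5, 3, 1],
          [1, 2, 0, 8]]
A = np.array(A_rows, dtype=float)

# Any basis of the column space works; SymPy's columnspace() returns one.
basis = Matrix(A_rows).columnspace()                        # list of r column vectors
C = np.array(Matrix.hstack(*basis).tolist(), dtype=float)   # m x r

# The coefficients of each column a_j of A in this basis form the j-th
# column of F, obtained here by solving C f_j = a_j in the least-squares sense.
F = np.linalg.lstsq(C, A, rcond=None)[0]                    # r x n

assert np.allclose(C @ F, A)                                # A = C F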


Non-uniqueness

If A = C_1 F_1 is a rank factorization, taking C_2 = C_1 R and F_2 = R^{-1} F_1 gives another rank factorization for any invertible matrix R of compatible dimensions. Conversely, if A = F_1 G_1 = F_2 G_2 are two rank factorizations of A, then there exists an invertible matrix R such that F_1 = F_2 R and G_1 = R^{-1} G_2.
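
A minimal numerical illustration of this, assuming NumPy (the particular matrices are random and chosen only for the sketch):

import numpy as np

rng = np.random.default_rng(0)
C1 = rng.standard_normal((4, 2))        # full column rank with probability 1 (r = 2)
F1 = rng.standard_normal((2, 5))        # full row rank with probability 1
A = C1 @ F1                             # a rank-2 matrix by construction

R = rng.standard_normal((2, 2))         # a generic 2 x 2 matrix, invertible with probability 1
C2 = C1 @ R
F2 = np.linalg.inv(R) @ F1

assert np.allclose(C2 @ F2, A)          # another rank factorization of the same A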


Construction


Rank factorization from reduced row echelon forms

In practice, we can construct one specific rank factorization as follows: we can compute B, the reduced row echelon form of A. Then C is obtained by removing from A all non-pivot columns (which can be determined by looking for columns in B which do not contain a pivot), and F is obtained by eliminating any all-zero rows of B. Note: For a full-rank square matrix (i.e. when n=m=r), this procedure will yield the trivial result C=A and F=B=I_n (the n\times n identity matrix).
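
A sketch of this procedure in Python (illustrative only; it assumes NumPy and SymPy, and the helper name rank_factorization is ours, not standard):

import numpy as np
import sympy

def rank_factorization(A):
    """Return (C, F) with A = C F, where C consists of the pivot columns of A
    and F of the nonzero rows of the reduced row echelon form of A."""
    B, pivots = sympy.Matrix(A).rref()                        # RREF and pivot column indices
    C = np.asarray(A, dtype=float)[:, list(pivots)]           # keep only the pivot columns of A
    F = np.array(B.tolist(), dtype=float)[:len(pivots), :]    # drop the all-zero rows of B
    return C, F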


Example

Consider the matrix

: A = \begin{pmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{pmatrix} \sim \begin{pmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix} = B ,

where B is in reduced row echelon form. Then C is obtained by removing the third column of A, the only one which is not a pivot column, and F by getting rid of the last row of zeroes from B, so

: C = \begin{pmatrix} 1 & 3 & 4 \\ 2 & 7 & 9 \\ 1 & 5 & 1 \\ 1 & 2 & 8 \end{pmatrix}, \qquad F = \begin{pmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} .

It is straightforward to check that

: A = \begin{pmatrix} 1 & 3 & 1 & 4 \\ 2 & 7 & 3 & 9 \\ 1 & 5 & 3 & 1 \\ 1 & 2 & 0 & 8 \end{pmatrix} = \begin{pmatrix} 1 & 3 & 4 \\ 2 & 7 & 9 \\ 1 & 5 & 1 \\ 1 & 2 & 8 \end{pmatrix} \begin{pmatrix} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} = CF .
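
Running the rank_factorization sketch given above on this matrix reproduces the same factors (again assuming that hypothetical helper):

C, F = rank_factorization([[1, 3, 1, 4],
                           [2, 7, 3, 9],
                           [1, 5, 3, 1],
                           [1, 2, 0, 8]])
# C drops the third (non-pivot) column of A; F is B without its zero row.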


Proof

Let P be an n\times n permutation matrix such that AP = (C, D) in block partitioned form, where the columns of C are the r pivot columns of A. Every column of D is a linear combination of the columns of C, so there is a matrix G such that D = CG, where the columns of G contain the coefficients of each of those linear combinations. So AP = (C, CG) = C(I_r, G), I_r being the r\times r identity matrix. We will now show that (I_r, G) = FP. Transforming A into its reduced row echelon form B amounts to left-multiplying by a matrix E which is a product of elementary matrices, so EAP = BP = EC(I_r, G), where EC = \begin{pmatrix} I_r \\ 0 \end{pmatrix}. We can then write BP = \begin{pmatrix} I_r & G \\ 0 & 0 \end{pmatrix}, which allows us to identify (I_r, G) = FP, i.e. the r nonzero rows of the reduced echelon form, with the same permutation on the columns as we applied to A. We thus have AP = CFP, and since P is invertible this implies A = CF, and the proof is complete.


Singular value decomposition

If \mathbb F \in \{\mathbb R, \mathbb C\}, then one can also construct a full-rank factorization of A via a singular value decomposition

: A = U \Sigma V^* = \begin{pmatrix} U_1 & U_2 \end{pmatrix} \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} V_1^* \\ V_2^* \end{pmatrix} = U_1 \left(\Sigma_r V_1^*\right) .

Since U_1 is a full-column-rank matrix and \Sigma_r V_1^* is a full-row-rank matrix, we can take C = U_1 and F = \Sigma_r V_1^*.
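
A corresponding sketch with NumPy (illustrative; the rank is determined numerically by counting singular values above a small tolerance):

import numpy as np

A = np.array([[1, 3, 1, 4],
              [2, 7, 3, 9],
              [1, 5, 3, 1],
              [1, 2, 0, 8]], dtype=float)

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-12 * s[0]))       # numerical rank
C = U[:, :r]                            # U_1: orthonormal, hence full column rank
F = s[:r, None] * Vh[:r, :]             # Sigma_r V_1^*: full row rank

assert np.allclose(C @ F, A)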


Consequences


rank(A) = rank(A^T)

An immediate consequence of rank factorization is that the rank of A is equal to the rank of its transpose A^\textsf{T}. Since the columns of A are the rows of A^\textsf{T}, the column rank of A equals its row rank.

Proof: To see why this is true, let us first define rank to mean column rank. Since A = CF, it follows that A^\textsf{T} = F^\textsf{T}C^\textsf{T}. From the definition of matrix multiplication, this means that each column of A^\textsf{T} is a linear combination of the columns of F^\textsf{T}. Therefore, the column space of A^\textsf{T} is contained within the column space of F^\textsf{T} and, hence, \operatorname{rank}\left(A^\textsf{T}\right) \leq \operatorname{rank}\left(F^\textsf{T}\right). Now, F^\textsf{T} is n \times r, so there are r columns in F^\textsf{T} and, hence, \operatorname{rank}\left(A^\textsf{T}\right) \leq r = \operatorname{rank}\left(A\right). This proves that \operatorname{rank}\left(A^\textsf{T}\right) \leq \operatorname{rank}\left(A\right). Now apply the result to A^\textsf{T} to obtain the reverse inequality: since \left(A^\textsf{T}\right)^\textsf{T} = A, we can write \operatorname{rank}\left(A\right) = \operatorname{rank}\left(\left(A^\textsf{T}\right)^\textsf{T}\right) \leq \operatorname{rank}\left(A^\textsf{T}\right). This proves \operatorname{rank}\left(A\right) \leq \operatorname{rank}\left(A^\textsf{T}\right). We have, therefore, proved \operatorname{rank}\left(A^\textsf{T}\right) \leq \operatorname{rank}\left(A\right) and \operatorname{rank}\left(A\right) \leq \operatorname{rank}\left(A^\textsf{T}\right), so \operatorname{rank}\left(A\right) = \operatorname{rank}\left(A^\textsf{T}\right).
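
A quick numerical check of this consequence on the example matrix from the Construction section (illustrative, assuming NumPy):

import numpy as np

A = np.array([[1, 3, 1, 4],
              [2, 7, 3, 9],
              [1, 5, 3, 1],
              [1, 2, 0, 8]], dtype=float)

assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 3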

