In mathematics, and more specifically in linear algebra, a linear subspace, also known as a vector subspace, is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a ''subspace'' when the context serves to distinguish it from other types of subspaces. (The term ''linear subspace'' is sometimes also used for flats and affine subspaces. In the case of vector spaces over the reals, linear subspaces, flats, and affine subspaces are also called ''linear manifolds'', to emphasize that there are also manifolds.)


Definition

If ''V'' is a vector space over a field ''K'' and if ''W'' is a subset of ''V'', then ''W'' is a linear subspace of ''V'' if, under the operations of ''V'', ''W'' is itself a vector space over ''K''. Equivalently, a nonempty subset ''W'' is a subspace of ''V'' if, whenever ''w''1, ''w''2 are elements of ''W'' and ''a'', ''b'' are elements of ''K'', it follows that ''a''''w''1 + ''b''''w''2 is in ''W''. As a corollary, all vector spaces are equipped with at least two (possibly coinciding) linear subspaces: the zero vector space consisting of the zero vector alone and the entire vector space itself. These are called the trivial subspaces of the vector space.


Examples


Example I

Let the field ''K'' be the set R of real numbers, and let the vector space ''V'' be the real coordinate space R3. Take ''W'' to be the set of all vectors in ''V'' whose last component is 0. Then ''W'' is a subspace of ''V''.

''Proof:''
#Given u and v in ''W'', they can be expressed as u = (''u''1, ''u''2, 0) and v = (''v''1, ''v''2, 0). Then u + v = (''u''1 + ''v''1, ''u''2 + ''v''2, 0 + 0) = (''u''1 + ''v''1, ''u''2 + ''v''2, 0). Thus, u + v is an element of ''W'', too.
#Given u = (''u''1, ''u''2, 0) in ''W'' and a scalar ''c'' in R, then ''c''u = (''cu''1, ''cu''2, ''c''·0) = (''cu''1, ''cu''2, 0). Thus, ''c''u is an element of ''W'' too.
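The two closure conditions in this proof can also be checked numerically. Below is a minimal Python sketch (using NumPy, chosen here purely for illustration; the helper name is hypothetical) that verifies closure under addition and scalar multiplication for sample vectors of ''W''.

```python
import numpy as np

def in_W(v):
    """Membership test for W = {(x, y, 0)}: the last component must be zero."""
    return np.isclose(v[2], 0.0)

u = np.array([1.0, -2.0, 0.0])
v = np.array([4.0, 0.5, 0.0])
c = -3.0

print(in_W(u + v))   # True: W is closed under addition
print(in_W(c * u))   # True: W is closed under scalar multiplication
```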


Example II

Let the field be R again, but now let the vector space ''V'' be the Cartesian plane R2. Take ''W'' to be the set of points (''x'', ''y'') of R2 such that ''x'' = ''y''. Then ''W'' is a subspace of R2.

''Proof:''
#Let p = (''p''1, ''p''2) and q = (''q''1, ''q''2) be elements of ''W'', that is, points in the plane such that ''p''1 = ''p''2 and ''q''1 = ''q''2. Then p + q = (''p''1 + ''q''1, ''p''2 + ''q''2); since ''p''1 = ''p''2 and ''q''1 = ''q''2, then ''p''1 + ''q''1 = ''p''2 + ''q''2, so p + q is an element of ''W''.
#Let p = (''p''1, ''p''2) be an element of ''W'', that is, a point in the plane such that ''p''1 = ''p''2, and let ''c'' be a scalar in R. Then ''c''p = (''cp''1, ''cp''2); since ''p''1 = ''p''2, then ''cp''1 = ''cp''2, so ''c''p is an element of ''W''.

In general, any subset of the real coordinate space R''n'' that is defined by a system of homogeneous linear equations will yield a subspace. (The equation in example I was ''z'' = 0, and the equation in example II was ''x'' = ''y''.)


Example III

Again take the field to be R, but now let the vector space ''V'' be the set RR of all functions from R to R. Let C(R) be the subset consisting of continuous functions. Then C(R) is a subspace of RR.

''Proof:''
#We know from calculus that the zero function is continuous, so 0 ∈ C(R) ⊂ RR.
#We know from calculus that the sum of continuous functions is continuous.
#Again, we know from calculus that the product of a continuous function and a number is continuous.


Example IV

Keep the same field and vector space as before, but now consider the set Diff(R) of all differentiable functions. The same sort of argument as before shows that this is a subspace too. Examples that extend these themes are common in functional analysis.


Properties of subspaces

From the definition of vector spaces, it follows that subspaces are nonempty and are closed under sums and under scalar multiples. Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set ''W'' is a subspace if and only if every linear combination of finitely many elements of ''W'' also belongs to ''W''. It is also equivalent to consider only linear combinations of two elements at a time. In a topological vector space ''X'', a subspace ''W'' need not be topologically closed, but a finite-dimensional subspace is always closed. The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals).


Descriptions

Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an ''n''-space that passes through the origin.

A natural description of a 1-subspace is the scalar multiplication of one non-zero vector v by all possible scalar values. 1-subspaces specified by two vectors are equal if and only if one vector can be obtained from the other by scalar multiplication:
:\exists c\in K: \mathbf{v}' = c\mathbf{v}\ \text{(or, equivalently, } \mathbf{v} = \frac{1}{c}\mathbf{v}'\text{)}.
This idea is generalized to higher dimensions with linear span, but criteria for equality of ''k''-spaces specified by sets of ''k'' vectors are not so simple.

A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional F specifies its kernel subspace F = 0 of codimension 1. Subspaces of codimension 1 specified by two linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication (in the dual space):
:\exists c\in K: \mathbf{F}' = c\mathbf{F}\ \text{(or, equivalently, } \mathbf{F} = \frac{1}{c}\mathbf{F}'\text{)}.
It is generalized for higher codimensions with a system of equations. The following two subsections present this latter description in detail, and the remaining four subsections further describe the idea of linear span.
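As an illustration of the 1-subspace criterion above, the following Python sketch (NumPy; the helper name is illustrative) checks whether two non-zero vectors span the same line by testing whether the matrix formed from them has rank 1.

```python
import numpy as np

def same_line(v, w):
    """Non-zero v and w span the same 1-subspace iff one is a scalar multiple
    of the other, i.e. the 2 x n matrix [v; w] has rank 1."""
    return np.linalg.matrix_rank(np.vstack([v, w])) == 1

print(same_line([1, 2, -1], [-2, -4, 2]))   # True: the second vector is -2 times the first
print(same_line([1, 2, -1], [1, 2, 0]))     # False: they span different lines
```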


Systems of linear equations

The solution set to any homogeneous system of linear equations with ''n'' variables is a subspace in the coordinate space ''K''''n'':
:\left\{ (x_1, x_2, \ldots, x_n) \in K^n : a_{i1}x_1 + a_{i2}x_2 + \cdots + a_{in}x_n = 0 \ \text{ for } i = 1, \ldots, m \right\}.
For example, the set of all vectors (''x'', ''y'', ''z'') (over real or rational numbers) satisfying the equations x + 3y + 2z = 0 \quad\text{and}\quad 2x - 4y + 5z = 0 is a one-dimensional subspace. More generally, given a set of ''n'' independent linear functions, the dimension of the subspace in ''K''''k'' will be the dimension of the null space of ''A'', the composite matrix of the ''n'' functions.


Null space of a matrix

In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation:
:A\mathbf{x} = \mathbf{0}.
The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix
:A = \begin{bmatrix} 1 & 3 & 2 \\ 2 & -4 & 5 \end{bmatrix}.
Every subspace of ''K''''n'' can be described as the null space of some matrix (see below for more).
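The same null space can be found with a computer algebra system. A minimal Python sketch using SymPy (the library choice is an assumption, not part of the original text):

```python
import sympy as sp

# The matrix from the example above.
A = sp.Matrix([[1,  3, 2],
               [2, -4, 5]])

basis = A.nullspace()   # a list of column vectors spanning the null space
print(basis)            # one basis vector, proportional to (-23, 1, 10)
print(len(basis))       # 1, so the subspace is one-dimensional
```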


Linear parametric equations

The subset of ''K''''n'' described by a system of homogeneous linear parametric equations is a subspace:
:\left\{ (x_1, x_2, \ldots, x_n) \in K^n : x_i = a_{i1}t_1 + a_{i2}t_2 + \cdots + a_{im}t_m \ \text{ for some } t_1, \ldots, t_m \in K \right\}.
For example, the set of all vectors (''x'', ''y'', ''z'') parameterized by the equations
:x = 2t_1 + 3t_2,\;\;\;\;y = 5t_1 - 4t_2,\;\;\;\;\text{and}\;\;\;\;z = -t_1 + 2t_2
is a two-dimensional subspace of ''K''3, if ''K'' is a number field (such as real or rational numbers). (Generally, ''K'' can be any field of such characteristic that the given integer matrix has the appropriate rank in it. All fields contain the integers, but some integers may become equal to zero in fields of positive characteristic.)


Span of vectors

In linear algebra, the system of parametric equations can be written as a single vector equation:
:\begin{bmatrix} x \\ y \\ z \end{bmatrix} \;=\; t_1 \begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix} + t_2 \begin{bmatrix} 3 \\ -4 \\ 2 \end{bmatrix}.
The expression on the right is called a linear combination of the vectors (2, 5, −1) and (3, −4, 2). These two vectors are said to span the resulting subspace.

In general, a linear combination of vectors v1, v2, ... , v''k'' is any vector of the form
:t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k.
The set of all possible linear combinations is called the span:
:\text{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \left\{ t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k : t_1, \ldots, t_k \in K \right\}.
If the vectors v1, ... , v''k'' have ''n'' components, then their span is a subspace of ''K''''n''. Geometrically, the span is the flat through the origin in ''n''-dimensional space determined by the points v1, ... , v''k''.

; Example
: The ''xz''-plane in R3 can be parameterized by the equations
::x = t_1, \;\;\; y = 0, \;\;\; z = t_2.
:As a subspace, the ''xz''-plane is spanned by the vectors (1, 0, 0) and (0, 0, 1). Every vector in the ''xz''-plane can be written as a linear combination of these two:
::(t_1, 0, t_2) = t_1(1,0,0) + t_2(0,0,1).
:Geometrically, this corresponds to the fact that every point on the ''xz''-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).
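The dimension of a span, and whether a given vector belongs to it, can both be read off from matrix ranks. A minimal Python sketch (NumPy; names are illustrative), using the two spanning vectors from the example above:

```python
import numpy as np

v1 = np.array([2, 5, -1])
v2 = np.array([3, -4, 2])

# The dimension of Span{v1, v2} is the rank of the matrix with v1, v2 as rows.
print(np.linalg.matrix_rank(np.vstack([v1, v2])))      # 2

# A vector w lies in the span iff adding it as a row does not increase the rank.
w = 3 * v1 - 2 * v2
print(np.linalg.matrix_rank(np.vstack([v1, v2, w])))   # still 2, so w is in the span
```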


Column space and row space

A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation:
:\mathbf{x} = A\mathbf{t}\;\;\;\;\text{where}\;\;\;\;A = \begin{bmatrix} 2 & 3 \\ 5 & -4 \\ -1 & 2 \end{bmatrix}.
In this case, the subspace consists of all possible values of the vector x. In linear algebra, this subspace is known as the column space (or image) of the matrix ''A''. It is precisely the subspace of ''K''''n'' spanned by the column vectors of ''A''. The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).
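The relationship just stated, that the row space is the orthogonal complement of the null space, can be checked directly: every row-space vector has zero dot product with every null-space vector. A short Python sketch with SymPy (library choice is illustrative):

```python
import sympy as sp

A = sp.Matrix([[1,  3, 2],
               [2, -4, 5]])

row_basis  = A.rowspace()     # basis vectors for the row space
null_basis = A.nullspace()    # basis vectors for the null space

# Each row-space basis vector is orthogonal to each null-space basis vector.
for r in row_basis:
    for n in null_basis:
        print(r.dot(n))       # 0 in every case
```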


Independence, basis, and dimension

In general, a subspace of ''K''''n'' determined by ''k'' parameters (or spanned by ''k'' vectors) has dimension ''k''. However, there are exceptions to this rule. For example, the subspace of ''K''3 spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the ''xz''-plane, with each point on the plane described by infinitely many different values of ''t''1, ''t''2, ''t''3.

In general, vectors v1, ... , v''k'' are called linearly independent if
:t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k \;\ne\; u_1 \mathbf{v}_1 + \cdots + u_k \mathbf{v}_k
for (''t''1, ''t''2, ... , ''tk'') ≠ (''u''1, ''u''2, ... , ''uk''). (This definition is often stated differently: vectors v1, ..., v''k'' are linearly independent if t_1 \mathbf{v}_1 + \cdots + t_k \mathbf{v}_k \ne \mathbf{0} for (''t''1, ..., ''tk'') ≠ (0, ..., 0). The two definitions are equivalent.) If v1, ... , v''k'' are linearly independent, then the coordinates ''t''1, ..., ''tk'' for a vector in the span are uniquely determined.

A basis for a subspace ''S'' is a set of linearly independent vectors whose span is ''S''. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more).

; Example
: Let ''S'' be the subspace of R4 defined by the equations
::x_1 = 2 x_2\;\;\;\;\text{and}\;\;\;\;x_3 = 5x_4.
:Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for ''S''. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors:
::(2t_1, t_1, 5t_2, t_2) = t_1(2, 1, 0, 0) + t_2(0, 0, 5, 1).
:The subspace ''S'' is two-dimensional. Geometrically, it is the plane in R4 passing through the points (0, 0, 0, 0), (2, 1, 0, 0), and (0, 0, 5, 1).
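Linear independence of a finite list of vectors can be tested by comparing the rank of the matrix they form with the number of vectors. A minimal Python sketch (NumPy; the helper name is illustrative), using the vectors mentioned in this section:

```python
import numpy as np

def linearly_independent(vectors):
    """k vectors are linearly independent iff the k x n matrix they form has rank k."""
    M = np.vstack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(linearly_independent([(1, 0, 0), (0, 0, 1), (2, 0, 3)]))   # False: they span only the xz-plane
print(linearly_independent([(2, 1, 0, 0), (0, 0, 5, 1)]))        # True: a basis for the subspace S above
```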


Operations and relations on subspaces


Inclusion

The set-theoretical inclusion binary relation specifies a partial order on the set of all subspaces (of any dimension). A subspace cannot lie in any subspace of lesser dimension. If dim ''U'' = ''k'', a finite number, and ''U'' ⊂ ''W'', then dim ''W'' = ''k'' if and only if ''U'' = ''W''.


Intersection

Given subspaces ''U'' and ''W'' of a vector space ''V'', their intersection ''U'' ∩ ''W'' := {v ∈ ''V'' : v is an element of both ''U'' and ''W''} is also a subspace of ''V''.

''Proof:''
# Let v and w be elements of ''U'' ∩ ''W''. Then v and w belong to both ''U'' and ''W''. Because ''U'' is a subspace, v + w belongs to ''U''. Similarly, since ''W'' is a subspace, v + w belongs to ''W''. Thus, v + w belongs to ''U'' ∩ ''W''.
# Let v belong to ''U'' ∩ ''W'', and let ''c'' be a scalar. Then v belongs to both ''U'' and ''W''. Since ''U'' and ''W'' are subspaces, ''c''v belongs to both ''U'' and ''W''.
# Since ''U'' and ''W'' are vector spaces, 0 belongs to both sets. Thus, 0 belongs to ''U'' ∩ ''W''.

For every vector space ''V'', the set {0} and ''V'' itself are subspaces of ''V''.


Sum

If ''U'' and ''W'' are subspaces, their sum is the subspace
:U + W = \left\{ \mathbf{u} + \mathbf{w} : \mathbf{u} \in U, \mathbf{w} \in W \right\}.
For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality
:\max(\dim U,\dim W) \leq \dim(U + W) \leq \dim(U) + \dim(W).
Here, the lower bound is attained only if one subspace is contained in the other, while the upper bound is the most general case and is attained exactly when the intersection is trivial. The dimension of the intersection and the sum are related by the following equation:
:\dim(U+W) = \dim(U) + \dim(W) - \dim(U \cap W).
A set of subspaces is independent when the only intersection between any pair of subspaces is the trivial subspace. The direct sum is the sum of independent subspaces, written as U \oplus W. An equivalent restatement is that a direct sum is a subspace sum under the condition that every subspace contributes to the span of the sum. The dimension of a direct sum U \oplus W follows from the formula above, since the dimension of the trivial intersection is zero:
:\dim (U \oplus W) = \dim (U) + \dim (W).
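The dimension formula can be checked on a concrete pair of subspaces. In the Python sketch below (NumPy; names are illustrative), ''U'' is the ''xy''-plane and ''W'' is the ''yz''-plane in R3; their sum is all of R3 and their intersection is the ''y''-axis, so the formula reads 3 = 2 + 2 − 1.

```python
import numpy as np

U = np.array([[1, 0, 0], [0, 1, 0]])   # basis of the xy-plane (as rows)
W = np.array([[0, 1, 0], [0, 0, 1]])   # basis of the yz-plane (as rows)

dim_U = np.linalg.matrix_rank(U)                     # 2
dim_W = np.linalg.matrix_rank(W)                     # 2
dim_sum = np.linalg.matrix_rank(np.vstack([U, W]))   # 3: the stacked bases span U + W

# dim(U ∩ W) as predicted by the formula; here the intersection is the y-axis.
print(dim_U + dim_W - dim_sum)   # 1
```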


Lattice of subspaces

The operations intersection and sum make the set of all subspaces a bounded modular lattice, where the {0} subspace, the least element, is an identity element of the sum operation, and the identical subspace ''V'', the greatest element, is an identity element of the intersection operation.


Orthogonal complements

If V is an inner product space and N is a subset of V, then the orthogonal complement of N, denoted N^{\perp}, is again a subspace. If V is finite-dimensional and N is a subspace, then the dimensions of N and N^{\perp} satisfy the complementary relationship \dim (N) + \dim (N^{\perp}) = \dim (V). Moreover, no nonzero vector is orthogonal to itself, so N \cap N^{\perp} = \{0\} and V is the direct sum of N and N^{\perp}. Applying orthogonal complements twice returns the original subspace: (N^{\perp})^{\perp} = N for every subspace N.

This operation, understood as negation (\neg), makes the lattice of subspaces a (possibly infinite) orthocomplemented lattice (although not a distributive lattice).

In spaces with other bilinear forms, some but not all of these results still hold. In pseudo-Euclidean spaces and symplectic vector spaces, for example, orthogonal complements exist. However, these spaces may have null vectors that are orthogonal to themselves, and consequently there exist subspaces N such that N \cap N^{\perp} \ne \{0\}. As a result, this operation does not turn the lattice of subspaces into a Boolean algebra (nor a Heyting algebra).
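In R''n'' with the standard inner product, the orthogonal complement of a subspace spanned by given vectors can be computed as the null space of the matrix whose rows are those vectors. A minimal Python sketch (SymPy; names are illustrative):

```python
import sympy as sp

# N is the subspace of R^4 spanned by the rows of this matrix.
N = sp.Matrix([[2, 1, 0, 0],
               [0, 0, 5, 1]])

perp = N.nullspace()   # basis of the orthogonal complement of N
print(len(perp))       # 2, so dim N + dim N^perp = 2 + 2 = dim R^4
for v in perp:
    print(N * v)       # zero vector: v is orthogonal to every row of N
```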


Algorithms

Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:
# The reduced matrix has the same null space as the original.
# Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
# Row reduction does not affect the linear dependence of the column vectors.


Basis for a row space

:Input An ''m'' × ''n'' matrix ''A''.
:Output A basis for the row space of ''A''.
:# Use elementary row operations to put ''A'' into row echelon form.
:# The nonzero rows of the echelon form are a basis for the row space of ''A''.
See the article on row space for an example.

If we instead put the matrix ''A'' into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of ''K''''n'' are equal.
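A minimal Python sketch of this procedure, using SymPy's exact row reduction (the library choice and matrix are illustrative):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

R, _ = A.rref()                                            # reduced row echelon form
basis = [R[i, :] for i in range(R.rows) if any(R[i, :])]   # keep the nonzero rows
print(basis)   # two rows, so the row space of A is two-dimensional
```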


Subspace membership

:Input A basis {b1, ... , b''k''} for a subspace ''S'' of ''K''''n'', and a vector v with ''n'' components.
:Output Determines whether v is an element of ''S''.
:# Create a (''k'' + 1) × ''n'' matrix ''A'' whose rows are the vectors b1, ... , b''k'' and v.
:# Use elementary row operations to put ''A'' into row echelon form.
:# If the echelon form has a row of zeroes, then the vectors b1, ... , b''k'', v are linearly dependent, and therefore v is an element of ''S''.
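Equivalently, v lies in ''S'' exactly when appending v to the basis does not increase the rank. A minimal Python sketch (SymPy; the helper name is illustrative):

```python
import sympy as sp

def in_subspace(basis, v):
    """basis: list of basis vectors of S; v: candidate vector.
    v is in S iff stacking v onto the basis leaves the rank unchanged."""
    B = sp.Matrix(basis)
    Bv = sp.Matrix(basis + [v])
    return Bv.rank() == B.rank()

S_basis = [[2, 1, 0, 0], [0, 0, 5, 1]]
print(in_subspace(S_basis, [4, 2, 5, 1]))   # True:  (4,2,5,1) = 2*(2,1,0,0) + (0,0,5,1)
print(in_subspace(S_basis, [1, 0, 0, 0]))   # False
```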


Basis for a column space

:Input An ''m'' × ''n'' matrix ''A''.
:Output A basis for the column space of ''A''.
:# Use elementary row operations to put ''A'' into row echelon form.
:# Determine which columns of the echelon form have pivots. The corresponding columns of the original matrix are a basis for the column space.
See the article on column space for an example.

This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
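A minimal Python sketch of this procedure (SymPy; the matrix is illustrative): the pivot columns reported by `rref` index the columns of the original matrix that form a basis.

```python
import sympy as sp

A = sp.Matrix([[1, 3, 2],
               [2, 6, 5],
               [0, 0, 1]])

_, pivots = A.rref()                  # indices of the pivot columns
basis = [A[:, j] for j in pivots]     # corresponding columns of the ORIGINAL matrix
print(pivots)                         # (0, 2): the first and third columns of A form a basis
```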


Coordinates for a vector

:Input A basis {b1, ... , b''k''} for a subspace ''S'' of ''K''''n'', and a vector v in ''S''.
:Output Numbers ''t''1, ''t''2, ..., ''t''''k'' such that v = ''t''1b1 + ··· + ''t''''k''b''k''.
:# Create an augmented matrix ''A'' whose columns are b1,...,b''k'', with the last column being v.
:# Use elementary row operations to put ''A'' into reduced row echelon form.
:# Express the final column of the reduced echelon form as a linear combination of the first ''k'' columns. The coefficients used are the desired numbers ''t''1, ''t''2, ..., ''t''''k''. (These should be precisely the first ''k'' entries in the final column of the reduced echelon form.)
If the final column of the reduced row echelon form contains a pivot, then the input vector v does not lie in ''S''.
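A minimal Python sketch of this procedure (SymPy; names are illustrative), using the basis of the subspace ''S'' from the earlier example:

```python
import sympy as sp

b1 = sp.Matrix([2, 1, 0, 0])
b2 = sp.Matrix([0, 0, 5, 1])
v  = sp.Matrix([6, 3, 10, 2])          # equals 3*b1 + 2*b2, so v lies in S

A = sp.Matrix.hstack(b1, b2, v)        # augmented matrix [b1 | b2 | v]
R, pivots = A.rref()

if A.cols - 1 in pivots:               # pivot in the last column: v is not in S
    print("v is not in S")
else:
    coords = R[:len(pivots), -1]       # first k entries of the final column
    print(coords)                      # Matrix([[3], [2]])
```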


Basis for a null space

:Input An ''m'' × ''n'' matrix ''A''.
:Output A basis for the null space of ''A''.
:# Use elementary row operations to put ''A'' in reduced row echelon form.
:# Using the reduced row echelon form, determine which of the variables ''x''1, ''x''2, ..., ''x''''n'' are free. Write equations for the dependent variables in terms of the free variables.
:# For each free variable ''x''''i'', choose a vector in the null space for which ''x''''i'' = 1 and the remaining free variables are zero.
The resulting collection of vectors is a basis for the null space of ''A''. See the article on null space for an example.
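A minimal Python sketch of this construction (SymPy; names are illustrative). It builds one basis vector per free variable by back-substituting from the reduced row echelon form; the result spans the same null space as SymPy's built-in `nullspace()`.

```python
import sympy as sp

A = sp.Matrix([[1,  3, 2],
               [2, -4, 5]])

R, pivots = A.rref()
n = A.cols
free = [j for j in range(n) if j not in pivots]

basis = []
for f in free:
    x = sp.zeros(n, 1)
    x[f] = 1                   # this free variable is 1, the other free variables are 0
    for row, p in enumerate(pivots):
        x[p] = -R[row, f]      # back-substitute the dependent (pivot) variables
    basis.append(x)

print(basis)
print(A * basis[0])            # zero vector: each basis vector solves A x = 0
```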


Basis for the sum and intersection of two subspaces

Given two subspaces ''U'' and ''W'' of ''V'', a basis of the sum U + W and of the intersection U \cap W can be calculated using the Zassenhaus algorithm, sketched below.
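A hedged Python sketch of the Zassenhaus algorithm (SymPy; `zassenhaus` is a helper written here, not a library function). The bases of ''U'' and ''W'' are written side by side into a block matrix, which is then row-reduced; rows whose left half is nonzero yield a basis of the sum, and rows whose left half is zero carry a basis of the intersection in their right half.

```python
import sympy as sp

def zassenhaus(U_basis, W_basis):
    """U_basis, W_basis: lists of equal-length vectors spanning U and W.
    Returns (basis of U + W, basis of the intersection of U and W)."""
    n = len(U_basis[0])
    rows = [list(u) + list(u) for u in U_basis] + \
           [list(w) + [0] * n for w in W_basis]
    M, _ = sp.Matrix(rows).rref()           # row reduce the 2n-column block matrix
    sum_basis, int_basis = [], []
    for i in range(M.rows):
        left, right = M[i, :n], M[i, n:]
        if any(left):
            sum_basis.append(left)          # left half contributes to a basis of U + W
        elif any(right):
            int_basis.append(right)         # right half contributes to a basis of the intersection
    return sum_basis, int_basis

# Example: the xy-plane and the yz-plane in R^3.
U = [(1, 0, 0), (0, 1, 0)]
W = [(0, 1, 0), (0, 0, 1)]
s, i = zassenhaus(U, W)
print(len(s), len(i))   # 3 1: the sum is all of R^3, the intersection is the y-axis
```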


Equations for a subspace

:Input A basis {b1, ... , b''k''} for a subspace ''S'' of ''K''''n''.
:Output An (''n'' − ''k'') × ''n'' matrix whose null space is ''S''.
:# Create a matrix ''A'' whose rows are b1, ... , b''k''.
:# Use elementary row operations to put ''A'' into reduced row echelon form.
:# Let c1, c2, ..., c''n'' be the columns of the reduced row echelon form. For each column without a pivot, write an equation expressing the column as a linear combination of the columns with pivots.
:# This results in a homogeneous system of ''n'' − ''k'' linear equations involving the variables c1,...,c''n''. The (''n'' − ''k'') × ''n'' matrix corresponding to this system is the desired matrix with nullspace ''S''.
; Example
:If the reduced row echelon form of ''A'' is
::\begin{bmatrix} 1 & 0 & -3 & 0 & 2 & 0 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}
:then the column vectors c1, ..., c6 satisfy the equations
:: \begin{align} \mathbf{c}_3 &= -3\mathbf{c}_1 + 5\mathbf{c}_2 \\ \mathbf{c}_5 &= 2\mathbf{c}_1 - \mathbf{c}_2 + 7\mathbf{c}_4 \\ \mathbf{c}_6 &= 4\mathbf{c}_2 - 9\mathbf{c}_4 \end{align}
:It follows that the row vectors of ''A'' satisfy the equations
:: \begin{align} x_3 &= -3x_1 + 5x_2 \\ x_5 &= 2x_1 - x_2 + 7x_4 \\ x_6 &= 4x_2 - 9x_4. \end{align}
:In particular, the row vectors of ''A'' are a basis for the null space of the corresponding matrix.
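A minimal Python sketch of this procedure (SymPy; all names are illustrative). Starting from a matrix ''B'' whose rows span ''S'', it builds one equation per pivot-free column of the reduced row echelon form and then verifies that every row of ''B'' satisfies those equations.

```python
import sympy as sp

# Rows spanning a subspace S of K^6 (any spanning set would do).
B = sp.Matrix([[1, 0, -3, 0, 2, 0],
               [0, 1, 5, 0, -1, 4],
               [0, 0, 0, 1, 7, -9]])

R, pivots = B.rref()
n = B.cols
free = [j for j in range(n) if j not in pivots]

rows = []
for j in free:                          # one equation per column without a pivot
    eq = [0] * n
    eq[j] = 1
    for row, p in enumerate(pivots):
        eq[p] = -R[row, j]              # column j is a combination of the pivot columns
    rows.append(eq)

E = sp.Matrix(rows)                     # (n - k) x n matrix whose null space is S
print(E * B.T)                          # zero matrix: every row of B satisfies the equations
```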


See also

* Cyclic subspace
* Invariant subspace
* Multilinear subspace learning
* Quotient space (linear algebra)
* Signal subspace
* Subspace topology

