
In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the linear subspace of the domain of the map which is mapped to the zero vector. That is, given a linear map L : V \to W between two vector spaces ''V'' and ''W'', the kernel of ''L'' is the vector space of all elements \mathbf{v} of ''V'' such that L(\mathbf{v}) = \mathbf{0}, where \mathbf{0} denotes the zero vector in ''W'', or more symbolically:
:\ker(L) = \left\{ \mathbf{v} \in V : L(\mathbf{v}) = \mathbf{0} \right\}.


Properties

The kernel of ''L'' is a linear subspace of the domain ''V''. In the linear map L : V \to W, two elements of ''V'' have the same image in ''W'' if and only if their difference lies in the kernel of ''L'', that is,
:L\left(\mathbf{v}_1\right) = L\left(\mathbf{v}_2\right) \quad \text{if and only if} \quad L\left(\mathbf{v}_1-\mathbf{v}_2\right) = \mathbf{0}.
From this, it follows that the image of ''L'' is isomorphic to the quotient of ''V'' by the kernel:
:\operatorname{im}(L) \cong V / \ker(L).
In the case where ''V'' is finite-dimensional, this implies the rank–nullity theorem:
:\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V),
where the term ''rank'' refers to the dimension of the image of ''L'', \dim(\operatorname{im} L), while ''nullity'' refers to the dimension of the kernel of ''L'', \dim(\ker L). That is,
:\operatorname{rank}(L) = \dim(\operatorname{im} L) \qquad \text{and} \qquad \operatorname{nullity}(L) = \dim(\ker L),
so that the rank–nullity theorem can be restated as
:\operatorname{rank}(L) + \operatorname{nullity}(L) = \dim(V).
When ''V'' is an inner product space, the quotient V / \ker(L) can be identified with the orthogonal complement in ''V'' of \ker(L). This is the generalization to linear operators of the row space, or coimage, of a matrix.
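As a concrete check of the rank–nullity theorem, here is a minimal Python sketch (assuming NumPy and SciPy, which are not part of this article) that computes the rank and nullity of a small matrix and verifies that they sum to the dimension of the domain:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])       # a 2 x 3 matrix, so dim(V) = 3

rank = np.linalg.matrix_rank(A)        # dim(im L)
nullity = null_space(A).shape[1]       # dim(ker L): columns of an orthonormal kernel basis
assert rank + nullity == A.shape[1]    # rank-nullity: 2 + 1 == 3
```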


Application to modules

The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. For example, reduction modulo 2 defines a map \mathbb{Z} \to \mathbb{Z}/2\mathbb{Z} of \mathbb{Z}-modules whose kernel is the submodule 2\mathbb{Z}. Here, the concepts of rank and nullity do not necessarily apply.


In functional analysis

If ''V'' and ''W'' are topological vector spaces such that ''W'' is finite-dimensional, then a linear operator ''L'': ''V'' → ''W'' is continuous if and only if the kernel of ''L'' is a closed subspace of ''V''.


Representation as matrix multiplication

Consider a linear map represented as an ''m'' × ''n'' matrix ''A'' with coefficients in a field ''K'' (typically \mathbb{R} or \mathbb{C}), that is operating on column vectors \mathbf{x} with ''n'' components over ''K''. The kernel of this linear map is the set of solutions to the equation A\mathbf{x} = \mathbf{0}, where \mathbf{0} is understood as the zero vector. The dimension of the kernel of ''A'' is called the nullity of ''A''. In set-builder notation,
:\operatorname{N}(A) = \operatorname{Null}(A) = \ker(A) = \left\{ \mathbf{x} \in K^n : A\mathbf{x} = \mathbf{0} \right\}.
The matrix equation is equivalent to a homogeneous system of linear equations:
:A\mathbf{x}=\mathbf{0} \;\;\Leftrightarrow\;\; \begin{alignat}{7}
a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \;\cdots\; + \;&& a_{1n} x_n &&\; = \;&&& 0 \\
a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \;\cdots\; + \;&& a_{2n} x_n &&\; = \;&&& 0 \\
&& && && && \;\vdots\; &&& \\
a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \;\cdots\; + \;&& a_{mn} x_n &&\; = \;&&& 0.
\end{alignat}
Thus the kernel of ''A'' is the same as the solution set to the above homogeneous equations.
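For instance, the kernel of a small matrix can be computed exactly with SymPy's `nullspace` method (a sketch; SymPy is an assumed dependency, not something this article prescribes):

```python
# Computing ker(A) over the rationals with SymPy.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])              # rank 1, so the kernel is 2-dimensional

basis = A.nullspace()                # column vectors spanning ker(A)
for v in basis:
    assert A * v == Matrix([0, 0])   # each basis vector solves A x = 0
```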


Subspace properties

The kernel of an ''m'' × ''n'' matrix ''A'' over a field ''K'' is a linear subspace of ''K''^''n''. That is, the kernel of ''A'', the set Null(''A''), has the following three properties:
# Null(''A'') always contains the zero vector, since A\mathbf{0} = \mathbf{0}.
# If \mathbf{x} \in \operatorname{Null}(A) and \mathbf{y} \in \operatorname{Null}(A), then \mathbf{x} + \mathbf{y} \in \operatorname{Null}(A). This follows from the distributivity of matrix multiplication over addition.
# If \mathbf{x} \in \operatorname{Null}(A) and ''c'' is a scalar c \in K, then c\mathbf{x} \in \operatorname{Null}(A), since A(c\mathbf{x}) = c(A\mathbf{x}) = c\mathbf{0} = \mathbf{0}.
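These three closure properties can be sanity-checked numerically; the following sketch (assuming NumPy, and using a kernel element of the matrix from the illustration further below) verifies each one:

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
x = np.array([-1.0, -26.0, 16.0])       # a kernel element of A (see the illustration below)

assert np.allclose(A @ np.zeros(3), 0)  # property 1: the zero vector is in Null(A)
assert np.allclose(A @ (x + x), 0)      # property 2: Null(A) is closed under addition
assert np.allclose(A @ (2.5 * x), 0)    # property 3: Null(A) is closed under scaling
```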


The row space of a matrix

The product ''A''\mathbf{x} can be written in terms of the dot product of vectors as follows:
:A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{bmatrix}.
Here, \mathbf{a}_1, \ldots, \mathbf{a}_m denote the rows of the matrix ''A''. It follows that \mathbf{x} is in the kernel of ''A'' if and only if \mathbf{x} is orthogonal (or perpendicular) to each of the row vectors of ''A'' (since orthogonality is defined as having a dot product of 0). The row space, or coimage, of a matrix ''A'' is the span of the row vectors of ''A''. By the above reasoning, the kernel of ''A'' is the orthogonal complement to the row space. That is, a vector \mathbf{x} lies in the kernel of ''A'' if and only if it is perpendicular to every vector in the row space of ''A''. The dimension of the row space of ''A'' is called the rank of ''A'', and the dimension of the kernel of ''A'' is called the nullity of ''A''. These quantities are related by the rank–nullity theorem
:\operatorname{rank}(A) + \operatorname{nullity}(A) = n.
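The orthogonality between the kernel and the row space can likewise be observed numerically (a sketch assuming NumPy and SciPy):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
N = null_space(A)             # columns form an orthonormal basis of ker(A)

# Each row of A has zero dot product with each kernel basis vector,
# so the product A @ N is (numerically) the zero matrix.
assert np.allclose(A @ N, 0)
```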


Left null space

The left null space, or cokernel, of a matrix ''A'' consists of all column vectors \mathbf{x} such that \mathbf{x}^\mathrm{T} A = \mathbf{0}^\mathrm{T}, where T denotes the transpose of a matrix. The left null space of ''A'' is the same as the kernel of ''A''^T. The left null space of ''A'' is the orthogonal complement to the column space of ''A'', and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of ''A'' are the four fundamental subspaces associated with the matrix ''A''.
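Because the left null space is just the kernel of the transpose, the same tools compute it; a sketch (again assuming SciPy):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])         # rank 1, so the left null space is 2-dimensional

L = null_space(A.T)                # basis of the left null space = ker(A^T)
assert np.allclose(L.T @ A, 0)     # each basis vector x satisfies x^T A = 0^T
```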


Nonhomogeneous systems of linear equations

The kernel also plays a role in the solution to a nonhomogeneous system of linear equations:
:A\mathbf{x}=\mathbf{b} \quad \text{or} \quad \begin{alignat}{7}
a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \;\cdots\; + \;&& a_{1n} x_n &&\; = \;&&& b_1 \\
a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \;\cdots\; + \;&& a_{2n} x_n &&\; = \;&&& b_2 \\
&& && && && \;\vdots\; &&& \\
a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \;\cdots\; + \;&& a_{mn} x_n &&\; = \;&&& b_m.
\end{alignat}
If \mathbf{u} and \mathbf{v} are two possible solutions to the above equation, then
:A(\mathbf{u}-\mathbf{v}) = A\mathbf{u} - A\mathbf{v} = \mathbf{b} - \mathbf{b} = \mathbf{0}.
Thus, the difference of any two solutions to the equation A\mathbf{x} = \mathbf{b} lies in the kernel of ''A''. It follows that any solution to the equation A\mathbf{x} = \mathbf{b} can be expressed as the sum of a fixed solution \mathbf{v} and an arbitrary element of the kernel. That is, the solution set to the equation A\mathbf{x} = \mathbf{b} is
:\left\{ \mathbf{v} + \mathbf{x} : A\mathbf{v} = \mathbf{b} \land A\mathbf{x} = \mathbf{0} \right\}.
Geometrically, this says that the solution set to A\mathbf{x} = \mathbf{b} is the translation of the kernel of ''A'' by the vector \mathbf{v}. See also Fredholm alternative and flat (geometry).
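This description of the solution set can be checked numerically. In the sketch below (assuming NumPy and SciPy), a particular least-squares solution is shifted by arbitrary multiples of a kernel basis vector and still solves the system:

```python
import numpy as np
from scipy.linalg import lstsq, null_space

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
b = np.array([1.0, 2.0])

v, *_ = lstsq(A, b)           # one particular solution of A x = b
N = null_space(A)             # basis of ker(A)

for c in (-1.0, 0.0, 2.5):    # v plus any kernel element is again a solution
    assert np.allclose(A @ (v + c * N[:, 0]), b)
```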


Illustration

The following is a simple illustration of the computation of the kernel of a matrix (see the section on computation by Gaussian elimination, below, for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel. Consider the matrix
:A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}.
The kernel of this matrix consists of all vectors (''x'', ''y'', ''z'') for which
:\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
which can be expressed as a homogeneous system of linear equations involving ''x'', ''y'', and ''z'':
:\begin{align} 2x + 3y + 5z &= 0, \\ -4x + 2y + 3z &= 0. \end{align}
The same linear equations can also be written in matrix form as:
:\left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right].
Through Gauss–Jordan elimination, the matrix can be reduced to:
:\left[\begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array}\right].
Rewriting the matrix in equation form yields:
:\begin{align} x &= -\frac{1}{16}z \\ y &= -\frac{13}{8}z. \end{align}
The elements of the kernel can be further expressed in parametric form, as follows:
:\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1 \end{bmatrix} \quad (\text{where } c \in \mathbb{R}).
Since ''c'' is a free variable ranging over all real numbers, this can be expressed equally well as:
:\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix}.
The kernel of ''A'' is precisely the solution set to these equations (in this case, a line through the origin in \mathbb{R}^3). Since the vector (−1,−26,16)^T constitutes a basis of the kernel of ''A'', the nullity of ''A'' is 1. The following dot products are zero:
:\begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0 \quad \text{and} \quad \begin{bmatrix} -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0,
which illustrates that vectors in the kernel of ''A'' are orthogonal to each of the row vectors of ''A''. These two (linearly independent) row vectors span the row space of ''A'', a plane orthogonal to the vector (−1,−26,16)^T. Since the rank of ''A'' is 2, the nullity of ''A'' is 1, and the dimension of the domain of ''A'' is 3, we have an illustration of the rank–nullity theorem.
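The computation in this illustration can be reproduced symbolically; a minimal SymPy sketch (SymPy is assumed, not part of the article):

```python
from sympy import Matrix

A = Matrix([[2, 3, 5],
            [-4, 2, 3]])

print(A.rref()[0])    # Gauss-Jordan form: [[1, 0, 1/16], [0, 1, 13/8]]
print(A.nullspace())  # [Matrix([[-1/16], [-13/8], [1]])], the line found above
print(A.rank())       # 2, so rank + nullity = 2 + 1 = 3 = number of columns
```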


Examples

*If ''L'': \mathbb{R}^m \to \mathbb{R}^n, then the kernel of ''L'' is the solution set to a homogeneous system of linear equations. As in the above illustration, if ''L'' is the operator:
::L(x_1, x_2, x_3) = (2 x_1 + 3 x_2 + 5 x_3,\; -4 x_1 + 2 x_2 + 3 x_3)
:then the kernel of ''L'' is the set of solutions to the equations
::\begin{alignat}{5} 2x_1 &\;+\;& 3x_2 &\;+\;& 5x_3 &\;=\;& 0 \\ -4x_1 &\;+\;& 2x_2 &\;+\;& 3x_3 &\;=\;& 0 \end{alignat}
*Let ''C''[0,1] denote the vector space of all continuous real-valued functions on the interval [0,1], and define ''L'': ''C''[0,1] → \mathbb{R} by the rule L(f) = f(0.3). Then the kernel of ''L'' consists of all functions f \in C[0,1] for which f(0.3) = 0.
*Let C^\infty(\mathbb{R}) be the vector space of all infinitely differentiable functions \mathbb{R} \to \mathbb{R}, and let D: C^\infty(\mathbb{R}) \to C^\infty(\mathbb{R}) be the differentiation operator: D(f) = \frac{df}{dx}. Then the kernel of ''D'' consists of all functions in C^\infty(\mathbb{R}) whose derivatives are zero, i.e. the set of all constant functions (as illustrated in the sketch after this list).
*Let \mathbb{R}^\infty be the direct product of infinitely many copies of \mathbb{R}, and let s: \mathbb{R}^\infty \to \mathbb{R}^\infty be the shift operator
::s(x_1, x_2, x_3, x_4, \ldots) = (x_2, x_3, x_4, \ldots).
:Then the kernel of ''s'' is the one-dimensional subspace consisting of all vectors (x_1, 0, 0, \ldots).
*If ''V'' is an inner product space and ''W'' is a subspace, the kernel of the orthogonal projection ''V'' → ''W'' is the orthogonal complement to ''W'' in ''V''.
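For the differentiation operator in the list above, the kernel can be recovered symbolically by solving the differential equation D(f) = 0; a sketch with SymPy (an assumed dependency):

```python
from sympy import Function, Eq, dsolve, symbols

x = symbols('x')
f = Function('f')

# ker(D) consists of the solutions of f'(x) = 0.
print(dsolve(Eq(f(x).diff(x), 0), f(x)))   # Eq(f(x), C1): exactly the constant functions
```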


Computation by Gaussian elimination

A basis of the kernel of a matrix may be computed by Gaussian elimination. For this purpose, given an ''m'' × ''n'' matrix ''A'', we construct first the row augmented matrix \begin{bmatrix} A \\ \hline I \end{bmatrix}, where ''I'' is the ''n'' × ''n'' identity matrix. Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix \begin{bmatrix} B \\ \hline C \end{bmatrix}. A basis of the kernel of ''A'' consists of the non-zero columns of ''C'' such that the corresponding column of ''B'' is a zero column. In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero. For example, suppose that
:A = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}.
Then
:\begin{bmatrix} A \\ \hline I \end{bmatrix} = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
Putting the upper part in column echelon form by column operations on the whole matrix gives
:\begin{bmatrix} B \\ \hline C \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 3 & -2 & 8 \\ 0 & 1 & 0 & -5 & 1 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -7 & 9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.
The last three columns of ''B'' are zero columns. Therefore, the last three vectors of ''C'',
:\begin{bmatrix} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\; \begin{bmatrix} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{bmatrix},\; \begin{bmatrix} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{bmatrix}
are a basis of the kernel of ''A''.

Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that \begin{bmatrix} A \\ \hline I \end{bmatrix} reduces to \begin{bmatrix} B \\ \hline C \end{bmatrix} means that there exists an invertible matrix P such that \begin{bmatrix} A \\ \hline I \end{bmatrix} P = \begin{bmatrix} B \\ \hline C \end{bmatrix}, with B in column echelon form. Thus AP = B, IP = C, and AC = B. A column vector \mathbf{v} belongs to the kernel of A (that is, A\mathbf{v} = \mathbf{0}) if and only if B\mathbf{w} = \mathbf{0}, where \mathbf{w} = P^{-1}\mathbf{v} = C^{-1}\mathbf{v}. As B is in column echelon form, B\mathbf{w} = \mathbf{0} if and only if the nonzero entries of \mathbf{w} correspond to the zero columns of B. By multiplying by C, one may deduce that this is the case if and only if \mathbf{v} = C\mathbf{w} is a linear combination of the corresponding columns of C.
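The procedure above translates directly into code. The following sketch (using Python's `fractions` module for exact arithmetic; the function name `kernel_basis` is ours, not standard) stacks ''A'' over the identity, brings the upper block into column echelon form by column operations, and reads the kernel basis off the lower block:

```python
from fractions import Fraction

def kernel_basis(A):
    """Return a basis of ker(A) via column reduction of the stacked matrix [A; I]."""
    m, n = len(A), len(A[0])
    # cols[j] is the j-th column of [A; I], of length m + n.
    cols = [[Fraction(A[i][j]) for i in range(m)] +
            [Fraction(1 if i == j else 0) for i in range(n)]
            for j in range(n)]
    lead = 0                              # index of the next pivot column
    for r in range(m):                    # put the upper block in column echelon form
        piv = next((j for j in range(lead, n) if cols[j][r] != 0), None)
        if piv is None:
            continue                      # no pivot in this row
        cols[lead], cols[piv] = cols[piv], cols[lead]
        for j in range(lead + 1, n):      # clear row r in the remaining columns
            if cols[j][r] != 0:
                f = cols[j][r] / cols[lead][r]
                cols[j] = [cj - f * cl for cj, cl in zip(cols[j], cols[lead])]
        lead += 1
    # Kernel basis: lower parts of the columns whose upper part (the B block) is zero.
    return [c[m:] for c in cols if all(x == 0 for x in c[:m])]

A = [[1, 0, -3, 0,  2, -8],
     [0, 1,  5, 0, -1,  4],
     [0, 0,  0, 1,  7, -9],
     [0, 0,  0, 0,  0,  0]]
for v in kernel_basis(A):
    print(v)   # the three basis vectors found in the worked example above
```

Run on the example matrix, this prints the vectors (3, −5, 1, 0, 0, 0), (−2, 1, 0, −7, 1, 0) and (8, −4, 0, 9, 0, 1), matching the columns of ''C'' read off above.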


Numerical computation

The problem of computing the kernel on a computer depends on the nature of the coefficients.


Exact coefficients

If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication). For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware.
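The practical point of exact arithmetic is that no rounding occurs; a sketch with SymPy's rational numbers:

```python
from sympy import Matrix, Rational

A = Matrix([[Rational(1, 3), Rational(2, 3)],
            [Rational(2, 3), Rational(4, 3)]])   # exact rational entries, rank 1

print(A.nullspace())   # [Matrix([[-2], [1]])], computed with no rounding error
```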


Floating point computation

For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of a much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. it has a low condition number. Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the LAPACK library.
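The effect of rounding on the rank, and hence on the kernel, can be seen in a small experiment (a sketch assuming NumPy and SciPy; `rcond` is the tolerance below which singular values are treated as zero):

```python
import numpy as np
from scipy.linalg import null_space

# A rank-1 matrix perturbed at the scale of accumulated rounding error.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]]) + 1e-12 * np.random.randn(2, 2)

print(np.linalg.matrix_rank(A))            # typically 2: in floats the matrix looks full rank
print(null_space(A, rcond=1e-8).shape[1])  # 1: with a tolerance, the near-kernel is recovered
```

In practice, the singular value decomposition (which `null_space` uses internally) is the standard numerically stable way to compute an approximate kernel.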


See also

* Kernel (algebra)
* Zero set
* System of linear equations
* Row and column spaces
* Row reduction
* Four fundamental subspaces
* Vector space
* Linear subspace
* Linear operator
* Function space
* Fredholm alternative




External links

* Khan Academy: Introduction to the Null Space of a Matrix