Kernel (linear algebra)

In mathematics, the kernel of a linear map, also known as the null space or nullspace, is the linear subspace of the domain of the map which is mapped to the zero vector. That is, given a linear map L : V \to W between two vector spaces ''V'' and ''W'', the kernel of ''L'' is the vector space of all elements \mathbf{v} of ''V'' such that L(\mathbf{v}) = \mathbf{0}, where \mathbf{0} denotes the zero vector in ''W'', or more symbolically: :\ker(L) = \left\{\mathbf{v} \in V : L(\mathbf{v}) = \mathbf{0}\right\} = L^{-1}(\mathbf{0}).


Properties

The kernel of ''L'' is a linear subspace of the domain ''V''. In the linear map L : V \to W, two elements of ''V'' have the same image in ''W'' if and only if their difference lies in the kernel of ''L'', that is, :L\left(\mathbf{v}_1\right) = L\left(\mathbf{v}_2\right) \quad \text{if and only if} \quad L\left(\mathbf{v}_1-\mathbf{v}_2\right) = \mathbf{0}. From this, it follows that the image of ''L'' is isomorphic to the quotient of ''V'' by the kernel: :\operatorname{im}(L) \cong V / \ker(L). In the case where ''V'' is finite-dimensional, this implies the rank–nullity theorem: :\dim(\ker L) + \dim(\operatorname{im} L) = \dim(V), where the term ''rank'' refers to the dimension of the image of ''L'', \dim(\operatorname{im} L), while ''nullity'' refers to the dimension of the kernel of ''L'', \dim(\ker L). That is, :\operatorname{rank}(L) = \dim(\operatorname{im} L) \qquad \text{and} \qquad \operatorname{nullity}(L) = \dim(\ker L), so that the rank–nullity theorem can be restated as :\operatorname{rank}(L) + \operatorname{nullity}(L) = \dim\left(\operatorname{domain} L\right). When ''V'' is an inner product space, the quotient V / \ker(L) can be identified with the orthogonal complement in ''V'' of \ker(L). This is the generalization to linear operators of the row space, or coimage, of a matrix.
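The rank–nullity theorem can be checked for a concrete small matrix; the following is an illustrative sketch assuming the SymPy library is available:

```python
from sympy import Matrix

# The linear map L(x) = A x from R^3 to R^2
A = Matrix([[2, 3, 5], [-4, 2, 3]])

rank = A.rank()               # dim(im L)
nullity = len(A.nullspace())  # dim(ker L)

# rank + nullity = dim of the domain = number of columns
assert rank + nullity == A.cols
```

Here the rank is 2 and the nullity is 1, matching the three-dimensional domain.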


Application to modules

The notion of kernel also makes sense for homomorphisms of modules, which are generalizations of vector spaces where the scalars are elements of a ring, rather than a field. The domain of the mapping is a module, with the kernel constituting a submodule. Here, the concepts of rank and nullity do not necessarily apply.


In functional analysis

If ''V'' and ''W'' are topological vector spaces such that ''W'' is finite-dimensional, then a linear operator ''L'': ''V'' → ''W'' is continuous if and only if the kernel of ''L'' is a closed subspace of ''V''.


Representation as matrix multiplication

Consider a linear map represented as an ''m'' × ''n'' matrix ''A'' with coefficients in a field ''K'' (typically \mathbb{R} or \mathbb{C}), that is operating on column vectors x with ''n'' components over ''K''. The kernel of this linear map is the set of solutions to the equation ''A''x = 0, where 0 is understood as the zero vector. The dimension of the kernel of ''A'' is called the nullity of ''A''. In set-builder notation, :\operatorname{N}(A) = \operatorname{Null}(A) = \operatorname{ker}(A) = \left\{\mathbf{x} \in K^n : A\mathbf{x} = \mathbf{0}\right\}. The matrix equation is equivalent to a homogeneous system of linear equations: :A\mathbf{x}=\mathbf{0} \;\;\Leftrightarrow\;\; \begin{alignat}{7} a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \;\cdots\; + \;&& a_{1n} x_n &&\; = \;&&& 0 \\ a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \;\cdots\; + \;&& a_{2n} x_n &&\; = \;&&& 0 \\ && && && && &&\vdots\ \;&&& \\ a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \;\cdots\; + \;&& a_{mn} x_n &&\; = \;&&& 0\text{.} \end{alignat} Thus the kernel of ''A'' is the same as the solution set to the above homogeneous equations.
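A basis of this solution set can be obtained symbolically; the following sketch (assuming SymPy) computes the kernel of a small matrix as the set of solutions of ''A''x = 0:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 0, -3],
            [0, 1,  5]])

# Null(A): all x with A x = 0, returned as a list of basis vectors
basis = A.nullspace()
for v in basis:
    assert A * v == zeros(2, 1)   # each basis vector solves the homogeneous system
```

For this matrix the free variable is x_3, giving the single basis vector (3, −5, 1)^T, so the nullity is 1.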


Subspace properties

The kernel of a matrix ''A'' over a field ''K'' is a linear subspace of ''K''^''n''. That is, the kernel of ''A'', the set Null(''A''), has the following three properties:
# Null(''A'') always contains the zero vector, since ''A''0 = 0.
# If x ∈ Null(''A'') and y ∈ Null(''A''), then x + y ∈ Null(''A''). This follows from the distributivity of matrix multiplication over addition.
# If x ∈ Null(''A'') and ''c'' is a scalar ''c'' ∈ ''K'', then ''c''x ∈ Null(''A''), since ''A''(''c''x) = ''c''(''A''x) = ''c''0 = 0.
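These closure properties can be checked numerically for a sample matrix; the following NumPy sketch is illustrative only (the vector chosen is a known kernel vector of this particular matrix):

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
x = np.array([-1.0, -26.0, 16.0])    # a vector in Null(A)
y = 3.0 * x                          # another vector in Null(A)

assert np.allclose(A @ x, 0)          # x lies in the kernel
assert np.allclose(A @ (x + y), 0)    # closed under addition
assert np.allclose(A @ (2.5 * x), 0)  # closed under scalar multiplication
```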


The row space of a matrix

The product ''A''x can be written in terms of the dot product of vectors as follows: :A\mathbf{x} = \begin{bmatrix} \mathbf{a}_1 \cdot \mathbf{x} \\ \mathbf{a}_2 \cdot \mathbf{x} \\ \vdots \\ \mathbf{a}_m \cdot \mathbf{x} \end{bmatrix}. Here, a_1, ... , a_''m'' denote the rows of the matrix ''A''. It follows that x is in the kernel of ''A'' if and only if x is orthogonal (or perpendicular) to each of the row vectors of ''A'' (since orthogonality is defined as having a dot product of 0). The row space, or coimage, of a matrix ''A'' is the span of the row vectors of ''A''. By the above reasoning, the kernel of ''A'' is the orthogonal complement to the row space. That is, a vector x lies in the kernel of ''A'' if and only if it is perpendicular to every vector in the row space of ''A''. The dimension of the row space of ''A'' is called the rank of ''A'', and the dimension of the kernel of ''A'' is called the nullity of ''A''. These quantities are related by the rank–nullity theorem :\operatorname{rank}(A) + \operatorname{nullity}(A) = n.


Left null space

The left null space, or cokernel, of a matrix ''A'' consists of all column vectors x such that x^T''A'' = 0^T, where T denotes the transpose of a matrix. The left null space of ''A'' is the same as the kernel of ''A''^T. The left null space of ''A'' is the orthogonal complement to the column space of ''A'', and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of ''A'' are the four fundamental subspaces associated to the matrix ''A''.
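A minimal sketch (assuming SymPy) of the relation between the left null space and the kernel of the transpose:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 2],
            [2, 4],
            [3, 6]])         # a 3x2 matrix of rank 1

left_null = A.T.nullspace()  # left null space of A = kernel of A^T
for y in left_null:
    assert y.T * A == zeros(1, 2)   # y^T A = 0^T for each basis vector

# its dimension is m - rank = 3 - 1 = 2
assert len(left_null) == 2
```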


Nonhomogeneous systems of linear equations

The kernel also plays a role in the solution to a nonhomogeneous system of linear equations: :A\mathbf{x}=\mathbf{b}\quad \text{or} \quad \begin{alignat}{7} a_{11} x_1 &&\; + \;&& a_{12} x_2 &&\; + \;\cdots\; + \;&& a_{1n} x_n &&\; = \;&&& b_1 \\ a_{21} x_1 &&\; + \;&& a_{22} x_2 &&\; + \;\cdots\; + \;&& a_{2n} x_n &&\; = \;&&& b_2 \\ && && && && &&\vdots\ \;&&& \\ a_{m1} x_1 &&\; + \;&& a_{m2} x_2 &&\; + \;\cdots\; + \;&& a_{mn} x_n &&\; = \;&&& b_m \\ \end{alignat} If u and v are two possible solutions to the above equation, then :A(\mathbf{u}-\mathbf{v}) = A\mathbf{u} - A\mathbf{v} = \mathbf{b} - \mathbf{b} = \mathbf{0}. Thus, the difference of any two solutions to the equation ''A''x = b lies in the kernel of ''A''. It follows that any solution to the equation ''A''x = b can be expressed as the sum of a fixed solution v and an arbitrary element of the kernel. That is, the solution set to the equation ''A''x = b is :\left\{\mathbf{v}+\mathbf{x} : A\mathbf{v}=\mathbf{b} \,\land\, \mathbf{x}\in\operatorname{Null}(A)\right\}. Geometrically, this says that the solution set to ''A''x = b is the translation of the kernel of ''A'' by the vector v. See also Fredholm alternative and flat (geometry).
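The structure of the solution set can be illustrated with a small NumPy sketch (using least squares to obtain a particular solution is an implementation convenience, not part of the theory above; the matrix here has full row rank, so the least-squares solution solves the system exactly):

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
b = np.array([10.0, 2.0])

v = np.linalg.lstsq(A, b, rcond=None)[0]  # one particular solution of A x = b
k = np.array([-1.0, -26.0, 16.0])         # spans ker(A)

# every translate v + c*k of the kernel is also a solution
for c in (-2.0, 0.0, 3.5):
    assert np.allclose(A @ (v + c * k), b)
```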


Illustration

The following is a simple illustration of the computation of the kernel of a matrix (see ''Computation by Gaussian elimination'', below, for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel. Consider the matrix :A = \begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}. The kernel of this matrix consists of all vectors for which :\begin{bmatrix} 2 & 3 & 5 \\ -4 & 2 & 3 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, which can be expressed as a homogeneous system of linear equations involving ''x'', ''y'', and ''z'': :\begin{align} 2x + 3y + 5z &= 0, \\ -4x + 2y + 3z &= 0. \end{align} The same linear equations can also be written in matrix form as: :\left[\begin{array}{ccc|c} 2 & 3 & 5 & 0 \\ -4 & 2 & 3 & 0 \end{array}\right]. Through Gauss–Jordan elimination, the matrix can be reduced to: :\left[\begin{array}{ccc|c} 1 & 0 & 1/16 & 0 \\ 0 & 1 & 13/8 & 0 \end{array}\right]. Rewriting the matrix in equation form yields: :\begin{align} x &= -\frac{1}{16}z \\ y &= -\frac{13}{8}z. \end{align} The elements of the kernel can be further expressed in parametric form, as follows: :\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1/16 \\ -13/8 \\ 1 \end{bmatrix}\quad (\text{where } c \in \mathbb{R}) Since ''c'' is a free variable ranging over all real numbers, this can be expressed equally well as: :\begin{bmatrix} x \\ y \\ z \end{bmatrix} = c \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix}. The kernel of ''A'' is precisely the solution set to these equations (in this case, a line through the origin in R^3). The vector (−1,−26,16)^T constitutes a basis of the kernel of ''A''. The nullity of ''A'' is 1. The following dot products are zero: :\begin{bmatrix} 2 & 3 & 5 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0 \quad\text{and}\quad \begin{bmatrix} -4 & 2 & 3 \end{bmatrix} \begin{bmatrix} -1 \\ -26 \\ 16 \end{bmatrix} = 0, which illustrates that vectors in the kernel of ''A'' are orthogonal to each of the row vectors of ''A''. These two (linearly independent) row vectors span the row space of ''A''—a plane orthogonal to the vector (−1,−26,16)^T. With the rank 2 of ''A'', the nullity 1 of ''A'', and the dimension 3 of the domain of ''A'', we have an illustration of the rank–nullity theorem.
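The computation above can be double-checked numerically; this NumPy sketch is illustrative only:

```python
import numpy as np

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
k = np.array([-1.0, -26.0, 16.0])   # the basis vector found above

assert np.allclose(A @ k, 0)          # k spans ker(A)
assert np.linalg.matrix_rank(A) == 2  # rank 2 + nullity 1 = 3 columns
```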


Examples

*If L : \R^m \to \R^n, then the kernel of ''L'' is the solution set to a homogeneous system of linear equations. As in the above illustration, if ''L'' is the operator: L(x_1, x_2, x_3) = (2 x_1 + 3 x_2 + 5 x_3,\; -4 x_1 + 2 x_2 + 3 x_3) then the kernel of ''L'' is the set of solutions to the equations \begin{alignat}{5} 2x_1 &\;+\;& 3x_2 &\;+\;& 5x_3 &\;=\;& 0 \\ -4x_1 &\;+\;& 2x_2 &\;+\;& 3x_3 &\;=\;& 0 \end{alignat}
*Let ''C''[0,1] denote the vector space of all continuous real-valued functions on the interval [0,1], and define L : C[0,1] \to \R by the rule L(f) = f(0.3). Then the kernel of ''L'' consists of all functions f \in C[0,1] for which f(0.3) = 0.
*Let ''C''^∞(R) be the vector space of all infinitely differentiable functions \R \to \R, and let D : C^\infty(\R) \to C^\infty(\R) be the differentiation operator: D(f) = \frac{df}{dx}. Then the kernel of ''D'' consists of all functions in ''C''^∞(R) whose derivatives are zero, i.e. the set of all constant functions.
*Let \R^\infty be the direct product of infinitely many copies of \R, and let s : \R^\infty \to \R^\infty be the shift operator s(x_1, x_2, x_3, x_4, \ldots) = (x_2, x_3, x_4, \ldots). Then the kernel of ''s'' is the one-dimensional subspace consisting of all vectors (x_1, 0, 0, \ldots).
*If ''V'' is an inner product space and ''W'' is a subspace, the kernel of the orthogonal projection V \to W is the orthogonal complement to ''W'' in ''V''.
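Two of the examples above can be sketched informally in Python (the infinite-dimensional operators are only modelled on sample inputs here; this is an illustration, not a faithful construction of the function spaces):

```python
import math

# Evaluation functional L(f) = f(0.3): its kernel is the set of
# functions vanishing at 0.3
L = lambda f: f(0.3)
assert L(lambda x: x - 0.3) == 0            # in ker(L)
assert L(lambda x: math.sin(x - 0.3)) == 0  # in ker(L)
assert L(lambda x: 1.0) != 0                # not in ker(L)

# Shift operator on (truncated) sequences: s(x1, x2, ...) = (x2, x3, ...)
def shift(xs):
    return xs[1:]

# its kernel consists of the sequences (x1, 0, 0, ...)
assert shift((7, 0, 0, 0)) == (0, 0, 0)
```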


Computation by Gaussian elimination

A basis of the kernel of a matrix may be computed by Gaussian elimination. For this purpose, given an ''m'' × ''n'' matrix ''A'', we construct first the row-augmented matrix :\begin{bmatrix} A \\ \hline I \end{bmatrix}, where ''I'' is the ''n'' × ''n'' identity matrix. Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix :\begin{bmatrix} B \\ \hline C \end{bmatrix}. A basis of the kernel of ''A'' consists of the nonzero columns of ''C'' such that the corresponding column of ''B'' is a zero column. In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero. For example, suppose that :A = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}. Then :\begin{bmatrix} A \\ \hline I \end{bmatrix} = \begin{bmatrix} 1 & 0 & -3 & 0 & 2 & -8 \\ 0 & 1 & 5 & 0 & -1 & 4 \\ 0 & 0 & 0 & 1 & 7 & -9 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}. Putting the upper part in column echelon form by column operations on the whole matrix gives :\begin{bmatrix} B \\ \hline C \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ \hline 1 & 0 & 0 & 3 & -2 & 8 \\ 0 & 1 & 0 & -5 & 1 & -4 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & -7 & 9 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}. The last three columns of ''B'' are zero columns. Therefore, the three last vectors of ''C'', :\begin{bmatrix} 3 \\ -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{bmatrix},\; \begin{bmatrix} -2 \\ 1 \\ 0 \\ -7 \\ 1 \\ 0 \end{bmatrix},\; \begin{bmatrix} 8 \\ -4 \\ 0 \\ 9 \\ 0 \\ 1 \end{bmatrix} are a basis of the kernel of ''A''.
Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that \begin{bmatrix} A \\ \hline I \end{bmatrix} reduces to \begin{bmatrix} B \\ \hline C \end{bmatrix} means that there exists an invertible matrix P such that \begin{bmatrix} A \\ \hline I \end{bmatrix} P = \begin{bmatrix} B \\ \hline C \end{bmatrix}, with B in column echelon form. Thus AP=B, IP=C, and AC=B. A column vector \mathbf v belongs to the kernel of A (that is, A \mathbf v = \mathbf 0) if and only if B\mathbf w=\mathbf 0, where \mathbf w = P^{-1} \mathbf v = C^{-1} \mathbf v. As B is in column echelon form, B \mathbf w = \mathbf 0 if and only if the nonzero entries of \mathbf w correspond to the zero columns of B. By multiplying by C, one may deduce that this is the case if and only if \mathbf v = C \mathbf w is a linear combination of the corresponding columns of C.
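The method above can be sketched in a few lines (assuming SymPy; carrying out the column operations as row operations on the transpose is an implementation choice, not part of the method's statement):

```python
from sympy import Matrix, eye, zeros

A = Matrix([[1, 0, -3, 0,  2, -8],
            [0, 1,  5, 0, -1,  4],
            [0, 0,  0, 1,  7, -9],
            [0, 0,  0, 0,  0,  0]])

# Column operations on the stacked matrix [A; I] are row operations on
# its transpose, so reduce the transpose and transpose back.
M = A.col_join(eye(A.cols))
R = M.T.rref()[0].T          # [B; C] with B in column echelon form
B, C = R[:A.rows, :], R[A.rows:, :]

# Kernel basis: columns of C below the zero columns of B
basis = [C[:, j] for j in range(B.cols) if all(e == 0 for e in B[:, j])]
for v in basis:
    assert A * v == zeros(A.rows, 1)
assert len(basis) == 3       # nullity = 6 - rank(A) = 3
```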


Numerical computation

The problem of computing the kernel on a computer depends on the nature of the coefficients.


Exact coefficients

If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with the Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and the Chinese remainder theorem, which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication). For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity, but are faster and behave better with modern computer hardware.


Floating point computation

For matrices whose entries are floating-point numbers, the problem of computing the kernel makes sense only for matrices such that the number of rows equals their rank: because of rounding errors, a floating-point matrix almost always has full rank, even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned, i.e. it has a low condition number. Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for obtaining a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is the Lapack library.
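A common numerically robust approach, sketched here with NumPy (the tolerance choice is an assumption), computes the kernel from the singular value decomposition: the right singular vectors associated with (numerically) zero singular values span the kernel.

```python
import numpy as np

def null_space(A, rtol=1e-10):
    # SVD-based kernel basis: right singular vectors whose singular
    # values are below a tolerance span ker(A)
    u, s, vh = np.linalg.svd(A)
    tol = rtol * (s[0] if s.size else 1.0)
    rank = int((s > tol).sum())
    return vh[rank:].conj().T

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
K = null_space(A)

assert K.shape == (3, 1)        # nullity 1
assert np.allclose(A @ K, 0)
```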


See also

* Kernel (algebra)
* Zero set
* System of linear equations
* Row and column spaces
* Row reduction
* Four fundamental subspaces
* Vector space
* Linear subspace
* Linear operator
* Function space
* Fredholm alternative



External links

* Khan Academy: Introduction to the Null Space of a Matrix