
In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself (an endomorphism) such that P \circ P = P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once (i.e. P is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object.
Definitions
A projection on a vector space V is a linear operator P : V \to V such that P^2 = P.
When V has an inner product and is complete (i.e. when V is a Hilbert space) the concept of orthogonality can be used. A projection P on a Hilbert space V is called an orthogonal projection if it satisfies \langle P\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, P\mathbf{y} \rangle for all \mathbf{x}, \mathbf{y} \in V. A projection on a Hilbert space that is not orthogonal is called an oblique projection.
Projection matrix
* In the finite-dimensional case, a square matrix P is called a projection matrix if it is equal to its square, i.e. if P^2 = P.
* A square matrix P is called an orthogonal projection matrix if P^2 = P = P^\mathsf{T} for a real matrix, and respectively P^2 = P = P^* for a complex matrix, where P^\mathsf{T} denotes the transpose of P and P^* denotes the adjoint or Hermitian transpose of P.
* A projection matrix that is not an orthogonal projection matrix is called an oblique projection matrix.
The eigenvalues of a projection matrix must be 0 or 1.
Examples
Orthogonal projection
For example, the function which maps the point (x, y, z) in three-dimensional space \mathbb{R}^3 to the point (x, y, 0) is an orthogonal projection onto the ''xy''-plane. This function is represented by the matrix
:P = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}.
The action of this matrix on an arbitrary vector is
:P \begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} x \\ y \\ 0 \end{pmatrix}.
To see that P is indeed a projection, i.e., P = P^2, we compute
:P^2 \begin{pmatrix} x \\ y \\ z \end{pmatrix} = P \begin{pmatrix} x \\ y \\ 0 \end{pmatrix} = \begin{pmatrix} x \\ y \\ 0 \end{pmatrix} = P \begin{pmatrix} x \\ y \\ z \end{pmatrix}.
Observing that P^\mathsf{T} = P shows that the projection is an orthogonal projection.
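This example can be checked numerically; a minimal NumPy sketch (the test vector is an arbitrary illustrative choice):

```python
import numpy as np

# Orthogonal projection onto the xy-plane in R^3.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

v = np.array([5.0, 2.0, -3.0])
projected = P @ v                     # drops the z-component

idempotent = np.allclose(P @ P, P)    # P^2 = P
symmetric = np.allclose(P.T, P)       # P^T = P, so the projection is orthogonal
```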
Oblique projection
A simple example of a non-orthogonal (oblique) projection is
:P = \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix}.
Via matrix multiplication, one sees that
:P^2 = \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ \alpha & 1 \end{pmatrix} = P,
showing that P is indeed a projection.
The projection P is orthogonal if and only if \alpha = 0, because only then P^\mathsf{T} = P.
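A quick numerical check of this oblique example (the value of \alpha below is an arbitrary nonzero choice):

```python
import numpy as np

alpha = 2.0
# Oblique projection from the example: idempotent for any alpha,
# but symmetric (hence orthogonal) only when alpha = 0.
P = np.array([[0.0, 0.0],
              [alpha, 1.0]])

is_projection = np.allclose(P @ P, P)   # True for every alpha
is_orthogonal = np.allclose(P.T, P)     # False here, since alpha != 0
```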
Properties and classification
Idempotence
By definition, a projection P is idempotent (i.e. P^2 = P).
Open map
Every projection is an open map, meaning that it maps each open set in the domain to an open set in the subspace topology of the image. That is, for any vector \mathbf{x} and any ball B_\mathbf{x} (with positive radius) centered on \mathbf{x}, there exists a ball B_{P\mathbf{x}} (with positive radius) centered on P\mathbf{x} that is wholly contained in the image P(B_\mathbf{x}).
Complementarity of image and kernel
Let W be a finite-dimensional vector space and P be a projection on W. Suppose the subspaces U and V are the image and kernel of P respectively. Then P has the following properties:
# P is the identity operator I on U: \forall \mathbf{x} \in U : P\mathbf{x} = \mathbf{x}.
# We have a direct sum W = U \oplus V. Every vector \mathbf{x} \in W may be decomposed uniquely as \mathbf{x} = \mathbf{u} + \mathbf{v} with \mathbf{u} = P\mathbf{x} and \mathbf{v} = \mathbf{x} - P\mathbf{x} = (I - P)\mathbf{x}, and where \mathbf{u} \in U, \mathbf{v} \in V.
The image and kernel of a projection are ''complementary'', as are P and Q = I - P. The operator Q is also a projection as the image and kernel of P become the kernel and image of Q and vice versa. We say P is a projection along V onto U (kernel/image) and Q is a projection along U onto V.
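The decomposition into image and kernel components can be illustrated numerically; a short NumPy sketch (the particular projection matrix and vector are arbitrary choices):

```python
import numpy as np

P = np.array([[0.0, 0.0],
              [2.0, 1.0]])   # any idempotent matrix: P @ P == P
I = np.eye(2)
Q = I - P                    # the complementary projection

x = np.array([3.0, -1.0])
u = P @ x                    # component in the image of P
v = Q @ x                    # component in the kernel of P
# x = u + v uniquely, with P u = u and P v = 0
```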
Spectrum
In infinite-dimensional vector spaces, the spectrum of a projection is contained in \{0, 1\}, as
:(\lambda I - P)^{-1} = \frac{1}{\lambda} I + \frac{1}{\lambda(\lambda - 1)} P
for \lambda \neq 0, 1. Only 0 or 1 can be an eigenvalue of a projection. This implies that an orthogonal projection P is always a positive semi-definite matrix. In general, the corresponding eigenspaces are (respectively) the kernel and range of the projection. Decomposition of a vector space into direct sums is not unique. Therefore, given a subspace V, there may be many projections whose range (or kernel) is V.
If a projection is nontrivial it has minimal polynomial x^2 - x = x(x - 1), which factors into distinct linear factors, and thus P is diagonalizable.
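The eigenvalue claim is easy to verify numerically; a minimal sketch (the idempotent matrix below is an arbitrary illustrative choice):

```python
import numpy as np

P = np.array([[0.0, 0.0],
              [2.0, 1.0]])   # idempotent: P @ P == P

eigvals = np.linalg.eigvals(P)

# Every eigenvalue of a projection is 0 or 1.
all_zero_or_one = all(np.isclose(ev, 0) or np.isclose(ev, 1) for ev in eigvals)
```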
Product of projections
The product of projections is not in general a projection, even if they are orthogonal. If two projections commute then their product is a projection, but the converse is false: the product of two non-commuting projections may be a projection.
If two orthogonal projections commute then their product is an orthogonal projection. If the product of two orthogonal projections is an orthogonal projection, then the two orthogonal projections commute (more generally: two self-adjoint
endomorphisms commute if and only if their product is self-adjoint).
Orthogonal projections
When the vector space W has an inner product and is complete (is a Hilbert space) the concept of orthogonality can be used. An orthogonal projection is a projection for which the range U and the null space V are orthogonal subspaces. Thus, for every \mathbf{x} and \mathbf{y} in W, \langle P\mathbf{x}, \mathbf{y} - P\mathbf{y} \rangle = \langle \mathbf{x} - P\mathbf{x}, P\mathbf{y} \rangle = 0. Equivalently:
:\langle \mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, \mathbf{y} \rangle.
A projection is orthogonal if and only if it is self-adjoint. Using the self-adjoint and idempotent properties of P, for any \mathbf{x} and \mathbf{y} in W we have P\mathbf{x} \in U, \mathbf{y} - P\mathbf{y} \in V, and
:\langle P\mathbf{x}, \mathbf{y} - P\mathbf{y} \rangle = \langle \mathbf{x}, (P - P^2)\mathbf{y} \rangle = 0,
where \langle \cdot, \cdot \rangle is the inner product associated with W. Therefore, P and I - P are orthogonal projections. The other direction, namely that if P is orthogonal then it is self-adjoint, follows from
:\langle \mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, P\mathbf{y} \rangle = \langle P\mathbf{x}, \mathbf{y} \rangle
for every \mathbf{x} and \mathbf{y} in W; thus P = P^*.
Properties and special cases
An orthogonal projection is a bounded operator. This is because for every \mathbf{v} in the vector space we have, by the Cauchy–Schwarz inequality:
:\left\| P\mathbf{v} \right\|^2 = \langle P\mathbf{v}, P\mathbf{v} \rangle = \langle P\mathbf{v}, \mathbf{v} \rangle \leq \left\| P\mathbf{v} \right\| \cdot \left\| \mathbf{v} \right\|.
Thus \left\| P\mathbf{v} \right\| \leq \left\| \mathbf{v} \right\|.
For finite-dimensional complex or real vector spaces, the standard inner product can be substituted for \langle \cdot, \cdot \rangle.
Formulas
A simple case occurs when the orthogonal projection is onto a line. If \mathbf{u} is a unit vector on the line, then the projection is given by the outer product
:P_\mathbf{u} = \mathbf{u} \mathbf{u}^\mathsf{T}.
(If \mathbf{u} is complex-valued, the transpose in the above equation is replaced by a Hermitian transpose.) This operator leaves \mathbf{u} invariant, and it annihilates all vectors orthogonal to \mathbf{u}, proving that it is indeed the orthogonal projection onto the line containing \mathbf{u}. A simple way to see this is to consider an arbitrary vector \mathbf{x} as the sum of a component on the line (i.e. the projected vector we seek) and another perpendicular to it, \mathbf{x} = \mathbf{x}_\parallel + \mathbf{x}_\perp. Applying projection, we get
:P_\mathbf{u} \mathbf{x} = \mathbf{u} \mathbf{u}^\mathsf{T} \mathbf{x}_\parallel + \mathbf{u} \mathbf{u}^\mathsf{T} \mathbf{x}_\perp = \mathbf{x}_\parallel
by the properties of the dot product of parallel and perpendicular vectors.
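The outer-product formula for projection onto a line can be sketched as follows (the direction and test vector are arbitrary illustrative choices):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
u = u / np.linalg.norm(u)     # unit vector along the line

P = np.outer(u, u)            # P_u = u u^T

x = np.array([3.0, 0.0, 4.0])
x_par = P @ x                 # component of x along the line

# P leaves u invariant and annihilates the perpendicular remainder x - x_par.
```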
This formula can be generalized to orthogonal projections on a subspace of arbitrary dimension. Let \mathbf{u}_1, \ldots, \mathbf{u}_k be an orthonormal basis of the subspace U, with the assumption that the integer k \geq 1, and let A denote the n \times k matrix whose columns are \mathbf{u}_1, \ldots, \mathbf{u}_k, i.e., A = \begin{pmatrix} \mathbf{u}_1 & \cdots & \mathbf{u}_k \end{pmatrix}. Then the projection is given by:
:P_A = A A^\mathsf{T},
which can be rewritten as
:P_A = \sum_i \langle \mathbf{u}_i, \cdot \rangle \mathbf{u}_i.
The matrix A^\mathsf{T} is the partial isometry that vanishes on the orthogonal complement of U, and A is the isometry that embeds U into the underlying vector space. The range of P_A is therefore the ''final space'' of A. It is also clear that A^\mathsf{T} A is the identity operator on U.
The orthonormality condition can also be dropped. If \mathbf{u}_1, \ldots, \mathbf{u}_k is a (not necessarily orthonormal) basis with k \geq 1, and A is the matrix with these vectors as columns, then the projection is:
:P_A = A \left( A^\mathsf{T} A \right)^{-1} A^\mathsf{T}.
The matrix A still embeds U into the underlying vector space but is no longer an isometry in general. The matrix \left( A^\mathsf{T} A \right)^{-1} is a "normalizing factor" that recovers the norm. For example, the rank-1 operator \mathbf{u} \mathbf{u}^\mathsf{T} is not a projection if \left\| \mathbf{u} \right\| \neq 1. After dividing by \mathbf{u}^\mathsf{T} \mathbf{u} = \left\| \mathbf{u} \right\|^2, we obtain the projection \mathbf{u} \left( \mathbf{u}^\mathsf{T} \mathbf{u} \right)^{-1} \mathbf{u}^\mathsf{T} onto the subspace spanned by \mathbf{u}.
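This general basis formula is easy to exercise numerically; a sketch using a random full-rank matrix of basis columns (the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2))          # basis vectors as columns (not orthonormal)

# P_A = A (A^T A)^{-1} A^T : orthogonal projection onto the column space of A
P = A @ np.linalg.inv(A.T @ A) @ A.T

idempotent = np.allclose(P @ P, P)
symmetric = np.allclose(P.T, P)
fixes_range = np.allclose(P @ A, A)      # P fixes vectors already in range(A)
```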
In the general case, we can have an arbitrary positive definite matrix D defining an inner product \langle \mathbf{x}, \mathbf{y} \rangle_D = \mathbf{y}^\dagger D \mathbf{x}, and the projection P_A is given by P_A \mathbf{x} = \operatorname{argmin}_{\mathbf{y} \in \operatorname{range}(A)} \left\| \mathbf{x} - \mathbf{y} \right\|^2_D. Then
:P_A = A \left( A^\mathsf{T} D A \right)^{-1} A^\mathsf{T} D.
When the range space of the projection is generated by a frame (i.e. the number of generators is greater than its dimension), the formula for the projection takes the form:
:P_A = A A^+.
Here A^+ stands for the Moore–Penrose pseudoinverse. This is just one of many ways to construct the projection operator.
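The pseudoinverse construction can be sketched as follows; the three generators below (spanning a 2-dimensional subspace of \mathbb{R}^3, so a frame rather than a basis) are an arbitrary illustrative choice:

```python
import numpy as np

# Three generators of the xy-plane in R^3: more generators than dimensions.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

P = A @ np.linalg.pinv(A)     # P_A = A A^+ via the Moore-Penrose pseudoinverse

idempotent = np.allclose(P @ P, P)
symmetric = np.allclose(P.T, P)
# The result is the orthogonal projection onto the xy-plane, diag(1, 1, 0).
```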
If \begin{pmatrix} A & B \end{pmatrix} is a non-singular matrix and A^\mathsf{T} B = 0 (i.e., B is the null space matrix of A), the following holds:
:I = A \left( A^\mathsf{T} A \right)^{-1} A^\mathsf{T} + B \left( B^\mathsf{T} B \right)^{-1} B^\mathsf{T}.
If the orthogonal condition is enhanced to A^\mathsf{T} W B = A^\mathsf{T} W^\mathsf{T} B = 0 with W non-singular, the following holds:
:I = A \left( A^\mathsf{T} W A \right)^{-1} A^\mathsf{T} W + B \left( B^\mathsf{T} W B \right)^{-1} B^\mathsf{T} W.
All these formulas also hold for complex inner product spaces, provided that the
conjugate transpose is used instead of the transpose. Further details on sums of projectors can be found in Banerjee and Roy (2014). Also see Banerjee (2004) for application of sums of projectors in basic
spherical trigonometry.
Oblique projections
The term ''oblique projections'' is sometimes used to refer to non-orthogonal projections. These projections are also used to represent spatial figures in two-dimensional drawings (see oblique projection), though not as frequently as orthogonal projections. Whereas calculating the fitted value of an ordinary least squares regression requires an orthogonal projection, calculating the fitted value of an instrumental variables regression requires an oblique projection.
Projections are defined by their null space and the basis vectors used to characterize their range (which is the complement of the null space). When these basis vectors are orthogonal to the null space, then the projection is an orthogonal projection. When these basis vectors are not orthogonal to the null space, the projection is an oblique projection, or just a general projection.
A matrix representation formula for a nonzero projection operator
Let P : V \to V be a linear operator such that P^2 = P, and assume that P is not the zero operator. Let the vectors \mathbf{u}_1, \ldots, \mathbf{u}_k form a basis for the range of P, and assemble these vectors in the n \times k matrix A. Therefore the integer k \geq 1, otherwise k = 0 and P is the zero operator. The range and the null space are complementary spaces, so the null space has dimension n - k. It follows that the orthogonal complement of the null space has dimension k. Let \mathbf{v}_1, \ldots, \mathbf{v}_k form a basis for the orthogonal complement of the null space of the projection, and assemble these vectors in the matrix B. Then the projection P (with the condition k \geq 1) is given by
:P = A \left( B^\mathsf{T} A \right)^{-1} B^\mathsf{T}.
This expression generalizes the formula for orthogonal projections given above. A standard proof of this expression is the following. For any vector \mathbf{x} in the vector space V, we can decompose \mathbf{x} = \mathbf{x}_1 + \mathbf{x}_2, where vector \mathbf{x}_1 = P(\mathbf{x}) is in the image of P, and vector \mathbf{x}_2 = \mathbf{x} - P(\mathbf{x}). So P(\mathbf{x}_2) = P(\mathbf{x}) - P^2(\mathbf{x}) = \mathbf{0}, and then \mathbf{x}_2 is in the null space of P. In other words, the vector \mathbf{x}_1 is in the column space of A, so \mathbf{x}_1 = A \mathbf{w} for some k-dimension vector \mathbf{w}, and the vector \mathbf{x}_2 satisfies B^\mathsf{T} \mathbf{x}_2 = \mathbf{0} by the construction of B. Put these conditions together, and we find a vector \mathbf{w} so that B^\mathsf{T} \mathbf{x} = B^\mathsf{T} A \mathbf{w}. Since matrices A and B are of full rank k by their construction, the k \times k matrix B^\mathsf{T} A is invertible. So the equation B^\mathsf{T} \mathbf{x} = B^\mathsf{T} A \mathbf{w} gives the vector \mathbf{w} = \left( B^\mathsf{T} A \right)^{-1} B^\mathsf{T} \mathbf{x}. In this way, P\mathbf{x} = \mathbf{x}_1 = A \mathbf{w} = A \left( B^\mathsf{T} A \right)^{-1} B^\mathsf{T} \mathbf{x} for any vector \mathbf{x} and hence P = A \left( B^\mathsf{T} A \right)^{-1} B^\mathsf{T}.
In the case that P is an orthogonal projection, we can take A = B, and it follows that P = A \left( A^\mathsf{T} A \right)^{-1} A^\mathsf{T}. By using this formula, one can easily check that P = P^\mathsf{T} and P^2 = P. In general, if the vector space is over the complex number field, one then uses the Hermitian transpose and has the formula P = A \left( A^* A \right)^{-1} A^*. Recall that one can define the Moore–Penrose inverse of the matrix A by A^+ = \left( A^* A \right)^{-1} A^* since A has full column rank, so P = A A^+.
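The representation formula for an oblique projection can be checked on a small instance; the particular range and null space below are arbitrary illustrative choices:

```python
import numpy as np

# Columns of A span the desired range; columns of B span the orthogonal
# complement of the desired null space (here: range = span{(1,1)},
# null space = span{(0,1)}).
A = np.array([[1.0],
              [1.0]])
B = np.array([[1.0],
              [0.0]])

P = A @ np.linalg.inv(B.T @ A) @ B.T    # P = A (B^T A)^{-1} B^T

idempotent = np.allclose(P @ P, P)
symmetric = np.allclose(P.T, P)         # False here: the projection is oblique
```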
Singular values
Note that I - P is also an oblique projection. The singular values of P and I - P can be computed by an orthonormal basis of A. Let Q_A be an orthonormal basis of the range of A and let Q_A^\perp be the orthogonal complement of Q_A. Denote the singular values of the matrix Q_A^\mathsf{T} A \left( B^\mathsf{T} A \right)^{-1} B^\mathsf{T} Q_A^\perp by the positive values \gamma_1 \geq \gamma_2 \geq \cdots \geq \gamma_k. With this, the singular values for P are:
:\sigma_i = \begin{cases} \sqrt{1 + \gamma_i^2} & 1 \leq i \leq k \\ 0 & \text{otherwise} \end{cases}
and the singular values for I - P are
:\sigma_i = \begin{cases} \sqrt{1 + \gamma_i^2} & 1 \leq i \leq k \\ 1 & k + 1 \leq i \leq n - k \\ 0 & \text{otherwise.} \end{cases}
This implies that the largest singular values of P and I - P are equal, and thus that the matrix norms of the oblique projections are the same. However, the condition numbers satisfy the relation
:\kappa(I - P) = \frac{\sigma_1}{1} \geq \frac{\sigma_1}{\sigma_k} = \kappa(P),
and are therefore not necessarily equal.
Finding projection with an inner product
Let V be a vector space (in this case a plane) spanned by orthogonal vectors \mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_p. Let \mathbf{y} be a vector. One can define a projection of \mathbf{y} onto V as
:\operatorname{proj}_V \mathbf{y} = \frac{\mathbf{y} \cdot \mathbf{u}^i}{\mathbf{u}^i \cdot \mathbf{u}^i} \mathbf{u}^i
where repeated indices are summed over (Einstein sum notation). The vector \mathbf{y} can be written as an orthogonal sum such that \mathbf{y} = \operatorname{proj}_V \mathbf{y} + \mathbf{z}. \operatorname{proj}_V \mathbf{y} is sometimes denoted as \hat{\mathbf{y}}. There is a theorem in linear algebra that states that the norm of this \mathbf{z} is the smallest distance (the ''orthogonal distance'') from \mathbf{y} to V, a fact commonly used in areas such as machine learning.
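The summation formula above can be sketched directly; the orthogonal spanning vectors and the target vector below are arbitrary illustrative choices:

```python
import numpy as np

# Orthogonal (not necessarily unit) vectors spanning a plane in R^3:
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
y = np.array([2.0, 3.0, 4.0])

# proj_V y = sum_i (y . u_i / u_i . u_i) u_i
y_hat = sum((y @ u) / (u @ u) * u for u in (u1, u2))

z = y - y_hat     # the residual, orthogonal to the plane
```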
Canonical forms
Any projection P = P^2 on a vector space of dimension d over a field is a diagonalizable matrix, since its minimal polynomial divides x^2 - x, which splits into distinct linear factors. Thus there exists a basis in which P has the form
:P = I_r \oplus 0_{d-r}
where r is the rank of P. Here I_r is the identity matrix of size r, and 0_{d-r} is the zero matrix of size d - r. If the vector space is complex and equipped with an inner product, then there is an ''orthonormal'' basis in which the matrix of ''P'' is
:P = \begin{pmatrix} 1 & \sigma_1 \\ 0 & 0 \end{pmatrix} \oplus \cdots \oplus \begin{pmatrix} 1 & \sigma_k \\ 0 & 0 \end{pmatrix} \oplus I_m \oplus 0_s
where \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_k > 0. The integers k, s, m and the real numbers \sigma_i are uniquely determined. Note that 2k + s + m = d. The factor I_m \oplus 0_s corresponds to the maximal invariant subspace on which P acts as an ''orthogonal'' projection (so that ''P'' itself is orthogonal if and only if k = 0) and the \sigma_i-blocks correspond to the ''oblique'' components.
Projections on normed vector spaces
When the underlying vector space X is a (not necessarily finite-dimensional) normed vector space, analytic questions, irrelevant in the finite-dimensional case, need to be considered. Assume now X is a Banach space.
Many of the algebraic results discussed above survive the passage to this context. A given direct sum decomposition of X into complementary subspaces still specifies a projection, and vice versa. If X is the direct sum X = U \oplus V, then the operator defined by P(u + v) = u is still a projection with range U and kernel V. It is also clear that P^2 = P. Conversely, if P is projection on X, i.e. P^2 = P, then it is easily verified that (1 - P)^2 = (1 - P). In other words, 1 - P is also a projection. The relation P^2 = P implies 1 = P + (1 - P), and X is the direct sum \operatorname{ran}(P) \oplus \operatorname{ran}(1 - P).
However, in contrast to the finite-dimensional case, projections need not be continuous in general. If a subspace U of X is not closed in the norm topology, then the projection onto U is not continuous. In other words, the range of a continuous projection P must be a closed subspace. Furthermore, the kernel of a continuous projection (in fact, a continuous linear operator in general) is closed. Thus a ''continuous'' projection P gives a decomposition of X into two complementary ''closed'' subspaces: X = \operatorname{ran}(P) \oplus \ker(P) = \ker(1 - P) \oplus \ker(P).
The converse holds also, with an additional assumption. Suppose U is a closed subspace of X. If there exists a closed subspace V such that X = U \oplus V, then the projection P with range U and kernel V is continuous. This follows from the closed graph theorem. Suppose x_n \to x and Px_n \to y. One needs to show that Px = y. Since U is closed and (Px_n) \subseteq U, y lies in U, i.e. Py = y. Also, x_n - Px_n = (I - P)x_n \to x - y. Because V is closed and ((I - P)x_n) \subseteq V, we have x - y \in V, i.e. P(x - y) = Px - Py = Px - y = 0, which proves the claim.
The above argument makes use of the assumption that both U and V are closed. In general, given a closed subspace U, there need not exist a complementary closed subspace V, although for Hilbert spaces this can always be done by taking the orthogonal complement. For Banach spaces, a one-dimensional subspace always has a closed complementary subspace. This is an immediate consequence of the Hahn–Banach theorem. Let U be the linear span of u. By Hahn–Banach, there exists a bounded linear functional \varphi such that \varphi(u) = 1. The operator P(x) = \varphi(x) u satisfies P^2 = P, i.e. it is a projection. Boundedness of \varphi implies continuity of P, and therefore \ker(P) = \operatorname{ran}(I - P) is a closed complementary subspace of U.
Applications and further considerations
Projections (orthogonal and otherwise) play a major role in algorithms for certain linear algebra problems:
* QR decomposition (see Householder transformation and Gram–Schmidt decomposition);
* Singular value decomposition
* Reduction to Hessenberg form (the first step in many eigenvalue algorithms)
* Linear regression
* Projective elements of matrix algebras are used in the construction of certain K-groups in operator K-theory
As stated above, projections are a special case of idempotents. Analytically, orthogonal projections are non-commutative generalizations of characteristic functions. Idempotents are used in classifying, for instance, semisimple algebras, while measure theory begins with considering characteristic functions of measurable sets. Therefore, as one can imagine, projections are very often encountered in the context of operator algebras. In particular, a von Neumann algebra is generated by its complete lattice of projections.
Generalizations
More generally, given a map between normed vector spaces T : V \to W, one can analogously ask for this map to be an isometry on the orthogonal complement of the kernel: that (\ker T)^\perp \to W be an isometry (compare partial isometry); in particular it must be onto. The case of an orthogonal projection is when ''W'' is a subspace of ''V''. In Riemannian geometry, this is used in the definition of a Riemannian submersion.
See also
* Centering matrix, which is an example of a projection matrix.
* Dykstra's projection algorithm to compute the projection onto an intersection of sets
* Invariant subspace
* Least-squares spectral analysis
* Orthogonalization
* Properties of trace
External links
* From MIT OpenCourseWare
* By Pavel Grinfeld
* Planar Geometric Projections Tutorial – a simple-to-follow tutorial explaining the different types of planar geometric projections.