Hermitian Operators

In mathematics, a self-adjoint operator on a complex vector space ''V'' with inner product \langle\cdot,\cdot\rangle is a linear map ''A'' (from ''V'' to itself) that is its own adjoint. That is, \langle Ax,y \rangle = \langle x,Ay \rangle for all x, y \in ''V''. If ''V'' is finite-dimensional with a given orthonormal basis, this is equivalent to the condition that the matrix of ''A'' is a Hermitian matrix, i.e., equal to its conjugate transpose ''A''^*. By the finite-dimensional spectral theorem, ''V'' has an orthonormal basis such that the matrix of ''A'' relative to this basis is a diagonal matrix with real entries. This article deals with applying generalizations of this concept to operators on Hilbert spaces of arbitrary dimension.

Self-adjoint operators are used in functional analysis and quantum mechanics. In quantum mechanics their importance lies in the Dirac–von Neumann formulation of quantum mechanics, in which physical observables such as position, momentum, angular momentum and spin are represented by self-adjoint operators on a Hilbert space. Of particular significance is the Hamiltonian operator \hat{H} defined by
: \hat{H} \psi = -\frac{\hbar^2}{2m} \nabla^2 \psi + V \psi,
which as an observable corresponds to the total energy of a particle of mass ''m'' in a real potential field ''V''.

Differential operators are an important class of unbounded operators. The structure of self-adjoint operators on infinite-dimensional Hilbert spaces essentially resembles the finite-dimensional case: operators are self-adjoint if and only if they are unitarily equivalent to real-valued multiplication operators. With suitable modifications, this result can be extended to possibly unbounded operators on infinite-dimensional spaces. Since an everywhere-defined self-adjoint operator is necessarily bounded, one needs to be more attentive to the domain issue in the unbounded case. This is explained below in more detail.
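
The finite-dimensional statement can be illustrated numerically. The following sketch (an added illustration, not part of the original text) uses NumPy to check, for a randomly generated Hermitian matrix, that the eigenvalues are real and that an orthonormal eigenbasis diagonalizes it; the matrix size and random seed are arbitrary choices.

```python
import numpy as np

# A random complex matrix, symmetrized to make it Hermitian: A equals its conjugate transpose.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

w, U = np.linalg.eigh(A)           # spectral decomposition A = U diag(w) U^*
w_general = np.linalg.eigvals(A)   # generic (complex) eigenvalue solver, for comparison

print(np.allclose(A, A.conj().T))                   # True: A is Hermitian
print(np.allclose(w_general.imag, 0))               # True: all eigenvalues are real
print(np.allclose(U.conj().T @ U, np.eye(4)))       # True: the eigenbasis is orthonormal
print(np.allclose(U @ np.diag(w) @ U.conj().T, A))  # True: A is diagonal in this basis
```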


Definitions

Let H be a Hilbert space and A an unbounded (i.e. not necessarily bounded) linear operator with a dense domain \operatorname{dom}A \subseteq H. This condition holds automatically when H is finite-dimensional, since \operatorname{dom}A = H for every linear operator on a finite-dimensional space.

The graph of an (arbitrary) operator A is the set G(A) = \{(x, Ax) : x \in \operatorname{dom}A\}. An operator B is said to extend A if G(A) \subseteq G(B). This is written as A \subseteq B.

Let the inner product \langle \cdot, \cdot\rangle be conjugate linear in the ''second'' argument. The adjoint operator A^* acts on the subspace \operatorname{dom}A^* \subseteq H consisting of the elements y such that
: \langle Ax,y \rangle = \langle x,A^*y \rangle, \quad \forall x \in \operatorname{dom}A.
The densely defined operator A is called symmetric (or Hermitian) if A \subseteq A^*, i.e., if \operatorname{dom}A \subseteq \operatorname{dom}A^* and Ax = A^*x for all x \in \operatorname{dom}A. Equivalently, A is symmetric if and only if
: \langle Ax , y \rangle = \langle x , Ay \rangle, \quad \forall x,y\in \operatorname{dom}A.
Since \operatorname{dom}A^* \supseteq \operatorname{dom}A is dense in H, symmetric operators are always closable (i.e. the closure of G(A) is the graph of an operator). If A^* is a closed extension of A, the smallest closed extension A^{**} of A must be contained in A^*. Hence,
: A \subseteq A^{**} \subseteq A^*
for symmetric operators and
: A = A^{**} \subseteq A^*
for closed symmetric operators.

The densely defined operator A is called self-adjoint if A = A^*, that is, if and only if A is symmetric and \operatorname{dom}A = \operatorname{dom}A^*. Equivalently, a closed symmetric operator A is self-adjoint if and only if A^* is symmetric. If A is self-adjoint, then \left\langle x, A x \right\rangle is real for all x \in \operatorname{dom}A, i.e.,
: \langle x, Ax\rangle = \overline{\langle Ax, x\rangle} = \overline{\langle x, Ax\rangle} \in \mathbb{R}, \quad \forall x \in \operatorname{dom}A.

A symmetric operator A is said to be essentially self-adjoint if the closure of A is self-adjoint. Equivalently, A is essentially self-adjoint if it has a ''unique'' self-adjoint extension. In practical terms, having an essentially self-adjoint operator is almost as good as having a self-adjoint operator, since we merely need to take the closure to obtain a self-adjoint operator. In physics, the term Hermitian refers to symmetric and self-adjoint operators alike; the subtle difference between the two is generally overlooked.
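
As a finite-dimensional illustration of the defining identity (added here; the domain issues discussed above have no analogue for matrices), the adjoint of a matrix with respect to an inner product that is conjugate-linear in the second argument is its conjugate transpose. The matrix and vectors below are arbitrary random choices.

```python
import numpy as np

def inner(x, y):
    # Inner product that is conjugate-linear in the *second* argument, as in the text.
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A_star = A.conj().T        # in finite dimensions the adjoint is the conjugate transpose
x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

print(np.isclose(inner(A @ x, y), inner(x, A_star @ y)))   # True: <Ax, y> = <x, A*y>
```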


Bounded self-adjoint operators

Let H be a Hilbert space and A:\operatorname{dom}(A) \to H a symmetric operator. According to the Hellinger–Toeplitz theorem, if \operatorname{dom}(A) = H then A is necessarily bounded. A bounded operator A : H \to H is self-adjoint if
: \langle Ax, y\rangle = \langle x, Ay\rangle, \quad \forall x,y\in H.
Every bounded operator T:H\to H can be written in the complex form T = A + i B, where A:H\to H and B:H\to H are bounded self-adjoint operators. Alternatively, every positive bounded linear operator A:H \to H is self-adjoint if the Hilbert space H is ''complex''.
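
The decomposition T = A + iB can be realized by the standard formulas A = (T + T^*)/2 and B = (T - T^*)/(2i); these explicit formulas are an assumption consistent with, but not spelled out in, the statement above. A short NumPy check:

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))   # an arbitrary bounded operator

# Cartesian decomposition T = A + iB with A, B self-adjoint.
A = (T + T.conj().T) / 2
B = (T - T.conj().T) / (2j)

print(np.allclose(A, A.conj().T))   # True: A is self-adjoint
print(np.allclose(B, B.conj().T))   # True: B is self-adjoint
print(np.allclose(T, A + 1j * B))   # True: T = A + iB
```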


Properties

A bounded self-adjoint operator A : H \to H defined on \operatorname{dom}(A) = H has the following properties:
* A : H \to \operatorname{ran}A \subseteq H is invertible if the image of A is dense in H.
* The operator norm is given by \left\| A \right\| = \sup \left\{ |\langle Ax, x \rangle| : \|x\| = 1 \right\}.
* If \lambda is an eigenvalue of A, then |\lambda| \leq \|A\|; the eigenvalues are real, and eigenvectors corresponding to distinct eigenvalues are orthogonal.
Bounded self-adjoint operators do not necessarily have an eigenvalue. If, however, A is a compact self-adjoint operator, then it always has an eigenvalue \lambda with |\lambda| = \|A\| and a corresponding normalized eigenvector.
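
A rough numerical illustration of the norm formula and the eigenvalue bound (added here, with an arbitrary random Hermitian matrix standing in for a bounded self-adjoint operator; the supremum is only estimated by sampling unit vectors):

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = (M + M.conj().T) / 2                      # Hermitian, hence bounded self-adjoint on C^5

eigvals = np.linalg.eigvalsh(A)
op_norm = np.linalg.norm(A, 2)                # operator (spectral) norm

# Estimate sup_{||x|| = 1} |<Ax, x>| by sampling random unit vectors.
xs = rng.normal(size=(10000, 5)) + 1j * rng.normal(size=(10000, 5))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)
quad = np.abs(np.einsum('ni,ij,nj->n', xs.conj(), A, xs))

print(np.isclose(op_norm, np.max(np.abs(eigvals))))   # True: ||A|| equals the largest |eigenvalue|
print(bool(quad.max() <= op_norm + 1e-12))            # True: |<Ax, x>| <= ||A|| on unit vectors
```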


Spectrum of self-adjoint operators

Let A:\operatorname{dom}A \to H be an unbounded operator. The resolvent set (or regular set) of A is defined as
: \rho(A) = \left\{ \lambda \in \Complex : A - \lambda I \text{ is a bijection from } \operatorname{dom}A \text{ onto } H \text{ with bounded inverse} \right\}.
If A is bounded, the definition reduces to A - \lambda I being bijective on H. The spectrum of A is defined as the complement
: \sigma(A) = \Complex \setminus \rho(A).
In finite dimensions, \sigma(A) \subseteq \Complex consists exclusively of (complex) eigenvalues. The spectrum of a self-adjoint operator is always real (i.e. \sigma(A)\subseteq \mathbb{R}), though non-self-adjoint operators with real spectrum exist as well. For bounded (normal) operators, however, the spectrum is real ''if and only if'' the operator is self-adjoint. This implies, for example, that a non-self-adjoint operator with real spectrum is necessarily unbounded.

As a preliminary, define S = \{ x \in \operatorname{dom}A : \Vert x \Vert = 1 \}, \textstyle m=\inf_{x \in S} \langle Ax,x \rangle and \textstyle M=\sup_{x \in S} \langle Ax,x \rangle, with m, M \in \mathbb{R} \cup \{\pm\infty\}. Then, for every \lambda \in \Complex and every x \in \operatorname{dom}A,
: \Vert (A - \lambda) x\Vert \geq d(\lambda)\cdot \Vert x\Vert,
where \textstyle d(\lambda) = \inf_{r \in [m, M]} |r - \lambda|. Indeed, let x \in \operatorname{dom}A \setminus \{0\}. By the Cauchy–Schwarz inequality,
: \Vert (A - \lambda) x\Vert \geq \frac{|\langle (A - \lambda)x, x\rangle|}{\Vert x \Vert} = \left| \left\langle A\frac{x}{\Vert x \Vert},\frac{x}{\Vert x \Vert}\right\rangle - \lambda \right| \cdot \Vert x\Vert \geq d(\lambda)\cdot \Vert x\Vert.
If \lambda \notin [m, M], then d(\lambda) > 0, and A - \lambda I is called ''bounded below''.
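
The lower bound \Vert (A - \lambda) x\Vert \geq d(\lambda)\Vert x\Vert can be checked numerically for a Hermitian matrix, for which [m, M] is the interval between the smallest and largest eigenvalue; the matrix, the point \lambda and the test vector below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
A = (M + M.conj().T) / 2
w = np.linalg.eigvalsh(A)
m_, M_ = w.min(), w.max()                      # here [m, M] = [lambda_min, lambda_max]

lam = 1.5 + 2.0j                               # an arbitrary complex number
dx = max(m_ - lam.real, lam.real - M_, 0.0)    # distance from Re(lam) to [m, M]
d_lam = np.hypot(dx, lam.imag)                 # d(lam) = inf_{r in [m, M]} |r - lam|

x = rng.normal(size=6) + 1j * rng.normal(size=6)
lhs = np.linalg.norm((A - lam * np.eye(6)) @ x)
print(bool(lhs >= d_lam * np.linalg.norm(x) - 1e-12))   # True: ||(A - lam)x|| >= d(lam) ||x||
```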


Spectral theorem

In the physics literature, the spectral theorem is often stated by saying that a self-adjoint operator has an orthonormal basis of eigenvectors. Physicists are well aware, however, of the phenomenon of "continuous spectrum"; thus, when they speak of an "orthonormal basis" they mean either an orthonormal basis in the classic sense ''or'' some continuous analog thereof. In the case of the momentum operator P = -i\frac{d}{dx}, for example, physicists would say that the eigenvectors are the functions f_p(x) := e^{ipx}, which are clearly not in the Hilbert space L^2(\mathbb{R}). (Physicists would say that the eigenvectors are "non-normalizable.") Physicists would then go on to say that these "generalized eigenvectors" form an "orthonormal basis in the continuous sense" for L^2(\mathbb{R}), after replacing the usual Kronecker delta \delta_{p,p'} by a Dirac delta function \delta\left(p - p'\right).

Although these statements may seem disconcerting to mathematicians, they can be made rigorous by use of the Fourier transform, which allows a general L^2 function to be expressed as a "superposition" (i.e., integral) of the functions e^{ipx}, even though these functions are not in L^2. The Fourier transform "diagonalizes" the momentum operator; that is, it converts it into the operator of multiplication by p, where p is the variable of the Fourier transform. The spectral theorem in general can be expressed similarly as the possibility of "diagonalizing" an operator by showing it is unitarily equivalent to a multiplication operator. Other versions of the spectral theorem are similarly intended to capture the idea that a self-adjoint operator can have "eigenvectors" that are not actually in the Hilbert space in question.
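
A small numerical sketch (added here) of how the Fourier transform converts -i\,d/dx into multiplication by the frequency variable, using NumPy's FFT on a periodic grid; the grid size and the test function are arbitrary, and the periodic setting is only a stand-in for the Fourier transform on the whole line.

```python
import numpy as np

# Periodic grid on [0, 2*pi): the discrete analogue of P = -i d/dx is diagonalized by the FFT.
N = 256
L = 2 * np.pi
x = np.arange(N) * (L / N)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # integer angular frequencies ..., -2, -1, 0, 1, 2, ...

f = np.exp(np.sin(x))                          # a smooth periodic test function

# Apply P in "momentum space": Fourier transform, multiply by k, transform back.
Pf_spectral = np.fft.ifft(k * np.fft.fft(f))

# Compare with the exact result P f = -i f'(x) = -i cos(x) exp(sin(x)).
Pf_exact = -1j * np.cos(x) * np.exp(np.sin(x))
print(np.allclose(Pf_spectral, Pf_exact))      # True (to spectral accuracy)
```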


Multiplication operator form of the spectral theorem

Firstly, let (X, \Sigma, \mu) be a σ-finite measure space and h : X \to \mathbb{R} a measurable function on X. Then the operator T_h : \operatorname{dom}T_h \to L^2(X,\mu), defined by
: T_h \psi(x) = h(x)\psi(x), \quad \forall \psi \in \operatorname{dom}T_h,
where
: \operatorname{dom}T_h := \left\{ \psi \in L^2(X,\mu) : h\psi \in L^2(X,\mu) \right\},
is called a multiplication operator. Any multiplication operator is a self-adjoint operator.

Secondly, two operators A and B with dense domains \operatorname{dom}A \subseteq H_1 and \operatorname{dom}B \subseteq H_2 in Hilbert spaces H_1 and H_2, respectively, are unitarily equivalent if and only if there is a unitary transformation U: H_1 \to H_2 such that:
* U(\operatorname{dom}A) = \operatorname{dom}B,
* U A U^{-1} \xi = B \xi, \quad \forall \xi \in \operatorname{dom}B.
If unitarily equivalent A and B are bounded, then \|A\| = \|B\|; if A is self-adjoint, then so is B.

The spectral theorem holds for both bounded and unbounded self-adjoint operators. Proof of the latter follows by reduction to the spectral theorem for unitary operators. We might note that if T is multiplication by h, then the spectrum of T is just the essential range of h. More complete versions of the spectral theorem exist as well that involve direct integrals and carry with them the notion of "generalized eigenvectors".
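
In finite dimensions the multiplication-operator form is simply diagonalization: conjugating a Hermitian matrix by the unitary matrix of its eigenvectors produces multiplication by a real function on a finite measure space (here a four-point space with counting measure). An added sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

# Diagonalize: U is unitary, and U^* A U is multiplication by the real function h(j) = w[j]
# on the four-point measure space {0, 1, 2, 3} with counting measure.
w, U = np.linalg.eigh(A)
T_h = np.diag(w)                                  # the multiplication operator

print(np.allclose(U.conj().T @ A @ U, T_h))       # True: A is unitarily equivalent to T_h
print(np.allclose(T_h, T_h.conj().T))             # True: multiplication by a real h is self-adjoint
```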


Functional calculus

One application of the spectral theorem is to define a functional calculus. That is, if f is a function on the real line and T is a self-adjoint operator, we wish to define the operator f(T). The spectral theorem shows that if T is represented as the operator of multiplication by h, then f(T) is the operator of multiplication by the composition f \circ h.

One example from quantum mechanics is the case where T is the Hamiltonian operator \hat{H}. If \hat{H} has a true orthonormal basis of eigenvectors e_j with eigenvalues \lambda_j, then f(\hat{H}) := e^{-it\hat{H}/\hbar} can be defined as the unique bounded operator with eigenvalues f(\lambda_j) := e^{-it\lambda_j/\hbar} such that
: f(\hat{H}) e_j = f(\lambda_j)e_j.
The goal of functional calculus is to extend this idea to the case where T has continuous spectrum (i.e. where T has no normalizable eigenvectors).

It has been customary to introduce the following notation
: \operatorname{E}(\lambda) = \mathbf{1}_{(-\infty, \lambda]} (T),
where \mathbf{1}_{(-\infty, \lambda]} is the indicator function of the interval (-\infty, \lambda]. The family of projection operators E(\lambda) is called the resolution of the identity for ''T''. Moreover, the following Stieltjes integral representation for ''T'' can be proved:
: T = \int_{-\infty}^{+\infty} \lambda \, d \operatorname{E}(\lambda).
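
For an operator with a true orthonormal eigenbasis, the functional calculus reduces to applying f to the eigenvalues. The sketch below (added; NumPy/SciPy, with \hbar set to 1 and an arbitrary random Hermitian matrix standing in for the Hamiltonian) compares f(H) = e^{-itH} built from the eigenbasis with SciPy's matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (M + M.conj().T) / 2                       # a stand-in for a Hamiltonian with discrete spectrum
t = 0.7                                        # arbitrary time; hbar set to 1

# Functional calculus through the eigenbasis: f(H) = U diag(f(lambda_j)) U^*
lam, U = np.linalg.eigh(H)
f_H = U @ np.diag(np.exp(-1j * t * lam)) @ U.conj().T

print(np.allclose(f_H, expm(-1j * t * H)))     # True: agrees with the matrix exponential
```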


Formulation in the physics literature

In quantum mechanics, Dirac notation is used as a combined expression for both the spectral theorem and the Borel functional calculus. That is, if ''H'' is self-adjoint and ''f'' is a Borel function,
: f(H) = \int dE \, \left|\Psi_E \right\rangle f(E) \left\langle \Psi_E \right|
with
: H \left|\Psi_E\right\rangle = E \left|\Psi_E\right\rangle,
where the integral runs over the whole spectrum of ''H''. The notation suggests that ''H'' is diagonalized by the eigenvectors \Psi_E. Such a notation is purely formal. The resolution of the identity (sometimes called a projection-valued measure) formally resembles the rank-1 projections \left|\Psi_E\right\rangle \left\langle\Psi_E\right|. In the Dirac notation, (projective) measurements are described via eigenvalues and eigenstates, both purely formal objects. As one would expect, this does not survive passage to the resolution of the identity. In the latter formulation, measurements are described using the spectral measure of |\Psi\rangle, if the system is prepared in |\Psi\rangle prior to the measurement. Alternatively, if one would like to preserve the notion of eigenstates and make it rigorous, rather than merely formal, one can replace the state space by a suitable rigged Hilbert space.

If f = 1, the theorem is referred to as the resolution of unity:
: I = \int dE \, \left|\Psi_E\right\rangle \left\langle\Psi_E\right|.
In the case H_\text{eff} = H - i\Gamma is the sum of a Hermitian ''H'' and a skew-Hermitian (see skew-Hermitian matrix) operator -i\Gamma, one defines the biorthogonal basis set
: H^*_\text{eff} \left|\Psi_E^*\right\rangle = E^* \left|\Psi_E^*\right\rangle
and writes the spectral theorem as:
: f\left(H_\text{eff}\right) = \int dE \, \left|\Psi_E\right\rangle f(E) \left\langle\Psi_E^*\right|.
(See ''Feshbach–Fano partitioning'' for the context where such operators appear in scattering theory.)


Formulation for symmetric operators

The spectral theorem applies only to self-adjoint operators, and not in general to symmetric operators. Nevertheless, we can at this point give a simple example of a symmetric (specifically, an essentially self-adjoint) operator that has an orthonormal basis of eigenvectors. Consider the complex Hilbert space L^2[0,1] and the differential operator
: A = -\frac{d^2}{dx^2}
with \operatorname{dom}(A) consisting of all complex-valued infinitely differentiable functions ''f'' on [0, 1] satisfying the boundary conditions
: f(0) = f(1) = 0.
Then integration by parts of the inner product shows that ''A'' is symmetric. (The reader is invited to perform integration by parts twice and verify that the given boundary conditions for \operatorname{dom}(A) ensure that the boundary terms in the integration by parts vanish.) The eigenfunctions of ''A'' are the sinusoids
: f_n(x) = \sin(n \pi x), \qquad n= 1, 2, \ldots,
with the real eigenvalues n^2\pi^2; the well-known orthogonality of the sine functions follows as a consequence of ''A'' being symmetric.

The operator ''A'' can be seen to have a compact inverse, meaning that the corresponding differential equation ''Af'' = ''g'' is solved by some integral (and therefore compact) operator ''G''. The compact symmetric operator ''G'' then has a countable family of eigenvectors which are complete in L^2[0,1]. The same can then be said for ''A''.
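
A finite-difference discretization (added illustration; the grid size is arbitrary and the discrete matrix only approximates A) reproduces the eigenvalues n^2\pi^2 and the sine eigenfunctions:

```python
import numpy as np

# Finite-difference sketch of A = -d^2/dx^2 on [0, 1] with f(0) = f(1) = 0.
N = 500                                         # interior grid points
h = 1.0 / (N + 1)
x = np.arange(1, N + 1) * h

# Standard symmetric second-difference matrix (a discrete symmetric operator).
main = 2.0 * np.ones(N) / h**2
off = -np.ones(N - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

w, V = np.linalg.eigh(A)
print(w[:5] / np.pi**2)            # approximately 1, 4, 9, 16, 25  (eigenvalues n^2 pi^2)

# The lowest eigenvector matches sin(pi x) up to normalization and sign.
v1 = V[:, 0] / np.linalg.norm(V[:, 0], np.inf)
s1 = np.sin(np.pi * x) / np.linalg.norm(np.sin(np.pi * x), np.inf)
print(np.allclose(np.abs(v1), s1, atol=1e-6))   # True
```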


Pure point spectrum

A self-adjoint operator ''A'' on ''H'' has pure point spectrum if and only if ''H'' has an orthonormal basis \{e_i\}_{i \in I} consisting of eigenvectors for ''A''.

Example. The Hamiltonian for the harmonic oscillator has a quadratic potential ''V'', that is,
: -\Delta + |x|^2.
This Hamiltonian has pure point spectrum; this is typical for bound-state Hamiltonians in quantum mechanics. As was pointed out in a previous example, a sufficient condition that an unbounded symmetric operator has eigenvectors which form a Hilbert space basis is that it has a compact inverse.
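
A crude numerical illustration (added here): truncating the harmonic-oscillator Hamiltonian -d^2/dx^2 + x^2 to a large box and discretizing by finite differences yields the expected discrete spectrum 1, 3, 5, … at the bottom; the box size and grid spacing are arbitrary choices.

```python
import numpy as np

# Finite-difference sketch of -d^2/dx^2 + x^2 on a large box with Dirichlet walls.
N, Lbox = 1000, 20.0
x = np.linspace(-Lbox / 2, Lbox / 2, N)
h = x[1] - x[0]

main = 2.0 / h**2 + x**2
off = -np.ones(N - 1) / h**2
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

w = np.linalg.eigvalsh(H)
print(w[:5])    # approximately 1, 3, 5, 7, 9: a pure point (discrete) spectrum at the bottom
```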


Symmetric vs self-adjoint operators

Although the distinction between a symmetric operator and an (essentially) self-adjoint operator is subtle, it is important, since self-adjointness is the hypothesis of the spectral theorem. Here we discuss some concrete examples of the distinction.


Boundary conditions

In the case where the Hilbert space is a space of functions on a bounded domain, these distinctions have to do with a familiar issue in quantum physics: one cannot define an operator—such as the momentum or Hamiltonian operator—on a bounded domain without specifying ''boundary conditions''. In mathematical terms, choosing the boundary conditions amounts to choosing an appropriate domain for the operator. Consider, for example, the Hilbert space L^2([0,1]) (the space of square-integrable functions on the interval [0,1]). Let us define a momentum operator ''A'' on this space by the usual formula, setting the Planck constant to 1:
: Af = -i\frac{df}{dx}.
We must now specify a domain for ''A'', which amounts to choosing boundary conditions. If we choose
: \operatorname{dom}(A) = \left\{ \text{smooth functions on } [0,1] \right\},
then ''A'' is not symmetric (because the boundary terms in the integration by parts do not vanish). If we choose
: \operatorname{dom}(A) = \left\{ \text{smooth functions } f \text{ on } [0,1] : f(0) = f(1) = 0 \right\},
then using integration by parts, one can easily verify that ''A'' is symmetric. This operator is not essentially self-adjoint, however, basically because we have specified too many boundary conditions on the domain of ''A'', which makes the domain of the adjoint too big (see also the example below).

Specifically, with the above choice of domain for ''A'', the domain of the closure A^{\mathrm{cl}} of ''A'' is
: \operatorname{dom}\left(A^{\mathrm{cl}}\right) = \left\{ f \text{ absolutely continuous, } f' \in L^2([0,1]),\ f(0) = f(1) = 0 \right\},
whereas the domain of the adjoint A^* of ''A'' is
: \operatorname{dom}\left(A^*\right) = \left\{ f \text{ absolutely continuous, } f' \in L^2([0,1]) \right\}.
That is to say, the domain of the closure has the same boundary conditions as the domain of ''A'' itself, just a less stringent smoothness assumption. Meanwhile, since there are "too many" boundary conditions on ''A'', there are "too few" (actually, none at all in this case) for A^*. If we compute \langle g, Af\rangle for f \in \operatorname{dom}(A) using integration by parts, then since f vanishes at both ends of the interval, no boundary conditions on g are needed to cancel out the boundary terms in the integration by parts. Thus, any sufficiently smooth function g is in the domain of A^*, with A^*g = -i\,dg/dx.

Since the domain of the closure and the domain of the adjoint do not agree, ''A'' is not essentially self-adjoint. After all, a general result says that the domain of the adjoint of A^{\mathrm{cl}} is the same as the domain of the adjoint of ''A''. Thus, in this case, the domain of the adjoint of A^{\mathrm{cl}} is bigger than the domain of A^{\mathrm{cl}} itself, showing that A^{\mathrm{cl}} is not self-adjoint, which by definition means that ''A'' is not essentially self-adjoint.

The problem with the preceding example is that we imposed too many boundary conditions on the domain of ''A''. A better choice of domain would be to use periodic boundary conditions:
: \operatorname{dom}(A) = \left\{ \text{smooth functions } f \text{ on } [0,1] : f(0) = f(1) \right\}.
With this domain, ''A'' is essentially self-adjoint.

In this case, we can understand the implications of the domain issues for the spectral theorem. If we use the first choice of domain (with no boundary conditions), all functions f_\beta(x) = e^{\beta x} for \beta \in \mathbb{C} are eigenvectors, with eigenvalues -i \beta, and so the spectrum is the whole complex plane. If we use the second choice of domain (with Dirichlet boundary conditions), ''A'' has no eigenvectors at all. If we use the third choice of domain (with periodic boundary conditions), we can find an orthonormal basis of eigenvectors for ''A'', the functions f_n(x) := e^{2\pi i n x}. Thus, in this case finding a domain such that ''A'' is self-adjoint is a compromise: the domain has to be small enough so that ''A'' is symmetric, but large enough so that \operatorname{dom}(A^*) = \operatorname{dom}(A).
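
The effect of periodic boundary conditions can be imitated on a periodic grid (an added sketch; the central-difference matrix is only an approximation of -i\,d/dx): the discrete operator is Hermitian, and the discrete Fourier modes e^{2\pi i n x} are eigenvectors with eigenvalues close to 2\pi n.

```python
import numpy as np

# Central-difference sketch of A = -i d/dx on [0, 1] with periodic boundary conditions.
N = 64
h = 1.0 / N
x = np.arange(N) * h

D = np.zeros((N, N), dtype=complex)
for j in range(N):
    D[j, (j + 1) % N] = 1.0 / (2 * h)
    D[j, (j - 1) % N] = -1.0 / (2 * h)
A = -1j * D                                    # -i times a real antisymmetric matrix is Hermitian

print(np.allclose(A, A.conj().T))              # True: periodic BCs make the operator symmetric

# The discrete Fourier mode e^{2 pi i n x} is an eigenvector; eigenvalue sin(2 pi n h)/h ~ 2 pi n.
n = 3
f_n = np.exp(2j * np.pi * n * x)
print(np.allclose(A @ f_n, (np.sin(2 * np.pi * n * h) / h) * f_n))   # True
```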


Schrödinger operators with singular potentials

A more subtle example of the distinction between symmetric and (essentially) self-adjoint operators comes from Schrödinger operators in quantum mechanics. If the potential energy is singular—particularly if the potential is unbounded below—the associated Schrödinger operator may fail to be essentially self-adjoint. In one dimension, for example, the operator
: \hat{H} := \frac{P^2}{2m} - X^4
is not essentially self-adjoint on the space of smooth, rapidly decaying functions. In this case, the failure of essential self-adjointness reflects a pathology in the underlying classical system: a classical particle with a -x^4 potential escapes to infinity in finite time. This operator does not have a ''unique'' self-adjoint extension, but it does admit self-adjoint extensions obtained by specifying "boundary conditions at infinity". (Since \hat{H} is a real operator, it commutes with complex conjugation. Thus, the deficiency indices are automatically equal, which is the condition for having a self-adjoint extension.)

In this case, if we initially define \hat{H} on the space of smooth, rapidly decaying functions, the adjoint will be "the same" operator (i.e., given by the same formula) but on the largest possible domain, namely
: \operatorname{dom}\left(\hat{H}^*\right) = \left\{ f \in L^2(\mathbb{R}) : \hat{H}f \in L^2(\mathbb{R}) \right\},
where \hat{H}f is interpreted in the sense of distributions. It is then possible to show that \hat{H}^* is not a symmetric operator, which certainly implies that \hat{H} is not essentially self-adjoint. Indeed, \hat{H}^* has eigenvectors with pure imaginary eigenvalues, which is impossible for a symmetric operator. This strange occurrence is possible because of a cancellation between the two terms in \hat{H}^*: there are functions f in the domain of \hat{H}^* for which neither d^2 f/dx^2 nor x^4f(x) is separately in L^2(\mathbb{R}), but the combination of them occurring in \hat{H}^* is in L^2(\mathbb{R}). This allows for \hat{H}^* to be nonsymmetric, even though both d^2/dx^2 and X^4 are symmetric operators. This sort of cancellation does not occur if we replace the repelling potential -x^4 with the confining potential x^4.


Non-self-adjoint operators in quantum mechanics

In quantum mechanics, observables correspond to self-adjoint operators. By Stone's theorem on one-parameter unitary groups, self-adjoint operators are precisely the infinitesimal generators of unitary groups of time evolution operators. However, many physical problems are formulated as a time-evolution equation involving differential operators for which the Hamiltonian is only symmetric. In such cases, either the Hamiltonian is essentially self-adjoint, in which case the physical problem has unique solutions, or one attempts to find self-adjoint extensions of the Hamiltonian corresponding to different types of boundary conditions or conditions at infinity.

Example. The one-dimensional Schrödinger operator with the potential V(x) = -(1 + |x|)^\alpha, defined initially on smooth compactly supported functions, is essentially self-adjoint for \alpha \le 2 but not for \alpha > 2. The failure of essential self-adjointness for \alpha > 2 has a counterpart in the classical dynamics of a particle with potential V(x): the classical particle escapes to infinity in finite time.

Example. There is no self-adjoint momentum operator p for a particle moving on a half-line. Nevertheless, the Hamiltonian p^2 of a "free" particle on a half-line has several self-adjoint extensions corresponding to different types of boundary conditions. Physically, these boundary conditions are related to reflections of the particle at the origin.


Examples


A symmetric operator that is not essentially self-adjoint

We first consider the Hilbert space L^2[0,1] and the differential operator
: D: \phi \mapsto \frac{1}{i} \phi'
defined on the space of continuously differentiable complex-valued functions on [0,1] satisfying the boundary conditions
: \phi(0) = \phi(1) = 0.
Then ''D'' is a symmetric operator, as can be shown by integration by parts. The spaces ''N''+, ''N''− (defined below) are given respectively by the distributional solutions to the equations
: \begin{align} -i u' &= i u \\ -i u' &= -i u \end{align}
which are in L^2[0,1]. One can show that each one of these solution spaces is 1-dimensional, generated by the functions x \mapsto e^{-x} and x \mapsto e^{x} respectively. This shows that ''D'' is not essentially self-adjoint, but does have self-adjoint extensions. These self-adjoint extensions are parametrized by the space of unitary mappings N_+ \to N_-, which in this case happens to be the unit circle T.

In this case, the failure of essential self-adjointness is due to an "incorrect" choice of boundary conditions in the definition of the domain of D. Since D is a first-order operator, only one boundary condition is needed to ensure that D is symmetric. If we replaced the boundary conditions given above by the single boundary condition
: \phi(0) = \phi(1),
then ''D'' would still be symmetric and would now, in fact, be essentially self-adjoint. This change of boundary conditions gives one particular essentially self-adjoint extension of ''D''. Other essentially self-adjoint extensions come from imposing boundary conditions of the form \phi(1) = e^{i\theta}\phi(0).

This simple example illustrates a general fact about self-adjoint extensions of symmetric differential operators ''P'' on an open set ''M''. They are determined by the unitary maps between the eigenvalue spaces
: N_\pm = \left\{ u \in L^2(M) : P_\mathrm{dist} u = \pm i u \right\},
where P_\mathrm{dist} is the distributional extension of ''P''.
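
The deficiency equations can be solved symbolically; the short SymPy sketch below (added here) recovers the solutions e^{-x} and e^{x} claimed above.

```python
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.Function('u')

# Deficiency equations -i u' = +i u and -i u' = -i u, i.e. u' = -u and u' = u.
sol_plus = sp.dsolve(sp.Eq(-sp.I * u(x).diff(x), sp.I * u(x)), u(x))
sol_minus = sp.dsolve(sp.Eq(-sp.I * u(x).diff(x), -sp.I * u(x)), u(x))

print(sol_plus)    # u(x) = C1*exp(-x): a one-dimensional solution space
print(sol_minus)   # u(x) = C1*exp(x):  a one-dimensional solution space
```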


Constant-coefficient operators

We next give the example of differential operators with constant coefficients. Let
: P\left(\vec{x}\right) = \sum_\alpha c_\alpha x^\alpha
be a polynomial on R''n'' with ''real'' coefficients, where \alpha ranges over a (finite) set of multi-indices. Thus
: \alpha = (\alpha_1, \alpha_2, \ldots, \alpha_n)
and
: x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n}.
We also use the notation
: D^\alpha = \frac{1}{i^{|\alpha|}} \partial_{x_1}^{\alpha_1}\partial_{x_2}^{\alpha_2} \cdots \partial_{x_n}^{\alpha_n}.
Then the operator ''P''(D) defined on the space of infinitely differentiable functions of compact support on R''n'' by
: P(\operatorname{D}) \phi = \sum_\alpha c_\alpha \operatorname{D}^\alpha \phi
is essentially self-adjoint on L^2(\mathbb{R}^n).

More generally, consider linear differential operators acting on infinitely differentiable complex-valued functions of compact support. If ''M'' is an open subset of R''n'',
: P \phi(x) = \sum_\alpha a_\alpha (x) \left[D^\alpha \phi\right](x),
where a_\alpha are (not necessarily constant) infinitely differentiable functions. ''P'' is a linear operator
: C_0^\infty(M) \to C_0^\infty(M).
Corresponding to ''P'' there is another differential operator, the formal adjoint of ''P'':
: P^\mathrm{form} \phi = \sum_\alpha D^\alpha \left(\overline{a_\alpha} \phi\right).


Spectral multiplicity theory

The multiplication representation of a self-adjoint operator, though extremely useful, is not a canonical representation. This suggests that it is not easy to extract from this representation a criterion to determine when self-adjoint operators ''A'' and ''B'' are unitarily equivalent. The finest-grained representation, which we now discuss, involves spectral multiplicity. This circle of results is called the ''Hahn–Hellinger theory of spectral multiplicity''.


Uniform multiplicity

We first define ''uniform multiplicity'':

Definition. A self-adjoint operator ''A'' has uniform multiplicity ''n'', where ''n'' is such that 1 ≤ ''n'' ≤ ω, if and only if ''A'' is unitarily equivalent to the operator M''f'' of multiplication by the function ''f''(''λ'') = ''λ'' on
: L^2_\mu\left(\mathbb{R}, \mathbf{H}_n\right) = \left\{ \psi : \mathbb{R} \to \mathbf{H}_n \,:\, \psi \text{ measurable and } \int_{\mathbb{R}} \|\psi(\lambda)\|^2 \, d\mu(\lambda) < \infty \right\},
where \mathbf{H}_n is a Hilbert space of dimension ''n'' and \mu is a non-negative countably additive measure on \mathbb{R}. The domain of M''f'' consists of vector-valued functions \psi on \mathbb{R} such that
: \int_{\mathbb{R}} |\lambda|^2\ \, \|\psi(\lambda)\|^2 \, d\mu(\lambda) < \infty.
Non-negative countably additive measures \mu, \nu are mutually singular if and only if they are supported on disjoint Borel sets.

This representation is unique in the following sense: for any two such representations of the same ''A'', the corresponding measures are equivalent in the sense that they have the same sets of measure 0.


Direct integrals

The spectral multiplicity theorem can be reformulated using the language of direct integrals of Hilbert spaces: every self-adjoint operator ''A'' is unitarily equivalent to multiplication by \lambda on a direct integral
: \int_{\mathbb{R}}^{\oplus} H_\lambda \, d\mu(\lambda).
Unlike the multiplication-operator version of the spectral theorem, the direct-integral version is unique in the sense that the measure equivalence class of \mu (or equivalently its sets of measure 0) is uniquely determined, and the measurable function \lambda\mapsto\dim(H_\lambda) is determined almost everywhere with respect to \mu. The function \lambda \mapsto \dim\left(H_\lambda\right) is the spectral multiplicity function of the operator.

We may now state the classification result for self-adjoint operators: two self-adjoint operators are unitarily equivalent if and only if (1) their spectra agree as sets, (2) the measures appearing in their direct-integral representations have the same sets of measure zero, and (3) their spectral multiplicity functions agree almost everywhere with respect to the measure in the direct integral (Proposition 7.24).


Example: structure of the Laplacian

The Laplacian on R''n'' is the operator
: \Delta = \sum_{i=1}^n \partial_{x_i}^2.
As remarked above, the Laplacian is diagonalized by the Fourier transform. Actually it is more natural to consider the ''negative'' of the Laplacian -\Delta, since as an operator it is non-negative (see elliptic operator).
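
On a periodic grid (an added stand-in for the Fourier transform on R''n'', here in one dimension), the FFT exhibits -\Delta as multiplication by the non-negative function |k|^2:

```python
import numpy as np

# FFT sketch: on a periodic grid, -Laplacian acts as multiplication by k^2 >= 0.
N = 128
L = 2 * np.pi
x = np.arange(N) * (L / N)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi

f = np.cos(3 * x) + 0.5 * np.sin(5 * x)
neg_lap_f = np.fft.ifft(k**2 * np.fft.fft(f)).real

# Exact result: -f'' = 9 cos(3x) + 12.5 sin(5x).
print(np.allclose(neg_lap_f, 9 * np.cos(3 * x) + 12.5 * np.sin(5 * x)))   # True
```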


See also

* Compact operator on Hilbert space
* Unbounded operator
* Hermitian adjoint
* Normal operator
* Positive operator
* Helffer–Sjöstrand formula

