Eigenfunction

In mathematics, an eigenfunction of a linear operator ''D'' defined on some function space is any non-zero function ''f'' in that space that, when acted upon by ''D'', is only multiplied by some scaling factor called an eigenvalue. As an equation, this condition can be written as Df = \lambda f for some scalar eigenvalue \lambda. The solutions to this equation may also be subject to boundary conditions that limit the allowable eigenvalues and eigenfunctions. An eigenfunction is a type of eigenvector.


Eigenfunctions

In general, an eigenvector of a linear operator ''D'' defined on some vector space is a nonzero vector in the domain of ''D'' that, when ''D'' acts upon it, is simply scaled by some scalar value called an eigenvalue. In the special case where ''D'' is defined on a function space, the eigenvectors are referred to as eigenfunctions. That is, a function ''f'' is an eigenfunction of ''D'' if it satisfies the equation Df = \lambda f, where λ is a scalar. The solutions to this equation may also be subject to boundary conditions. Because of the boundary conditions, the possible values of λ are generally limited, for example to a discrete set ''λ''1, ''λ''2, … or to a continuous set over some range. The set of all possible eigenvalues of ''D'' is sometimes called its spectrum, which may be discrete, continuous, or a combination of both. Each value of λ corresponds to one or more eigenfunctions. If multiple linearly independent eigenfunctions have the same eigenvalue, the eigenvalue is said to be degenerate and the maximum number of linearly independent eigenfunctions associated with the same eigenvalue is the eigenvalue's ''degree of degeneracy'' or geometric multiplicity.


Derivative example

A widely used class of linear operators acting on infinite dimensional spaces are differential operators on the space C^\infty of infinitely differentiable real or complex functions of a real or complex argument ''t''. For example, consider the derivative operator \frac{d}{dt} with eigenvalue equation \frac{d}{dt}f(t) = \lambda f(t). This differential equation can be solved by multiplying both sides by \frac{dt}{f(t)} and integrating. Its solution, the exponential function f(t) = f_0 e^{\lambda t}, is the eigenfunction of the derivative operator, where ''f''0 is a parameter that depends on the boundary conditions. Note that in this case the eigenfunction is itself a function of its associated eigenvalue λ, which can take any real or complex value. In particular, note that for λ = 0 the eigenfunction ''f''(''t'') is a constant. Suppose in the example that ''f''(''t'') is subject to the boundary conditions ''f''(0) = 1 and \left.\frac{df}{dt}\right|_{t=0} = 2. We then find that f(t) = e^{2t}, where λ = 2 is the only eigenvalue of the differential equation that also satisfies the boundary condition.
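The following sketch (using SymPy; not part of the original text) checks the claims above: f(t) = f_0 e^{\lambda t} satisfies the eigenvalue equation for any \lambda, and the stated boundary conditions single out f_0 = 1 and \lambda = 2.

```python
# A minimal check that f(t) = f0 * exp(lambda * t) satisfies d/dt f = lambda * f,
# and that the boundary conditions f(0) = 1, f'(0) = 2 pick out lambda = 2.
import sympy as sp

t, lam, f0 = sp.symbols('t lambda f0')
f = f0 * sp.exp(lam * t)

# Eigenvalue equation: D f - lambda f should simplify to 0.
assert sp.simplify(sp.diff(f, t) - lam * f) == 0

# Impose the boundary conditions f(0) = 1 and f'(0) = 2.
sol = sp.solve([f.subs(t, 0) - 1, sp.diff(f, t).subs(t, 0) - 2], [f0, lam], dict=True)
print(sol)  # [{f0: 1, lambda: 2}], i.e. f(t) = exp(2 t)
```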


Link to eigenvalues and eigenvectors of matrices

Eigenfunctions can be expressed as column vectors and linear operators can be expressed as matrices, although they may have infinite dimensions. As a result, many of the concepts related to eigenvectors of matrices carry over to the study of eigenfunctions.

Define the inner product in the function space on which ''D'' is defined as \langle f,g \rangle = \int_\Omega f^*(t)\,g(t)\,dt, integrated over some range of interest for ''t'' called Ω. The ''*'' denotes the complex conjugate.

Suppose the function space has an orthonormal basis given by the set of functions \{u_1(t), u_2(t), \dots, u_n(t)\}, where ''n'' may be infinite. For the orthonormal basis, \langle u_i,u_j \rangle = \int_\Omega u_i^*(t)\,u_j(t)\,dt = \delta_{ij} = \begin{cases} 1 & i = j \\ 0 & i \ne j \end{cases}, where ''δ''''ij'' is the Kronecker delta and can be thought of as the elements of the identity matrix.

Functions can be written as a linear combination of the basis functions, f(t) = \sum_{j=1}^n b_j u_j(t), for example through a Fourier expansion of ''f''(''t''). The coefficients ''b''''j'' can be stacked into an ''n'' by 1 column vector ''b'' = [''b''1 ''b''2 … ''b''''n'']T. In some special cases, such as the coefficients of the Fourier series of a sinusoidal function, this column vector has finite dimension.

Additionally, define a matrix representation of the linear operator ''D'' with elements A_{ij} = \langle u_i, Du_j \rangle = \int_\Omega u^*_i(t)\,Du_j(t)\,dt.

We can write the function ''Df''(''t'') either as a linear combination of the basis functions or as ''D'' acting upon the expansion of ''f''(''t''), Df(t) = \sum_{j=1}^n c_j u_j(t) = \sum_{j=1}^n b_j Du_j(t). Taking the inner product of each side of this equation with an arbitrary basis function ''u''''i''(''t''), \begin{align} \sum_{j=1}^n c_j \int_\Omega u_i^*(t)\,u_j(t)\,dt &= \sum_{j=1}^n b_j \int_\Omega u_i^*(t)\,Du_j(t)\,dt, \\ c_i &= \sum_{j=1}^n b_j A_{ij}. \end{align} This is the matrix multiplication ''Ab'' = ''c'' written in summation notation and is a matrix equivalent of the operator ''D'' acting upon the function ''f''(''t'') expressed in the orthonormal basis. If ''f''(''t'') is an eigenfunction of ''D'' with eigenvalue λ, then ''Ab'' = ''λb''.
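A small numerical sketch of this correspondence is given below; the choice of domain, basis, and operator are illustrative assumptions, not part of the original text. It represents the derivative operator as a matrix A_{ij} = \langle u_i, Du_j \rangle in an orthonormal Fourier basis and checks that an eigenfunction's coefficient vector ''b'' satisfies ''Ab'' = ''λb''.

```python
# Represent D = d/dt as a matrix A_ij = <u_i, D u_j> in the orthonormal
# Fourier basis u_k(t) = exp(i k t) / sqrt(2*pi) on [0, 2*pi), then check
# that the eigenfunction f = u_2 gives A b = lambda b with lambda = 2i.
import numpy as np

ks = np.arange(-3, 4)                      # basis indices k = -3, ..., 3
M = 512
t = np.arange(M) * (2 * np.pi / M)         # uniform grid on [0, 2*pi)
dt = 2 * np.pi / M

def u(k):                                  # orthonormal Fourier basis function
    return np.exp(1j * k * t) / np.sqrt(2 * np.pi)

def Du(k):                                 # D u_k = d/dt u_k = i k u_k
    return 1j * k * u(k)

# A_ij = integral over Omega of conj(u_i) * D u_j dt, approximated by a
# Riemann sum over the periodic grid
A = np.array([[np.sum(np.conj(u(i)) * Du(j)) * dt for j in ks] for i in ks])

# f = u_2 has a coefficient vector b with a single 1 at the k = 2 entry
b = (ks == 2).astype(complex)
print(np.allclose(A @ b, 2j * b))          # True: A b = lambda b with lambda = 2i
```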


Eigenvalues and eigenfunctions of Hermitian operators

Many of the operators encountered in physics are Hermitian. Suppose the linear operator ''D'' acts on a function space that is a Hilbert space with an orthonormal basis given by the set of functions \{u_1(t), u_2(t), \dots, u_n(t)\}, where ''n'' may be infinite. In this basis, the operator ''D'' has a matrix representation ''A'' with elements A_{ij} = \langle u_i, Du_j \rangle = \int_\Omega dt\ u^*_i(t)\,Du_j(t), integrated over some range of interest for ''t'' denoted Ω.

By analogy with Hermitian matrices, ''D'' is a Hermitian operator if ''A''''ij'' = ''A''''ji''*, or: \begin{align} \langle u_i, Du_j \rangle &= \langle Du_i, u_j \rangle, \\ \int_\Omega dt\ u^*_i(t)\,Du_j(t) &= \int_\Omega dt\ u_j(t)\,[Du_i(t)]^*. \end{align}

Consider the Hermitian operator ''D'' with eigenvalues ''λ''1, ''λ''2, … and corresponding eigenfunctions ''f''1(''t''), ''f''2(''t''), …. This Hermitian operator has the following properties:
* Its eigenvalues are real, ''λ''''i'' = ''λ''''i''*
* Its eigenfunctions obey an orthogonality condition, \langle f_i, f_j \rangle = 0 if ''i'' ≠ ''j''
The second condition always holds for ''λ''''i'' ≠ ''λ''''j''. For degenerate eigenfunctions with the same eigenvalue ''λ''''i'', orthogonal eigenfunctions can always be chosen that span the eigenspace associated with ''λ''''i'', for example by using the Gram–Schmidt process. Depending on whether the spectrum is discrete or continuous, the eigenfunctions can be normalized by setting the inner product of the eigenfunctions equal to either a Kronecker delta or a Dirac delta function, respectively.

For many Hermitian operators, notably Sturm–Liouville operators, a third property is:
* Its eigenfunctions form a basis of the function space on which the operator is defined
As a consequence, in many important cases, the eigenfunctions of the Hermitian operator form an orthonormal basis. In these cases, an arbitrary function can be expressed as a linear combination of the eigenfunctions of the Hermitian operator.
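These properties can be seen numerically by discretizing a Hermitian operator. The sketch below is illustrative only: the finite-difference discretization and the choice of the operator -d^2/dx^2 with fixed ends are assumptions, not taken from the text. It produces a real symmetric matrix whose eigenvalues are real and whose eigenvectors are orthonormal.

```python
# Finite-difference sketch: the operator D = -d^2/dx^2 on [0, 1] with
# Dirichlet boundary conditions is Hermitian, so its matrix approximation A
# has real eigenvalues and orthogonal eigenvectors.
import numpy as np

N = 200
x = np.linspace(0.0, 1.0, N + 2)[1:-1]   # interior grid points
h = x[1] - x[0]

# Symmetric (Hermitian) tridiagonal matrix approximating -d^2/dx^2
A = (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

evals, evecs = np.linalg.eigh(A)          # eigh exploits Hermiticity

print(np.allclose(evals.imag, 0))               # eigenvalues are real
print(np.allclose(evecs.T @ evecs, np.eye(N)))  # eigenvectors are orthonormal
print(evals[:3], (np.pi * np.arange(1, 4))**2)  # close to the exact values (n*pi)^2
```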


Applications


Vibrating strings

Let ''h''(''x'', ''t'') denote the transverse displacement of a stressed elastic chord, such as the vibrating strings of a string instrument, as a function of the position ''x'' along the string and of time ''t''. Applying the laws of mechanics to infinitesimal portions of the string, the function ''h'' satisfies the partial differential equation \frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2}, which is called the (one-dimensional) wave equation. Here ''c'' is a constant speed that depends on the tension and mass of the string.

This problem is amenable to the method of separation of variables. If we assume that ''h''(''x'', ''t'') can be written as a product of the form ''X''(''x'')''T''(''t''), we can form a pair of ordinary differential equations: \frac{d^2}{dx^2}X = -\frac{\omega^2}{c^2}X, \qquad \frac{d^2}{dt^2}T = -\omega^2 T. Each of these is an eigenvalue equation with eigenvalues -\frac{\omega^2}{c^2} and -\omega^2, respectively. For any values of ω and ''c'', the equations are satisfied by the functions X(x) = \sin\left(\frac{\omega x}{c} + \varphi\right), \qquad T(t) = \sin(\omega t + \psi), where the phase angles φ and ψ are arbitrary real constants.

If we impose boundary conditions, for example that the ends of the string are fixed at ''x'' = 0 and ''x'' = ''L'', namely ''X''(0) = ''X''(''L'') = 0, and that ''T''(0) = 0, we constrain the eigenvalues. For these boundary conditions, sin(''φ'') = 0 and sin(''ψ'') = 0, so the phase angles ''φ'' = ''ψ'' = 0, and \sin\left(\frac{\omega L}{c}\right) = 0. This last boundary condition constrains ω to take a value ''ω''''n'' = ''nπc''/''L'', where ''n'' is any integer. Thus, the clamped string supports a family of standing waves of the form h(x,t) = \sin\left(\frac{n\pi x}{L}\right) \sin(\omega_n t). In the example of a string instrument, the frequency ''ω''''n'' is the frequency of the ''n''-th harmonic, which is called the (''n'' − 1)-th overtone.
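The separated standing-wave solution can be checked symbolically. The sketch below (using SymPy; purely illustrative) confirms that h_n(x,t) = \sin(n\pi x/L)\sin(\omega_n t) with \omega_n = n\pi c/L satisfies the wave equation and vanishes at both fixed ends.

```python
# Symbolic check that the standing-wave modes of the clamped string satisfy
# the one-dimensional wave equation and the boundary conditions.
import sympy as sp

x, t, c, L = sp.symbols('x t c L', positive=True)
n = sp.symbols('n', integer=True, positive=True)

omega_n = n * sp.pi * c / L
h = sp.sin(n * sp.pi * x / L) * sp.sin(omega_n * t)

# Wave equation: d^2 h / dt^2 - c^2 d^2 h / dx^2 = 0
assert sp.simplify(sp.diff(h, t, 2) - c**2 * sp.diff(h, x, 2)) == 0

# Fixed ends: h(0, t) = h(L, t) = 0 for integer n
print(h.subs(x, 0), sp.simplify(h.subs(x, L)))   # both print 0
```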


Schrödinger equation

In quantum mechanics, the Schrödinger equation i \hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = H \Psi(\mathbf{r},t) with the Hamiltonian operator H = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t) can be solved by separation of variables if the Hamiltonian does not depend explicitly on time. In that case, the wave function \Psi(\mathbf{r},t) = \varphi(\mathbf{r})\,T(t) leads to the two differential equations i\hbar \frac{dT(t)}{dt} = E\,T(t) \quad \text{and} \quad H\varphi(\mathbf{r}) = E\,\varphi(\mathbf{r}). Both of these differential equations are eigenvalue equations with eigenvalue ''E''. As shown in an earlier example, the solution of the first equation is the exponential T(t) = e^{-iEt/\hbar}. The second equation is the time-independent Schrödinger equation. The eigenfunctions ''φ''''k'' of the Hamiltonian operator are stationary states of the quantum mechanical system, each with a corresponding energy ''E''''k''. They represent allowable energy states of the system and may be constrained by boundary conditions.

The Hamiltonian operator ''H'' is an example of a Hermitian operator whose eigenfunctions form an orthonormal basis. When the Hamiltonian does not depend explicitly on time, general solutions of the Schrödinger equation are linear combinations of the stationary states multiplied by the oscillatory T(t) = e^{-iE_k t/\hbar}, \Psi(\mathbf{r},t) = \sum_k c_k \varphi_k(\mathbf{r}) e^{-iE_k t/\hbar}, or, for a system with a continuous spectrum, \Psi(\mathbf{r},t) = \int dE \, c_E \varphi_E(\mathbf{r}) e^{-iEt/\hbar}. The success of the Schrödinger equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th-century physics.
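As a numerical illustration (the harmonic-oscillator potential, the finite-difference grid, and units with \hbar = m = 1 are assumptions made here, not part of the article), the time-independent Schrödinger equation can be discretized and its lowest eigenvalues compared with the known spectrum E_n = n + \tfrac{1}{2}:

```python
# Discretize H phi = E phi for the harmonic oscillator V(x) = x^2 / 2
# (units hbar = m = 1) and compare the lowest eigenvalues with E_n = n + 1/2.
import numpy as np

N, xmax = 1000, 10.0
x = np.linspace(-xmax, xmax, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + V(x), with the second derivative as a tridiagonal matrix
kinetic = -0.5 * (np.eye(N, k=1) - 2 * np.eye(N) + np.eye(N, k=-1)) / dx**2
potential = np.diag(0.5 * x**2)
H = kinetic + potential

energies = np.linalg.eigvalsh(H)     # H is Hermitian, so use eigvalsh
print(energies[:4])                  # approximately [0.5, 1.5, 2.5, 3.5]
```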


Signals and systems

In the study of signals and systems, an eigenfunction of a system is a signal ''f''(''t'') that, when input into the system, produces a response ''y''(''t'') = ''λf''(''t''), where ''λ'' is a complex scalar eigenvalue.
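For example, complex exponentials are eigenfunctions of linear time-invariant systems. The sketch below (the particular impulse response is an arbitrary example chosen for illustration) passes e^{j\omega n} through a discrete-time filter and confirms that, after the initial transient, the output is the input scaled by the frequency response H(e^{j\omega}).

```python
# For a linear time-invariant system, a complex exponential x[n] = exp(j*omega*n)
# is an eigenfunction: the output is the same exponential scaled by the
# eigenvalue H(e^{j*omega}), the system's frequency response.
import numpy as np

h = np.array([0.5, 0.3, 0.2])        # impulse response of an example FIR system
omega = 0.4 * np.pi
n = np.arange(200)
x = np.exp(1j * omega * n)           # eigenfunction candidate

y = np.convolve(x, h)[: len(n)]      # system output
lam = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))   # H(e^{j*omega})

# After the filter's short transient, y[n] = lam * x[n]
print(np.allclose(y[len(h):], lam * x[len(h):]))  # True
```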


See also

* Eigenvalues and eigenvectors
* Hilbert–Schmidt theorem
* Spectral theory of ordinary differential equations
* Fixed point combinator
* Fourier transform eigenfunctions




External links

* More images (non-GPL) at Atom in a Box