Characteristic function (probability theory)

In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables. In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.


Introduction

The characteristic function is a way to describe a random variable. The characteristic function,

: \varphi_X(t) = \operatorname{E}\left[e^{itX}\right],

a function of t, determines the behavior and properties of the probability distribution of the random variable X. It is equivalent to a probability density function or cumulative distribution function, since knowing one of these functions allows computation of the others, but they provide different insights into the features of the random variable. In particular cases, one or another of these equivalent functions may be easier to represent in terms of simple standard functions.

If a random variable admits a density function, then the characteristic function is its Fourier dual, in the sense that each of them is a Fourier transform of the other. If a random variable has a moment-generating function M_X(t), then the domain of the characteristic function can be extended to the complex plane, and

: \varphi_X(-it) = M_X(t).

Note however that the characteristic function of a distribution is well defined for all real values of t, even when the moment-generating function is not well defined for all real values of t.

The characteristic function approach is particularly useful in analysis of linear combinations of independent random variables: a classical proof of the central limit theorem uses characteristic functions and Lévy’s continuity theorem. Another important application is to the theory of the decomposability of random variables.
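
As a quick illustration, here is a minimal Python sketch (assuming a normal random variable, for which both functions have closed forms; the parameter values are arbitrary choices) that checks the identity \varphi_X(-it) = M_X(t) numerically:

```python
import cmath

# Closed forms for X ~ N(mu, sigma^2):
#   characteristic function:     phi(t) = exp(i*mu*t - sigma^2 * t^2 / 2)
#   moment-generating function:  M(t)   = exp(mu*t + sigma^2 * t^2 / 2)
mu, sigma = 1.5, 2.0

def phi(t):  # t may be complex once phi is extended to the complex plane
    return cmath.exp(1j * mu * t - 0.5 * sigma**2 * t**2)

def mgf(t):
    return cmath.exp(mu * t + 0.5 * sigma**2 * t**2)

for t in (0.0, 0.3, 1.0, 2.5):
    assert cmath.isclose(phi(-1j * t), mgf(t)), t  # phi(-it) == M(t)
print("phi(-it) matches M(t) at the sampled points")
```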


Definition

For a scalar random variable X the characteristic function is defined as the expected value of e^{itX}, where i is the imaginary unit, and t \in \mathbb{R} is the argument of the characteristic function:

: \begin{cases} \varphi_X\colon \mathbb{R}\to\mathbb{C} \\ \varphi_X(t) = \operatorname{E}\left[e^{itX}\right] = \int_{\mathbb{R}} e^{itx}\,dF_X(x) = \int_{\mathbb{R}} e^{itx} f_X(x)\,dx = \int_0^1 e^{itQ_X(p)}\,dp \end{cases}

Here F_X is the cumulative distribution function of X, f_X is the corresponding probability density function, Q_X(p) is the corresponding inverse cumulative distribution function, also called the quantile function, and the integrals are of the Riemann–Stieltjes kind.

If a random variable X has a probability density function f_X, then the characteristic function is its Fourier transform with sign reversal in the complex exponential. This convention for the constants appearing in the definition of the characteristic function differs from the usual convention for the Fourier transform. For example, some authors define \varphi_X(t) = \operatorname{E}\left[e^{-2\pi itX}\right], which is essentially a change of parameter. Other notation may be encountered in the literature: \hat p as the characteristic function for a probability measure p, or \hat f as the characteristic function corresponding to a density f.
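
The equivalent integral forms above can be checked numerically. A minimal SciPy sketch (assuming a standard normal X, so that the exact answer e^{-t^2/2} is available for comparison) evaluates both the density integral and the quantile integral:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

t = 0.7  # argument of the characteristic function

# phi(t) = integral of e^{itx} f(x) dx, split into real and imaginary parts
re, _ = integrate.quad(lambda x: np.cos(t * x) * norm.pdf(x), -np.inf, np.inf)
im, _ = integrate.quad(lambda x: np.sin(t * x) * norm.pdf(x), -np.inf, np.inf)
phi_density = re + 1j * im

# phi(t) = integral over p in (0,1) of e^{it Q(p)} dp, Q the quantile function
# (endpoints trimmed slightly since Q diverges at p = 0 and p = 1)
eps = 1e-10
re_q, _ = integrate.quad(lambda p: np.cos(t * norm.ppf(p)), eps, 1 - eps)
im_q, _ = integrate.quad(lambda p: np.sin(t * norm.ppf(p)), eps, 1 - eps)
phi_quantile = re_q + 1j * im_q

print(phi_density, phi_quantile, np.exp(-t**2 / 2))  # all approximately equal
```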


Generalizations

The notion of characteristic functions generalizes to multivariate random variables and more complicated random elements. The argument of the characteristic function will always belong to the continuous dual of the space where the random variable X takes its values. For common cases such definitions are listed below:

* If X is a k-dimensional random vector, then for t \in \mathbb{R}^k, \varphi_X(t) = \operatorname{E}\left[\exp(i t^T\!X)\right], where t^T is the transpose of the vector t (see the sketch after this list).
* If X is a k \times p-dimensional random matrix, then for t \in \mathbb{R}^{k \times p}, \varphi_X(t) = \operatorname{E}\left[\exp\left(i \operatorname{tr}(t^T\!X)\right)\right], where \operatorname{tr}(\cdot) is the trace operator.
* If X is a complex random variable, then for t \in \mathbb{C}, \varphi_X(t) = \operatorname{E}\left[\exp\left(i \operatorname{Re}\left(\overline{t}X\right)\right)\right], where \overline{t} is the complex conjugate of t and \operatorname{Re}(z) is the real part of the complex number z.
* If X is a k-dimensional complex random vector, then for t \in \mathbb{C}^k, \varphi_X(t) = \operatorname{E}\left[\exp(i\operatorname{Re}(t^*\!X))\right], where t^* is the conjugate transpose of the vector t.
* If X(s) is a stochastic process, then for all functions t(s) such that the integral \int_{\mathbb{R}} t(s)X(s)\,\mathrm{d}s converges for almost all realizations of X, \varphi_X(t) = \operatorname{E}\left[\exp\left(i\int_{\mathbb{R}} t(s)X(s)\,ds\right)\right].
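
For the vector case, a concrete instance is the multivariate normal, whose characteristic function has the closed form \varphi_X(t) = \exp(i t^T\mu - \tfrac{1}{2} t^T\Sigma t). A minimal NumPy sketch (the parameter values are arbitrary choices) compares this with a Monte Carlo estimate of \operatorname{E}[\exp(i t^T X)]:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
t = np.array([0.3, -0.7])

# Closed form for the multivariate normal characteristic function
phi_exact = np.exp(1j * t @ mu - 0.5 * t @ Sigma @ t)

# Monte Carlo estimate of E[exp(i t^T X)]
X = rng.multivariate_normal(mu, Sigma, size=200_000)
phi_mc = np.mean(np.exp(1j * X @ t))

print(phi_exact, phi_mc)  # should agree to about 2-3 decimal places
```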


Examples

Oberhettinger (1973) provides extensive tables of characteristic functions.


Properties

* The characteristic function of a real-valued random variable always exists, since it is an integral of a bounded continuous function over a space whose measure is finite.
* A characteristic function is uniformly continuous on the entire space.
* It is non-vanishing in a region around zero: \varphi(0) = 1.
* It is bounded: |\varphi(t)| \le 1.
* It is Hermitian: \varphi(-t) = \overline{\varphi(t)}. In particular, the characteristic function of a symmetric (around the origin) random variable is real-valued and even.
* There is a bijection between probability distributions and characteristic functions. That is, for any two random variables X_1, X_2, both have the same probability distribution if and only if \varphi_{X_1} = \varphi_{X_2}.
* If a random variable X has moments up to k-th order, then the characteristic function \varphi_X is k times continuously differentiable on the entire real line. In this case \operatorname{E}\left[X^k\right] = i^{-k} \varphi_X^{(k)}(0).
* If a characteristic function \varphi_X has a k-th derivative at zero, then the random variable X has all moments up to k if k is even, but only up to k-1 if k is odd: \varphi_X^{(k)}(0) = i^k \operatorname{E}\left[X^k\right].
* If X_1, \ldots, X_n are independent random variables, and a_1, \ldots, a_n are some constants, then the characteristic function of the linear combination of the X_i variables is \varphi_{a_1X_1+\cdots+a_nX_n}(t) = \varphi_{X_1}(a_1t)\cdots \varphi_{X_n}(a_nt). One specific case is the sum of two independent random variables X_1 and X_2, in which case one has \varphi_{X_1+X_2}(t) = \varphi_{X_1}(t)\cdot\varphi_{X_2}(t) (spot-checked numerically in the sketch below).
* Let X and Y be two random variables with characteristic functions \varphi_X and \varphi_Y. X and Y are independent if and only if \varphi_{(X,Y)}(s, t) = \varphi_X(s)\varphi_Y(t) \quad \text{for all} \quad (s, t) \in \mathbb{R}^2.
* The tail behavior of the characteristic function determines the smoothness of the corresponding density function.
* Let the random variable Y = aX + b be the linear transformation of a random variable X. The characteristic function of Y is \varphi_Y(t) = e^{itb}\varphi_X(at). For random vectors X and Y = AX + B (where A is a constant matrix and B a constant vector), we have \varphi_Y(t) = e^{it^T\!B}\varphi_X(A^T t).
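
A minimal sketch (assuming an exponential random variable, whose characteristic function \lambda/(\lambda - it) is standard) spot-checks boundedness, the Hermitian property, and the product rule for independent sums on a grid of points:

```python
import numpy as np

def phi_exp(t, lam=1.0):
    # Characteristic function of Exponential(rate=lam): lam / (lam - i t)
    return lam / (lam - 1j * t)

ts = np.linspace(-10, 10, 201)
vals = phi_exp(ts)

assert np.all(np.abs(vals) <= 1 + 1e-12)          # bounded: |phi(t)| <= 1
assert np.allclose(phi_exp(-ts), np.conj(vals))   # Hermitian: phi(-t) = conj(phi(t))
assert np.isclose(phi_exp(0.0), 1.0)              # phi(0) = 1

# Sum of two independent Exponential(1) variables is Gamma(2, 1),
# whose characteristic function is (1 - it)^{-2} = phi(t)^2.
assert np.allclose(vals**2, (1 - 1j * ts) ** (-2.0))
print("all property checks passed")
```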


Continuity

The bijection stated above between probability distributions and characteristic functions is ''sequentially continuous''. That is, whenever a sequence of distribution functions F_j(x) converges (weakly) to some distribution F(x), the corresponding sequence of characteristic functions \varphi_j(t) will also converge, and the limit \varphi(t) will correspond to the characteristic function of law F. More formally, this is stated as

: Lévy’s continuity theorem: A sequence X_j of n-variate random variables converges in distribution to random variable X if and only if the sequence \varphi_{X_j} converges pointwise to a function \varphi which is continuous at the origin, where \varphi is the characteristic function of X.

This theorem can be used to prove the law of large numbers and the central limit theorem.
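
The central-limit mechanism is visible directly in the characteristic functions. A minimal sketch (assuming standardized uniform summands, an arbitrary choice) shows \left[\varphi(t/\sqrt{n})\right]^n approaching the standard normal characteristic function e^{-t^2/2} pointwise:

```python
import numpy as np

def phi_uniform(t):
    # CF of Uniform(-sqrt(3), sqrt(3)) (mean 0, variance 1):
    # phi(t) = sin(sqrt(3) t)/(sqrt(3) t) = sinc(sqrt(3) t / pi)
    return np.sinc(np.sqrt(3) * np.asarray(t) / np.pi)

ts = np.linspace(-3, 3, 7)
for n in (1, 10, 100, 1000):
    # CF of (X_1 + ... + X_n) / sqrt(n) for i.i.d. standardized summands
    phi_sum = phi_uniform(ts / np.sqrt(n)) ** n
    err = np.max(np.abs(phi_sum - np.exp(-ts**2 / 2)))
    print(n, err)  # error shrinks as n grows, as Levy's theorem predicts
```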


Inversion formula

There is a one-to-one correspondence between cumulative distribution functions and characteristic functions, so it is possible to find one of these functions if we know the other. The formula in the definition of characteristic function allows us to compute \varphi when we know the distribution function F (or density f). If, on the other hand, we know the characteristic function \varphi and want to find the corresponding distribution function, then one of the following inversion theorems can be used.

Theorem. If the characteristic function \varphi_X of a random variable X is integrable, then F_X is absolutely continuous, and therefore X has a probability density function. In the univariate case (i.e. when X is scalar-valued) the density function is given by

: f_X(x) = F_X'(x) = \frac{1}{2\pi}\int_{\mathbb{R}} e^{-itx}\varphi_X(t)\,dt.

In the multivariate case it is

: f_X(x) = \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i(t\cdot x)}\varphi_X(t)\,\lambda(dt)

where t\cdot x is the dot product. The density function is the Radon–Nikodym derivative of the distribution \mu_X with respect to the Lebesgue measure \lambda:

: f_X(x) = \frac{d\mu_X}{d\lambda}(x).

Theorem (Lévy). If \varphi_X is the characteristic function of distribution function F_X, and two points a < b are such that \{x \mid a < x < b\} is a continuity set of \mu_X (in the univariate case this condition is equivalent to continuity of F_X at points a and b), then

* If X is scalar:
: F_X(b) - F_X(a) = \frac{1}{2\pi} \lim_{T\to\infty} \int_{-T}^{+T} \frac{e^{-ita} - e^{-itb}}{it}\, \varphi_X(t)\, dt.
This formula can be re-stated in a form more convenient for numerical computation as
: \frac{F(x+h) - F(x-h)}{2h} = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{\sin(ht)}{ht} e^{-itx} \varphi_X(t) \, dt.
For a random variable bounded from below one can obtain F(b) by taking a such that F(a) = 0. Otherwise, if a random variable is not bounded from below, the limit for a\to-\infty gives F(b), but is numerically impractical.
* If X is a vector random variable:
: \mu_X\big((a_1,b_1]\times\cdots\times(a_n,b_n]\big) = \frac{1}{(2\pi)^n} \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \int\limits_{-T_1}^{T_1} \cdots \int\limits_{-T_n}^{T_n} \prod_{k=1}^n\left(\frac{e^{-it_ka_k}-e^{-it_kb_k}}{it_k}\right)\varphi_X(t)\,\lambda(dt_1 \times \cdots \times dt_n).

Theorem. If a is (possibly) an atom of X (in the univariate case this means a point of discontinuity of F_X) then

* If X is scalar:
: F_X(a) - F_X(a-0) = \lim_{T\to\infty}\frac{1}{2T} \int_{-T}^{+T} e^{-ita}\varphi_X(t)\,dt.
* If X is a vector random variable:
: \mu_X(\{a\}) = \lim_{T_1\to\infty}\cdots\lim_{T_n\to\infty} \left(\prod_{k=1}^n\frac{1}{2T_k}\right) \int\limits_{[-T_1,T_1]\times\cdots\times[-T_n,T_n]} e^{-i(t\cdot a)}\varphi_X(t)\,\lambda(dt).

Theorem (Gil-Pelaez). For a univariate random variable X, if x is a continuity point of F_X then

: F_X(x) = \frac{1}{2} - \frac{1}{\pi}\int_0^\infty \frac{\operatorname{Im}\left[e^{-itx}\varphi_X(t)\right]}{t}\,dt

where the imaginary part of a complex number z is given by \operatorname{Im}(z) = (z - z^*)/2i. And its density function is:

: f_X(x) = \frac{1}{\pi}\int_0^\infty \operatorname{Re}\left[e^{-itx}\varphi_X(t)\right]\,dt.

The integral may not be Lebesgue-integrable; for example, when X is the discrete random variable that is always 0, it becomes the Dirichlet integral. Inversion formulas for multivariate distributions are available.
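
The Gil-Pelaez formula lends itself to direct numerical use. A minimal sketch (assuming a standard normal X, so the recovered values can be checked against scipy.stats.norm.cdf; the truncation point of the integral is an illustrative choice) follows:

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

def cdf_from_cf(x, phi, upper=50.0):
    # Gil-Pelaez: F(x) = 1/2 - (1/pi) * int_0^inf Im(e^{-itx} phi(t)) / t dt,
    # with the infinite upper limit truncated (valid here since phi decays fast)
    integrand = lambda t: np.imag(np.exp(-1j * t * x) * phi(t)) / t
    val, _ = integrate.quad(integrand, 1e-12, upper, limit=200)
    return 0.5 - val / np.pi

phi_normal = lambda t: np.exp(-t**2 / 2)  # CF of N(0, 1)

for x in (-1.0, 0.5, 2.0):
    print(x, cdf_from_cf(x, phi_normal), norm.cdf(x))  # pairs agree closely
```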


Criteria for characteristic functions

The set of all characteristic functions is closed under certain operations:

* A convex linear combination \sum_n a_n\varphi_n(t) (with a_n \ge 0, \sum_n a_n = 1) of a finite or a countable number of characteristic functions is also a characteristic function.
* The product of a finite number of characteristic functions is also a characteristic function. The same holds for an infinite product provided that it converges to a function continuous at the origin.
* If \varphi is a characteristic function and \alpha is a real number, then \bar\varphi, \operatorname{Re}(\varphi), |\varphi|^2, and \varphi(\alpha t) are also characteristic functions.

It is well known that any non-decreasing càdlàg function F with limits F(-\infty) = 0, F(+\infty) = 1 corresponds to a cumulative distribution function of some random variable. There is also interest in finding similar simple criteria for when a given function \varphi could be the characteristic function of some random variable. The central result here is Bochner’s theorem, although its usefulness is limited because the main condition of the theorem, non-negative definiteness, is very hard to verify (a numerical spot-check of this condition is sketched below). Other theorems also exist, such as Khinchine’s, Mathias’s, or Cramér’s, although their application is just as difficult. Pólya’s theorem, on the other hand, provides a very simple convexity condition which is sufficient but not necessary. Characteristic functions which satisfy this condition are called Pólya-type.

Bochner’s theorem. An arbitrary function \varphi : \mathbb{R}^n \to \mathbb{C} is the characteristic function of some random variable if and only if \varphi is positive definite, continuous at the origin, and if \varphi(0) = 1.

Khinchine’s criterion. A complex-valued, absolutely continuous function \varphi, with \varphi(0) = 1, is a characteristic function if and only if it admits the representation

: \varphi(t) = \int_{\mathbb{R}} g(t+\theta)\overline{g(\theta)} \, d\theta .

Mathias’ theorem. A real-valued, even, continuous, absolutely integrable function \varphi, with \varphi(0) = 1, is a characteristic function if and only if

: (-1)^n \left( \int_{\mathbb{R}} \varphi(pt)e^{-t^2/2} H_{2n}(t) \, dt \right) \geq 0

for n = 0, 1, 2, \ldots, and all p > 0. Here H_{2n} denotes the Hermite polynomial of degree 2n.

Pólya’s theorem. If \varphi is a real-valued, even, continuous function which satisfies the conditions
* \varphi(0) = 1 ,
* \varphi is convex for t > 0 ,
* \varphi(\infty) = 0 ,
then \varphi is the characteristic function of an absolutely continuous distribution symmetric about 0.
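
Bochner’s positive-definiteness condition can at least be spot-checked numerically: for any points t_1, \ldots, t_m, the matrix \left[\varphi(t_j - t_k)\right] must be positive semi-definite. A minimal sketch (using e^{-t^2/2}, the standard normal characteristic function, as the test function; this checks a necessary condition only, not the full theorem):

```python
import numpy as np

phi = lambda t: np.exp(-t**2 / 2)  # CF of N(0,1); any valid CF should pass

rng = np.random.default_rng(1)
ts = rng.uniform(-5, 5, size=8)        # arbitrary evaluation points
G = phi(ts[:, None] - ts[None, :])     # Gram matrix [phi(t_j - t_k)]

eigvals = np.linalg.eigvalsh(G)        # symmetric, so eigvalsh applies
print(eigvals.min())                   # >= 0 (up to rounding) for a valid CF
```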


Uses

Because of the continuity theorem, characteristic functions are used in the most frequently seen proof of the central limit theorem. The main technique involved in making calculations with a characteristic function is recognizing the function as the characteristic function of a particular distribution.


Basic manipulations of distributions

Characteristic functions are particularly useful for dealing with linear functions of independent random variables. For example, if X_1, X_2, ..., X_n is a sequence of independent (and not necessarily identically distributed) random variables, and

:S_n = \sum_{i=1}^n a_i X_i,

where the a_i are constants, then the characteristic function for S_n is given by

:\varphi_{S_n}(t) = \varphi_{X_1}(a_1t)\varphi_{X_2}(a_2t)\cdots \varphi_{X_n}(a_nt).

In particular, \varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t). To see this, write out the definition of characteristic function:

: \varphi_{X+Y}(t) = \operatorname{E}\left[e^{it(X+Y)}\right] = \operatorname{E}\left[e^{itX}e^{itY}\right] = \operatorname{E}\left[e^{itX}\right]\operatorname{E}\left[e^{itY}\right] = \varphi_X(t)\varphi_Y(t).

The independence of X and Y is required to establish the equality of the third and fourth expressions. Another special case of interest for identically distributed random variables is when a_i = 1/n and then S_n is the sample mean. In this case, writing \bar{X} for the mean,

: \varphi_{\bar{X}}(t) = \varphi_X\!\left(\tfrac{t}{n}\right)^n.
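
A minimal simulation sketch (assuming two independent exponential variables, an arbitrary choice) compares the empirical characteristic function of X + Y with the product \varphi_X(t)\varphi_Y(t):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
X = rng.exponential(scale=1.0, size=n)
Y = rng.exponential(scale=2.0, size=n)

phi_X = lambda t: 1 / (1 - 1j * 1.0 * t)   # CF of Exponential(scale=1)
phi_Y = lambda t: 1 / (1 - 1j * 2.0 * t)   # CF of Exponential(scale=2)

for t in (0.25, 1.0, 3.0):
    emp = np.mean(np.exp(1j * t * (X + Y)))    # empirical CF of X + Y
    print(t, emp, phi_X(t) * phi_Y(t))         # agree to Monte Carlo accuracy
```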


Moments

Characteristic functions can also be used to find moments of a random variable. Provided that the n-th moment exists, the characteristic function can be differentiated n times:

: \operatorname{E}\left[X^n\right] = i^{-n}\left[\frac{d^n}{dt^n}\varphi_X(t)\right]_{t=0} = i^{-n}\varphi_X^{(n)}(0).

This can be formally written using the derivatives of the Dirac delta function:

: f_X(x) = \sum_{n=0}^\infty \frac{(-1)^n}{n!}\delta^{(n)}(x)\operatorname{E}\left[X^n\right]

which allows a formal solution to the moment problem.

For example, suppose X has a standard Cauchy distribution. Then \varphi_X(t) = e^{-|t|}. This is not differentiable at t = 0, showing that the Cauchy distribution has no expectation. Also, the sample mean \bar{X} of n independent observations has characteristic function \varphi_{\bar{X}}(t) = \left(e^{-|t|/n}\right)^n = e^{-|t|}, using the result from the previous section. This is the characteristic function of the standard Cauchy distribution: thus, the sample mean has the same distribution as the population itself.

As a further example, suppose X follows a Gaussian distribution, i.e. X \sim \mathcal{N}(\mu,\sigma^2). Then \varphi_X(t) = e^{i\mu t - \frac{1}{2}\sigma^2 t^2} and

: \operatorname{E}\left[X\right] = i^{-1}\left[\frac{d}{dt}\varphi_X(t)\right]_{t=0} = i^{-1}\left[(i\mu - \sigma^2 t)\varphi_X(t)\right]_{t=0} = \mu.

A similar calculation shows \operatorname{E}\left[X^2\right] = \mu^2 + \sigma^2 and is easier to carry out than applying the definition of expectation and using integration by parts to evaluate \operatorname{E}\left[X^2\right].

The logarithm of a characteristic function is a cumulant generating function, which is useful for finding cumulants; some instead define the cumulant generating function as the logarithm of the moment-generating function, and call the logarithm of the characteristic function the ''second'' cumulant generating function.
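
The Gaussian moment computation above can be reproduced symbolically. A minimal SymPy sketch (the symbol names are my choosing) differentiates the characteristic function at t = 0:

```python
import sympy as sp

t, mu, sigma = sp.symbols('t mu sigma', real=True)
phi = sp.exp(sp.I * mu * t - sigma**2 * t**2 / 2)  # CF of N(mu, sigma^2)

# E[X^n] = i^{-n} * phi^{(n)}(0)
m1 = sp.simplify(sp.diff(phi, t).subs(t, 0) / sp.I)
m2 = sp.simplify(sp.diff(phi, t, 2).subs(t, 0) / sp.I**2)

print(m1)  # mu
print(m2)  # mu**2 + sigma**2
```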


Data analysis

Characteristic functions can be used as part of procedures for fitting probability distributions to samples of data. Cases where this provides a practicable option compared to other possibilities include fitting the stable distribution, since closed-form expressions for the density are not available, which makes implementation of maximum likelihood estimation difficult. Estimation procedures are available which match the theoretical characteristic function to the empirical characteristic function, calculated from the data. Paulson et al. (1975) and Heathcote (1977) provide some theoretical background for such an estimation procedure. In addition, Yu (2004) describes applications of empirical characteristic functions to fit time series models where likelihood procedures are impractical. Empirical characteristic functions have also been used by Ansari et al. (2020) and Li et al. (2020) for training generative adversarial networks.
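
A minimal sketch of the matching idea (fitting a normal model by least squares between the empirical and theoretical characteristic functions on a grid; the grid, loss, and optimizer are illustrative choices, not a specific published procedure):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.5, size=5_000)
ts = np.linspace(0.1, 2.0, 20)  # grid of CF arguments (an arbitrary choice)

# Empirical characteristic function: (1/n) * sum_j exp(i t x_j)
ecf = np.array([np.mean(np.exp(1j * t * data)) for t in ts])

def loss(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)  # reparameterize to keep sigma positive
    model = np.exp(1j * mu * ts - 0.5 * sigma**2 * ts**2)  # normal CF
    return np.sum(np.abs(ecf - model) ** 2)

res = minimize(loss, x0=[0.0, 0.0])
print(res.x[0], np.exp(res.x[1]))  # estimates close to mu=2.0, sigma=1.5
```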


Example

The gamma distribution with scale parameter θ and a shape parameter k has the characteristic function

: (1 - \theta i t)^{-k}.

Now suppose that we have

: X \sim \Gamma(k_1,\theta) \mbox{ and } Y \sim \Gamma(k_2,\theta)

with X and Y independent from each other, and we wish to know what the distribution of X + Y is. The characteristic functions are

: \varphi_X(t) = (1 - \theta i t)^{-k_1},\,\qquad \varphi_Y(t) = (1 - \theta it)^{-k_2}

which by independence and the basic properties of characteristic function leads to

: \varphi_{X+Y}(t) = \varphi_X(t)\varphi_Y(t) = (1 - \theta i t)^{-k_1}(1 - \theta i t)^{-k_2} = \left(1 - \theta i t\right)^{-(k_1+k_2)}.

This is the characteristic function of the gamma distribution with scale parameter θ and shape parameter k_1 + k_2, and we therefore conclude

: X+Y \sim \Gamma(k_1+k_2,\theta).

The result can be expanded to n independent gamma distributed random variables with the same scale parameter and we get

: \forall i \in \{1,\ldots,n\} : X_i \sim \Gamma(k_i,\theta) \qquad \Rightarrow \qquad \sum_{i=1}^n X_i \sim \Gamma\left(\sum_{i=1}^n k_i,\theta\right).
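
A minimal simulation sketch (the shape and scale values are arbitrary choices) verifies the conclusion by comparing the simulated sum with the predicted gamma law via a Kolmogorov–Smirnov test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
k1, k2, theta = 2.0, 3.5, 1.4

X = rng.gamma(shape=k1, scale=theta, size=100_000)
Y = rng.gamma(shape=k2, scale=theta, size=100_000)

# Predicted: X + Y ~ Gamma(k1 + k2, theta)
ks = stats.kstest(X + Y, stats.gamma(a=k1 + k2, scale=theta).cdf)
print(ks.pvalue)  # large p-value: no evidence against the predicted law
```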


Entire characteristic functions

As defined above, the argument of the characteristic function is treated as a real number; however, certain aspects of the theory of characteristic functions are advanced by extending the definition into the complex plane by analytic continuation, in cases where this is possible.


Related concepts

Related concepts include the moment-generating function and the probability-generating function. The characteristic function exists for all probability distributions. This is not the case for the moment-generating function.

The characteristic function is closely related to the Fourier transform: the characteristic function of a probability density function p(x) is the complex conjugate of the continuous Fourier transform of p(x) (according to the usual convention; see continuous Fourier transform – other conventions):

: \varphi_X(t) = \langle e^{itX} \rangle = \int_{\mathbb{R}} e^{itx}p(x)\, dx = \overline{\left(\int_{\mathbb{R}} e^{-itx}p(x)\, dx\right)} = \overline{P(t)},

where P(t) denotes the continuous Fourier transform of the probability density function p(x). Likewise, p(x) may be recovered from \varphi_X(t) through the inverse Fourier transform:

: p(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{itx} P(t)\, dt = \frac{1}{2\pi} \int_{\mathbb{R}} e^{itx} \overline{\varphi_X(t)}\, dt.

Indeed, even when the random variable does not have a density, the characteristic function may be seen as the Fourier transform of the measure corresponding to the random variable.

Another related concept is the representation of probability distributions as elements of a reproducing kernel Hilbert space via the kernel embedding of distributions. This framework may be viewed as a generalization of the characteristic function under specific choices of the kernel function.


See also

* Subindependence, a weaker condition than independence, which is defined in terms of characteristic functions.
* Cumulant, a term of the ''cumulant generating function'', which is the logarithm of the characteristic function.



