Product Distribution
A product distribution is a probability distribution constructed as the distribution of the product of random variables having two other known distributions. Given two statistically independent random variables ''X'' and ''Y'', the distribution of the random variable ''Z'' that is formed as the product Z = XY is a ''product distribution''. The product distribution is the PDF of the product of sample values. This is not the same as the product of their PDFs, although the two concepts are often conflated, as in the phrase "product of Gaussians".

Algebra of random variables
The product is one type of algebra for random variables: related to the product distribution are the ratio distribution, the sum distribution (see List of convolutions of probability distributions) and the difference distribution. More generally, one may talk of combinations of sums, differences, products and ratios. Many of these distributions are described in Melvin D. Springer's 1979 book ''The Algebra of Random Variables''.
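To make the distinction concrete, the following sketch (assuming NumPy and SciPy are available) simulates the product of two independent standard normals and compares the empirical density with the known closed form f(z) = K_0(|z|)/π, where K_0 is the modified Bessel function of the second kind; this is a product distribution, not a product of PDFs.

```python
# A minimal sketch: the distribution of Z = X*Y for independent standard
# normals X, Y has density K0(|z|)/pi (modified Bessel function of the
# second kind), which differs from the product of the two normal PDFs.
import numpy as np
from scipy.special import k0

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
z = x * y  # samples from the product distribution

# Compare the empirical histogram with the theoretical density away from
# z = 0 (the density has a logarithmic singularity at the origin).
hist, edges = np.histogram(z, bins=200, range=(-5, 5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
for g in (0.5, 1.0, 2.0, 3.0):
    i = np.argmin(np.abs(centers - g))
    print(f"z={g:.1f}  empirical={hist[i]:.4f}  K0(|z|)/pi={k0(g)/np.pi:.4f}")
```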
Probability Distribution
In probability theory and statistics, a probability distribution is a function that gives the probabilities of occurrence of possible events for an experiment. It is a mathematical description of a random phenomenon in terms of its sample space and the probabilities of events (subsets of the sample space). For instance, if ''X'' is used to denote the outcome of a coin toss ("the experiment"), then the probability distribution of ''X'' would take the value 0.5 (1 in 2, or 1/2) for ''X'' = heads, and 0.5 for ''X'' = tails (assuming that the coin is fair). More commonly, probability distributions are used to compare the relative occurrence of many different random values. Probability distributions can be defined in different ways and for discrete or for continuous variables. Distributions with special properties or for especially important applications are given specific names.
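A minimal sketch of the coin-toss example as a discrete distribution, using only the Python standard library; the dictionary representation is illustrative, not a standard API.

```python
# The PMF of a fair coin toss as a dictionary mapping outcomes to
# probabilities; a valid distribution must sum to one over the sample space.
pmf = {"heads": 0.5, "tails": 0.5}
assert abs(sum(pmf.values()) - 1.0) < 1e-12  # probabilities sum to one

# Drawing samples from the distribution:
import random
print(random.choices(list(pmf), weights=pmf.values(), k=10))
```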
Saddlepoint Approximation Method
The saddlepoint approximation method, initially proposed by Daniels (1954), is a specific example of the mathematical saddlepoint technique applied to statistics, in particular to the distribution of the sum of N independent random variables. It provides a highly accurate approximation formula for any PDF or probability mass function of a distribution, based on the moment generating function. There is also a formula for the CDF of the distribution, proposed by Lugannani and Rice (1980).

Definition
If the moment generating function of a random variable X = \sum_{i=1}^{N} X_i is written as M(t) = E\left[e^{tX}\right] = E\left[e^{t \sum_{i=1}^{N} X_i}\right] and the cumulant generating function as K(t) = \log(M(t)) = \sum_{i=1}^{N} \log E\left[e^{t X_i}\right], then the saddlepoint approximation to the PDF of X is defined as

:\hat{f}_X(x) = \frac{1}{\sqrt{2\pi K''(\hat{s})}} \exp\left(K(\hat{s}) - \hat{s}x\right)\left(1 + \mathcal{R}\right)

where \hat{s} solves the saddlepoint equation K'(\hat{s}) = x and \mathcal{R} contains higher order terms to refine the approximation.
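A minimal sketch of the method for a case where everything is available in closed form: the sum of N independent Exponential(1) variables, whose exact distribution is Gamma(N, 1). Here K(t) = -N log(1 - t) for t < 1, so the saddlepoint equation K'(s) = x solves to s = 1 - N/x; the function names are illustrative.

```python
# Saddlepoint approximation for the PDF of a sum of N Exponential(1)
# variables, compared with the exact Gamma(N, 1) density.
import numpy as np
from scipy.stats import gamma

N = 10

def K(t):   return -N * np.log(1.0 - t)   # cumulant generating function
def Kpp(t): return N / (1.0 - t) ** 2     # second derivative K''(t)

def saddlepoint_pdf(x):
    s_hat = 1.0 - N / x                   # closed-form solution of K'(s) = x
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2 * np.pi * Kpp(s_hat))

for x in (5.0, 10.0, 20.0):
    print(f"x={x:5.1f}  saddlepoint={saddlepoint_pdf(x):.5f}  "
          f"exact={gamma.pdf(x, a=N):.5f}")
```

Even without the higher-order correction terms, the leading-order approximation tracks the exact density closely well into the tails, which is the method's main appeal over a plain normal approximation.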
Journal Of Multivariate Analysis
The ''Journal of Multivariate Analysis'' is a monthly peer-reviewed scientific journal that covers applications and research in the field of multivariate statistical analysis. The journal's scope includes theoretical results as well as applications of new theoretical methods in the field. Some of the research areas covered include copula modeling, functional data analysis, graphical modeling, high-dimensional data analysis, image analysis, multivariate extreme-value theory, sparse modeling, and spatial statistics. According to the ''Journal Citation Reports'', the journal has a 2017 impact factor of 1.009.

See also
* List of statistics journals
Double Factorial
In mathematics, the double factorial of a number n, denoted by n!!, is the product of all the positive integers up to n that have the same parity (odd or even) as n. That is,

:n!! = \prod_{k=0}^{\lceil n/2 \rceil - 1} (n - 2k) = n(n-2)(n-4)\cdots.

Restated, this says that for even n, the double factorial is

:n!! = \prod_{k=1}^{n/2} (2k) = n(n-2)(n-4)\cdots 4 \cdot 2\,,

while for odd n it is

:n!! = \prod_{k=1}^{(n+1)/2} (2k-1) = n(n-2)(n-4)\cdots 3 \cdot 1\,.

For example, 9!! = 9 \cdot 7 \cdot 5 \cdot 3 \cdot 1 = 945. The zero double factorial 0!! = 1 is an empty product. The sequence of double factorials for even n = 0, 2, 4, 6, 8, ... starts as 1, 2, 8, 48, 384, ..., while the sequence of double factorials for odd n = 1, 3, 5, 7, 9, ... starts as 1, 3, 15, 105, 945, .... The term odd factorial is sometimes used for the double factorial of an odd number. The term semifactorial is also used by Knuth as a synonym of double factorial.

History and usage
The double factorial was originally introduced in a 1902 paper by the physicist Arthur Schuster, in order to simplify the expression of certain integrals of trigonometric functions.
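A minimal sketch of the definition as an iterative product; by the empty-product convention, 0!! = 1 (and likewise 1!! = 1).

```python
# Double factorial n!!: the product of all positive integers up to n with
# the same parity as n; returns 1 for n <= 1 (empty product).
def double_factorial(n: int) -> int:
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

assert double_factorial(9) == 9 * 7 * 5 * 3 * 1 == 945  # odd case
assert double_factorial(8) == 8 * 6 * 4 * 2 == 384      # even case
assert double_factorial(0) == 1                         # empty product
```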
Normal Distribution
In probability theory and statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is

:f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,.

The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma^2 is the variance. The standard deviation of the distribution is \sigma (sigma). A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases.
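A minimal sketch evaluating the density formula above directly and cross-checking it against SciPy's implementation.

```python
# Normal density f(x) computed straight from the formula, checked against
# scipy.stats.norm.pdf (loc = mean, scale = standard deviation).
import math
from scipy.stats import norm

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

for x in (-1.0, 0.0, 2.5):
    assert abs(normal_pdf(x, mu=1.0, sigma=2.0) - norm.pdf(x, loc=1.0, scale=2.0)) < 1e-12
```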
Fisher Transformation
In statistics, the Fisher transformation (or Fisher ''z''-transformation) of a Pearson correlation coefficient is its inverse hyperbolic tangent (artanh). When the sample correlation coefficient ''r'' is near 1 or -1, its distribution is highly skewed, which makes it difficult to estimate confidence intervals and apply tests of significance for the population correlation coefficient ρ. The Fisher transformation solves this problem by yielding a variable whose distribution is approximately normal, with a variance that is stable over different values of ''r''.

Definition
Given a set of ''N'' bivariate sample pairs (''X''''i'', ''Y''''i''), ''i'' = 1, ..., ''N'', the sample correlation coefficient ''r'' is given by

:r = \frac{\operatorname{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{\sum_{i=1}^{N} (X_i - \bar{X})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{N} (X_i - \bar{X})^2}\,\sqrt{\sum_{i=1}^{N} (Y_i - \bar{Y})^2}}.

Here \operatorname{cov}(X,Y) stands for the covariance between the variables X and Y and \sigma stands for the standard deviation of the respective variable. Fisher's ''z''-transformation of ''r'' is defined as

:z = \frac{1}{2}\ln\left(\frac{1+r}{1-r}\right) = \operatorname{artanh}(r).
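A minimal sketch of the transformation in practice, using the standard result (an assumption stated here, not derived in the excerpt) that z is approximately normal with standard error 1/√(N − 3); the function names are illustrative.

```python
# Fisher z-transformation of a sample correlation r, with an approximate
# 95% confidence interval for the population correlation obtained by
# working on the z scale and back-transforming with tanh.
import math

def fisher_z(r: float) -> float:
    return math.atanh(r)  # z = artanh(r) = 0.5 * ln((1 + r) / (1 - r))

def confidence_interval(r: float, n: int, z_crit: float = 1.96):
    z = fisher_z(r)
    se = 1.0 / math.sqrt(n - 3)            # approximate standard error of z
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)    # back-transform to the r scale

print(confidence_interval(r=0.8, n=30))
```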
Chi-squared Distribution
In probability theory and statistics, the \chi^2-distribution with k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. The chi-squared distribution \chi^2_k is a special case of the gamma distribution and the univariate Wishart distribution. Specifically, if X \sim \chi^2_k then X \sim \text{Gamma}(\alpha = \frac{k}{2}, \theta = 2) (where \alpha is the shape parameter and \theta the scale parameter of the gamma distribution) and X \sim \text{W}_1(1, k). The scaled chi-squared distribution s^2 \chi^2_k is a reparametrization of the gamma distribution and the univariate Wishart distribution. Specifically, if X \sim s^2 \chi^2_k then X \sim \text{Gamma}(\alpha = \frac{k}{2}, \theta = 2s^2) and X \sim \text{W}_1(s^2, k). The chi-squared distribution is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in the construction of confidence intervals.
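A minimal sketch verifying both facts by simulation: sums of k squared standard normals follow \chi^2_k, which coincides with Gamma(shape = k/2, scale = 2).

```python
# Simulate sums of squares of k standard normals and test agreement with
# both the chi-squared and the equivalent gamma parametrization.
import numpy as np
from scipy.stats import chi2, gamma, kstest

rng = np.random.default_rng(1)
k = 5
samples = (rng.standard_normal((100_000, k)) ** 2).sum(axis=1)

print(kstest(samples, chi2(df=k).cdf))               # large p-value expected
print(kstest(samples, gamma(a=k / 2, scale=2).cdf))  # same distribution
```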
Modified Bessel Function
Bessel functions, named after Friedrich Bessel who was the first to systematically study them in 1824, are canonical solutions of Bessel's differential equation

:x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + \left(x^2 - \alpha^2\right) y = 0

for an arbitrary complex number \alpha, which represents the ''order'' of the Bessel function. Although \alpha and -\alpha produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of \alpha. The most important cases are when \alpha is an integer or half-integer. Bessel functions for integer \alpha are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer \alpha are obtained when solving the Helmholtz equation in spherical coordinates.

Applications
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates.
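A minimal sketch checking numerically that the Bessel function of the first kind J_\alpha satisfies the differential equation above, using SciPy's jv (the function) and jvp (its derivatives).

```python
# Substitute y = J_alpha(x) into Bessel's equation and confirm the residual
# x^2 y'' + x y' + (x^2 - alpha^2) y vanishes to floating-point precision.
import numpy as np
from scipy.special import jv, jvp

alpha = 1.5
x = np.linspace(0.5, 10.0, 50)

residual = (x**2 * jvp(alpha, x, n=2)            # x^2 y''
            + x * jvp(alpha, x, n=1)             # + x y'
            + (x**2 - alpha**2) * jv(alpha, x))  # + (x^2 - alpha^2) y
print(np.max(np.abs(residual)))  # should be ~0 up to rounding error
```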
Characteristic Function (probability Theory)
In probability theory and statistics, the characteristic function of any real-valued random variable completely defines its probability distribution. If a random variable admits a probability density function, then the characteristic function is the Fourier transform (with sign reversal) of the probability density function. Thus it provides an alternative route to analytical results compared with working directly with probability density functions or cumulative distribution functions. There are particularly simple results for the characteristic functions of distributions defined by the weighted sums of random variables. In addition to univariate distributions, characteristic functions can be defined for vector- or matrix-valued random variables, and can also be extended to more generic cases. The characteristic function always exists when treated as a function of a real-valued argument, unlike the moment-generating function. There are relations between the behavior of the characteristic function of a distribution and properties of the distribution, such as the existence of moments and the existence of a density function.
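A minimal sketch estimating the characteristic function φ(t) = E[exp(itX)] by Monte Carlo for a standard normal X and comparing it with the known closed form φ(t) = exp(−t²/2).

```python
# Monte Carlo estimate of the characteristic function of a standard normal,
# versus its closed form; the imaginary part should be near zero by symmetry.
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1_000_000)

for t in (0.5, 1.0, 2.0):
    empirical = np.mean(np.exp(1j * t * x))   # sample average of exp(itX)
    exact = np.exp(-t**2 / 2)
    print(f"t={t}: empirical={empirical:.4f}, exact={exact:.4f}")
```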
Copula (probability Theory)
In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe or model the dependence (inter-correlation) between random variables. Their name, introduced by the applied mathematician Abe Sklar in 1959, comes from the Latin for "link" or "tie", similar but only metaphorically related to grammatical copulas in linguistics. Copulas have been used widely in quantitative finance to model and minimize tail risk and in portfolio-optimization applications. Sklar's theorem states that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula which describes the dependence structure between the variables. Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating the marginals and the copula separately.
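A minimal sketch of the separation Sklar's theorem describes, using a bivariate Gaussian copula as the (assumed) dependence structure: draw correlated normals, push each coordinate through the standard normal CDF to get uniform marginals, then impose arbitrary marginals via inverse CDFs.

```python
# Sampling from a bivariate Gaussian copula and attaching Exponential(1)
# marginals: the dependence comes from the copula, the marginals are chosen
# independently of it.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
u = norm.cdf(z)          # each column of u is Uniform(0, 1): the copula sample

x = -np.log(1.0 - u)     # inverse-CDF step: Exponential(1) marginals
print(np.corrcoef(u[:, 0], u[:, 1])[0, 1])  # dependence retained, near rho
```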
List Of Convolutions Of Probability Distributions
In probability theory, the probability distribution of the sum of two or more independent random variables is the convolution of their individual distributions. The term is motivated by the fact that the probability mass function or probability density function of a sum of independent random variables is the convolution of their corresponding probability mass functions or probability density functions respectively. Many well known distributions have simple convolutions. The following is a list of these convolutions. Each statement is of the form

:\sum_{i=1}^{n} X_i \sim Y

where X_1, X_2, \dots, X_n are independent random variables, and Y is the distribution that results from the convolution of X_1, X_2, \dots, X_n. In place of X_i and Y the names of the corresponding distributions and their parameters have been indicated.

Discrete distributions
* \sum_{i=1}^{n} \mathrm{Bernoulli}(p) \sim \mathrm{Binomial}(n, p) \qquad 0 < p < 1
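A minimal sketch checking the first identity in the list by simulation: the sum of n independent Bernoulli(p) variables has the Binomial(n, p) distribution.

```python
# Compare the empirical distribution of sums of n Bernoulli(p) draws with
# the Binomial(n, p) probability mass function.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(4)
n, p = 8, 0.3
sums = rng.binomial(1, p, size=(200_000, n)).sum(axis=1)  # n Bernoulli draws per row

empirical = np.bincount(sums, minlength=n + 1) / len(sums)
exact = binom.pmf(np.arange(n + 1), n, p)
print(np.max(np.abs(empirical - exact)))  # small -> the distributions agree
```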