




Beta Negative Binomial Distribution
In probability theory, a beta negative binomial distribution is the probability distribution of a discrete random variable X equal to the number of failures needed to get r successes in a sequence of independent Bernoulli trials. The probability p of success on each trial stays constant within any given experiment but varies across different experiments following a beta distribution. Thus the distribution is a compound probability distribution. This distribution has also been called both the inverse Markov-Pólya distribution and the generalized Waring distribution (Johnson et al. 1993), or is simply abbreviated as the BNB distribution. A shifted form of the distribution has been called the beta-Pascal distribution. If the parameters of the beta distribution are \alpha and \beta, and if

X \mid p \sim \mathrm{NB}(r,p),

where

p \sim \mathrm{Beta}(\alpha,\beta),

then the marginal distribution of X (i.e. the posterior pre ...
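A minimal sampling sketch of this compounding (assuming NumPy; the values of r, \alpha and \beta below are illustrative, not taken from the text): draw p from a beta distribution, then draw the number of failures before the r-th success from a negative binomial with that p.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_bnb(r, alpha, beta, size=100_000):
        """Compound sampling: p ~ Beta(alpha, beta), then X | p ~ NB(r, p)."""
        p = rng.beta(alpha, beta, size=size)   # success probability varies across experiments
        return rng.negative_binomial(r, p)     # failures before the r-th success, one draw per p

    x = sample_bnb(r=5, alpha=3.0, beta=2.0)
    print(x.mean())   # for alpha > 1 this should be close to r*beta/(alpha - 1) = 5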


Shape Parameter
In probability theory and statistics, a shape parameter (also known as form parameter) is a kind of numerical parameter of a parametric family of probability distributions (Everitt B.S. (2002) Cambridge Dictionary of Statistics, 2nd Edition, CUP) that is neither a location parameter nor a scale parameter (nor a function of these, such as a rate parameter). Such a parameter must affect the shape of a distribution rather than simply shifting it (as a location parameter does) or stretching/shrinking it (as a scale parameter does). For example, "peakedness" refers to how round the main peak is.

Estimation: Many estimators measure location or scale; however, estimators for shape parameters also exist. Most simply, they can be estimated in terms of the higher moments, using the method of moments, as in the skewness (3rd moment) or kurtosis (4th moment), if the higher moments are defined and finite. Estimato ...
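A small sketch of moment-based shape estimation (assuming SciPy; the gamma sample and its shape value are illustrative): the sample skewness and kurtosis are the 3rd and 4th standardized moments, and the skewness alone already yields a method-of-moments estimate of a gamma shape parameter.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    data = rng.gamma(shape=2.0, scale=1.0, size=50_000)   # gamma distribution with shape k = 2

    g1 = stats.skew(data)              # 3rd standardized moment (skewness)
    g2 = stats.kurtosis(data)          # 4th standardized moment (excess kurtosis, normal -> 0)

    # for a gamma(k) distribution, skewness = 2/sqrt(k), so invert to estimate k:
    k_hat = (2.0 / g1) ** 2
    print(g1, g2, k_hat)               # k_hat should land near 2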


Pochhammer Symbol
In mathematics, the falling factorial (sometimes called the descending factorial, falling sequential product, or lower factorial) is defined as the polynomial

(x)_n = x^{\underline{n}} = \overbrace{x(x-1)(x-2)\cdots(x-n+1)}^{n \text{ factors}} = \prod_{k=1}^{n}(x-k+1) = \prod_{k=0}^{n-1}(x-k).

The rising factorial (sometimes called the Pochhammer function, Pochhammer polynomial, ascending factorial, rising sequential product, or upper factorial) is defined as

x^{(n)} = x^{\overline{n}} = \overbrace{x(x+1)(x+2)\cdots(x+n-1)}^{n \text{ factors}} = \prod_{k=1}^{n}(x+k-1) = \prod_{k=0}^{n-1}(x+k).

The value of each is taken to be 1 (an empty product) when n=0. These symbols are collectively called factorial powers. The Pochhammer symbol, introduced by Leo August Pochhammer, is the notation (x)_n, where n is a non-negative integer. It may represent either the rising or the falling factorial, with different articles and authors using different conventions. Pochhammer himself actually used (x)_n with yet another meaning, ...
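A short, dependency-free sketch of the two factorial powers as finite products, returning 1 for n = 0 (the empty product):

    def falling_factorial(x, n):
        """x(x-1)...(x-n+1), i.e. n factors stepping downward."""
        result = 1
        for k in range(n):
            result *= (x - k)
        return result

    def rising_factorial(x, n):
        """x(x+1)...(x+n-1), i.e. n factors stepping upward."""
        result = 1
        for k in range(n):
            result *= (x + k)
        return result

    print(falling_factorial(5, 3))   # 5*4*3 = 60
    print(rising_factorial(5, 3))    # 5*6*7 = 210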



Urn Model
In probability and statistics, an urn problem is an idealized mental exercise in which some objects of real interest (such as atoms, people, cars, etc.) are represented as colored balls in an urn or other container. One pretends to remove one or more balls from the urn; the goal is to determine the probability of drawing one color or another, or some other properties. A number of important variations are described below. An urn model is either a set of probabilities that describe events within an urn problem, or it is a probability distribution, or a family of such distributions, of random variables associated with urn problems. History In ''Ars Conjectandi'' (1713), Jacob Bernoulli considered the problem of determining, given a number of pebbles drawn from an urn, the proportions of different colored pebbles within the urn. This problem was known as the ''inverse probability'' problem, and was a topic of research in the eighteenth century, attracting the attention of Abraham d ...
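A brief simulation sketch of the basic urn question (assuming NumPy; the urn contents and the event are illustrative): estimate the probability that two balls drawn without replacement from an urn of 5 red and 7 blue balls are both red.

    import numpy as np

    rng = np.random.default_rng(2)
    urn = np.array(["red"] * 5 + ["blue"] * 7)

    trials = 100_000
    hits = 0
    for _ in range(trials):
        picked = rng.choice(urn, size=2, replace=False)   # remove two balls without replacement
        hits += np.all(picked == "red")

    print(hits / trials)   # should approach (5/12) * (4/11), about 0.1515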



Yule–Simon Distribution
In probability and statistics, the Yule–Simon distribution is a discrete probability distribution named after Udny Yule and Herbert A. Simon. Simon originally called it the Yule distribution. The probability mass function (pmf) of the Yule–Simon (ρ) distribution is

f(k;\rho) = \rho\,\mathrm{B}(k, \rho+1),

for integer k \geq 1 and real \rho > 0, where \mathrm{B} is the beta function. Equivalently the pmf can be written in terms of the rising factorial as

f(k;\rho) = \frac{\rho\,\Gamma(\rho+1)}{k^{\overline{\rho+1}}} = \frac{\rho\,\Gamma(\rho+1)\,\Gamma(k)}{\Gamma(k+\rho+1)},

where \Gamma is the gamma function and k^{\overline{\rho+1}} = \Gamma(k+\rho+1)/\Gamma(k). Thus, if \rho is an integer,

f(k;\rho) = \frac{\rho\,\rho!\,(k-1)!}{(k+\rho)!}.

The parameter \rho can be estimated using a fixed point algorithm. The probability mass function f has the property that for sufficiently large k we have

f(k;\rho) \approx \frac{\rho\,\Gamma(\rho+1)}{k^{\rho+1}} \propto \frac{1}{k^{\rho+1}}.

This means that the tail of the Yule–Simon distribution is a realization of Zipf's law: f(k;\rho) can be used to model, for example, the relative fre ...
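A small numeric sketch (assuming SciPy; ρ = 2 is illustrative) of the pmf via the beta function and of the power-law tail approximation:

    import numpy as np
    from scipy.special import beta as beta_fn, gamma as gamma_fn

    def yule_simon_pmf(k, rho):
        """f(k; rho) = rho * B(k, rho + 1) for integer k >= 1."""
        return rho * beta_fn(k, rho + 1.0)

    rho = 2.0
    k = np.arange(1, 8)
    print(yule_simon_pmf(k, rho))                               # exact pmf values

    k_large = 1000
    print(yule_simon_pmf(k_large, rho))                         # exact tail value
    print(rho * gamma_fn(rho + 1.0) / k_large ** (rho + 1.0))   # power-law approximation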


Buy Till You Die
The Buy Till You Die (BTYD) class of statistical models is designed to capture the behavioral characteristics of non-contractual customers, i.e., settings in which a company cannot directly observe when a customer stops being a customer of a brand. The goal is typically to model and forecast customer lifetime value. All BTYD models jointly model two processes: (1) a repeat purchase process, which explains how frequently customers make purchases while they are still "alive"; and (2) a dropout process, which models how likely a customer is to churn in any given time period. Common versions of the BTYD model include:

* The Pareto/NBD model, which models the dropout process as a Pareto Type II distribution and the purchase frequency process as a negative binomial distribution
* The Beta-Geometric/NBD model, which models the dropout process as a geometric distribution with a beta mixing distribution, and models the purchase frequency process as a negative binomial distribution. The ...
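A rough simulation sketch of the two jointly modeled processes (assuming NumPy; the function, the parameter values, and the discrete-time simplification are all hypothetical, not a published BTYD specification): each customer receives an individual purchase rate and an individual dropout probability, and purchases accumulate only while the customer is still "alive".

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_customer(periods=52, r=2.0, alpha=4.0, a=1.0, b=3.0):
        """Hypothetical BG/NBD-flavoured sketch:
        lam ~ Gamma(r, 1/alpha) governs purchase frequency (NBD when mixed over customers),
        p ~ Beta(a, b) is checked once per period as a simplified dropout process."""
        lam = rng.gamma(r, 1.0 / alpha)     # individual purchase rate
        p = rng.beta(a, b)                  # individual dropout probability
        purchases = 0
        for _ in range(periods):
            purchases += rng.poisson(lam)   # repeat-purchase process while "alive"
            if rng.random() < p:            # dropout process: customer churns this period
                break
        return purchases

    counts = [simulate_customer() for _ in range(10_000)]
    print(np.mean(counts))                  # average repeat purchases per customer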



Moment (mathematics)
In mathematics, the moments of a function are certain quantitative measures related to the shape of the function's graph. If the function represents mass density, then the zeroth moment is the total mass, the first moment (normalized by total mass) is the center of mass, and the second moment is the moment of inertia. If the function is a probability distribution, then the first moment is the expected value, the second central moment is the variance, the third standardized moment is the skewness, and the fourth standardized moment is the kurtosis. For a distribution of mass or probability on a bounded interval, the collection of all the moments (of all orders, from 0 to \infty) uniquely determines the distribution (Hausdorff moment problem). The same is not true on unbounded intervals (Hamburger moment problem). In the mid-nineteenth century, Pafnuty Chebyshev became the first person to think systematically in terms of the moments of random variables. Significance of th ...
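A short sketch (assuming NumPy/SciPy; the normal sample is illustrative) computing the moments named above for a sample:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    x = rng.normal(loc=2.0, scale=3.0, size=100_000)

    mean = x.mean()                          # first raw moment: expected value
    var = x.var()                            # second central moment: variance
    skew = stats.skew(x)                     # third standardized moment: skewness
    kurt = stats.kurtosis(x, fisher=False)   # fourth standardized moment: kurtosis (normal -> 3)

    print(mean, var, skew, kurt)             # roughly 2, 9, 0, 3 here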



Heavy Tail Distribution
In probability theory, heavy-tailed distributions are probability distributions whose tails are not exponentially bounded: that is, they have heavier tails than the exponential distribution. Roughly speaking, “heavy-tailed” means the distribution decreases more slowly than an exponential distribution, so extreme values are more likely. In many applications it is the right tail of the distribution that is of interest, but a distribution may have a heavy left tail, or both tails may be heavy. There are three important subclasses of heavy-tailed distributions: the fat-tailed distributions, the long-tailed distributions, and the subexponential distributions. In practice, all commonly used heavy-tailed distributions belong to the subexponential class, introduced by Jozef Teugels. There is still some discrepancy over the use of the term heavy-tailed. There are two other definitions in use. Some authors use the term to refer to those distributions which do not have all their pow ...
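A small numeric sketch (assuming SciPy; the Pareto shape and the matched mean of 2 are illustrative) contrasting the right-tail decay of an exponential distribution with that of a heavy-tailed Pareto distribution:

    from scipy import stats

    expo = stats.expon(scale=2.0)                    # exponential with mean 2
    pareto = stats.pareto(b=3.0, scale=4.0 / 3.0)    # Pareto(alpha=3), mean alpha*scale/(alpha-1) = 2

    for t in (5, 10, 20, 50):
        # survival function P(X > t): the Pareto tail shrinks polynomially, the exponential exponentially
        print(t, expo.sf(t), pareto.sf(t))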



Stirling's Approximation
In mathematics, Stirling's approximation (or Stirling's formula) is an asymptotic approximation for factorials. It is a good approximation, leading to accurate results even for small values of n. It is named after James Stirling, though a related but less precise result was first stated by Abraham de Moivre. One way of stating the approximation involves the logarithm of the factorial: \ln(n!) = n\ln n - n +O(\ln n), where the big O notation means that, for all sufficiently large values of n, the difference between \ln(n!) and n\ln n-n will be at most proportional to the logarithm of n. In computer science applications such as the worst-case lower bound for comparison sorting, it is convenient to instead use the binary logarithm, giving the equivalent form \log_2 (n!) = n\log_2 n - n\log_2 e +O(\log_2 n). The error term in either base can be expressed more precisely as \tfrac12\log(2\pi n)+O(\tfrac1n), corresponding to an approximate formula for the factorial itself, n! \sim \sqr ...
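A quick numeric check of the approximation (plain Python with the math module), comparing \ln(n!) with n\ln n - n and with the refined version that adds \tfrac12\log(2\pi n):

    import math

    for n in (5, 10, 50, 100):
        exact = math.lgamma(n + 1)                            # ln(n!)
        crude = n * math.log(n) - n                           # n ln n - n
        refined = crude + 0.5 * math.log(2 * math.pi * n)     # add (1/2) ln(2*pi*n)
        print(n, exact, crude, refined)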



Poisson Distribution
In probability theory and statistics, the Poisson distribution is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time if these events occur with a known constant mean rate and independently of the time since the last event. It can also be used for the number of events in other types of intervals than time, and in dimension greater than 1 (e.g., number of events in a given area or volume). The Poisson distribution is named after French mathematician Siméon Denis Poisson. It plays an important role for discrete-stable distributions. Under a Poisson distribution with the expectation of λ events in a given interval, the probability of k events in the same interval is

\frac{\lambda^k e^{-\lambda}}{k!}.

For instance, consider a call center which receives an average of λ = 3 calls per minute at all times of day. If the calls are independent, receiving one does not change the probability of when the next on ...
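A short sketch (assuming SciPy) reproducing the call-center example with λ = 3 calls per minute:

    from scipy import stats

    lam = 3.0                                   # average calls per minute
    for k in range(7):
        # P(k calls in one minute) = lam**k * exp(-lam) / k!
        print(k, stats.poisson.pmf(k, mu=lam))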




Geometric Distribution
In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:

* The probability distribution of the number X of Bernoulli trials needed to get one success, supported on \mathbb{N} = \{1, 2, 3, \dots\};
* The probability distribution of the number Y = X - 1 of failures before the first success, supported on \mathbb{N}_0 = \{0, 1, 2, \dots\}.

These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (distribution of X); however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly. The geometric distribution gives the probability that the first occurrence of success requires k independent trials, each with success probability p. If the probability of success on each trial is p, then the probability that the k-th trial is the first success is

\Pr(X = k) = (1-p)^{k-1} p

for k = 1, 2, 3, 4, \dots The above form of ...
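A brief sketch (assuming SciPy; p = 0.2 is illustrative) checking the pmf of the trial-counting form against the closed formula:

    from scipy import stats

    p = 0.2                                         # success probability per trial
    for k in range(1, 8):
        direct = (1 - p) ** (k - 1) * p             # Pr(X = k): first success on the k-th trial
        print(k, direct, stats.geom.pmf(k, p))      # scipy's geom uses the same support {1, 2, ...}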


Constraint (mathematics)
In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints, primarily equality constraints, inequality constraints, and integer constraints. The set of candidate solutions that satisfy all constraints is called the feasible set.

Example: The following is a simple optimization problem:

\min f(\mathbf x) = x_1^2 + x_2^4

subject to

x_1 \ge 1

and

x_2 = 1,

where \mathbf x denotes the vector (x_1, x_2). In this example, the first line defines the function to be minimized (called the objective function, loss function, or cost function). The second and third lines define two constraints, the first of which is an inequality constraint and the second of which is an equality constraint. These two constraints are hard constraints, meaning that it is required that they be satisfied; they define the feasible set of candidate solutions. Without the constraints, the solution would be (0,0), ...
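A small sketch (assuming SciPy's optimizer) solving the example problem above with its inequality and equality constraints:

    import numpy as np
    from scipy.optimize import minimize

    def objective(x):
        return x[0] ** 2 + x[1] ** 4                     # f(x) = x1^2 + x2^4

    constraints = [
        {"type": "ineq", "fun": lambda x: x[0] - 1.0},   # x1 >= 1 ("ineq" means fun(x) >= 0)
        {"type": "eq",   "fun": lambda x: x[1] - 1.0},   # x2 = 1
    ]

    result = minimize(objective, x0=np.array([2.0, 2.0]),
                      method="SLSQP", constraints=constraints)
    print(result.x, result.fun)                          # expected optimum near (1, 1) with value 2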


Estimation Theory
Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured empirical data that has a random component. The parameters describe an underlying physical setting in such a way that their value affects the distribution of the measured data. An estimator attempts to approximate the unknown parameters using the measurements. In estimation theory, two approaches are generally considered:

* The probabilistic approach (described in this article) assumes that the measured data is random with probability distribution dependent on the parameters of interest
* The set-membership approach assumes that the measured data vector belongs to a set which depends on the parameter vector.

Examples: For example, it is desired to estimate the proportion of a population of voters who will vote for a particular candidate. That proportion is the parameter sought; the estimate is based on a small random sa ...
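A minimal sketch of the voter-proportion example (assuming NumPy; the true proportion and sample size are made up): the sample proportion is the estimator, and its approximate standard error quantifies the uncertainty.

    import numpy as np

    rng = np.random.default_rng(5)

    true_p = 0.47                                      # unknown parameter (hypothetical value)
    sample = rng.random(1_000) < true_p                # 1000 polled voters; True = votes for the candidate

    p_hat = sample.mean()                              # estimator: the sample proportion
    se = np.sqrt(p_hat * (1 - p_hat) / sample.size)    # approximate standard error of the estimate
    print(p_hat, se)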