In probability and statistics, the Dirichlet distribution (after Peter Gustav Lejeune Dirichlet), often denoted Dir(''α''), is a family of continuous multivariate probability distributions parameterized by a vector \boldsymbol\alpha of positive reals. It is a multivariate generalization of the beta distribution, hence its alternative name of multivariate beta distribution (MBD). Dirichlet distributions are commonly used as prior distributions in Bayesian statistics, and in fact, the Dirichlet distribution is the conjugate prior of the categorical distribution and multinomial distribution.

The infinite-dimensional generalization of the Dirichlet distribution is the ''Dirichlet process''.
Definitions
Probability density function
The Dirichlet distribution of order ''K'' ≥ 2 with parameters \alpha_1, \ldots, \alpha_K > 0 has a probability density function with respect to Lebesgue measure on the Euclidean space \mathbb{R}^{K-1} given by

: f(x_1, \ldots, x_K; \alpha_1, \ldots, \alpha_K) = \frac{1}{\mathrm{B}(\boldsymbol\alpha)} \prod_{i=1}^{K} x_i^{\alpha_i - 1}
where \{x_i\}_{i=1}^{K} belong to the standard (''K'' − 1)-simplex, or in other words:

: \sum_{i=1}^{K} x_i = 1 \quad \text{and} \quad x_i \geq 0 \text{ for all } i \in \{1, \ldots, K\}
The normalizing constant is the multivariate beta function, which can be expressed in terms of the gamma function:

: \mathrm{B}(\boldsymbol\alpha) = \frac{\prod_{i=1}^{K} \Gamma(\alpha_i)}{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}, \qquad \boldsymbol\alpha = (\alpha_1, \ldots, \alpha_K)
Support
The support of the Dirichlet distribution is the set of ''K''-dimensional vectors \boldsymbol{x} whose entries are real numbers in the interval [0,1] such that \|\boldsymbol{x}\|_1 = 1, i.e. the sum of the coordinates is equal to 1. These can be viewed as the probabilities of a ''K''-way categorical event. Another way to express this is that the domain of the Dirichlet distribution is itself a set of probability distributions, specifically the set of ''K''-dimensional discrete distributions. The technical term for the set of points in the support of a ''K''-dimensional Dirichlet distribution is the open standard (''K'' − 1)-simplex, which is a generalization of a triangle, embedded in the next-higher dimension. For example, with ''K'' = 3, the support is an equilateral triangle embedded in a downward-angle fashion in three-dimensional space, with vertices at (1,0,0), (0,1,0) and (0,0,1), i.e. touching each of the coordinate axes at a point 1 unit away from the origin.
Special cases
A common special case is the symmetric Dirichlet distribution, where all of the elements making up the parameter vector \boldsymbol\alpha have the same value. The symmetric case might be useful, for example, when a Dirichlet prior over components is called for, but there is no prior knowledge favoring one component over another. Since all elements of the parameter vector have the same value, the symmetric Dirichlet distribution can be parametrized by a single scalar value ''α'', called the concentration parameter. In terms of ''α'', the density function has the form

: f(x_1, \ldots, x_K; \alpha) = \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{i=1}^{K} x_i^{\alpha - 1}
When ''α'' = 1, the symmetric Dirichlet distribution is equivalent to a uniform distribution over the open standard (''K'' − 1)-simplex, i.e. it is uniform over all points in its support. This particular distribution is known as the flat Dirichlet distribution. Values of the concentration parameter above 1 favor variates that are dense, evenly distributed distributions, i.e. all the values within a single sample are similar to each other. Values of the concentration parameter below 1 favor sparse distributions, i.e. most of the values within a single sample will be close to 0, and the vast majority of the mass will be concentrated in a few of the values.
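As an illustration of this behavior, the following is a minimal sketch, assuming NumPy is available (its Generator.dirichlet is a built-in Dirichlet sampler), drawing one symmetric Dirichlet variate at several concentrations:

import numpy as np

rng = np.random.default_rng(0)
K = 5
for alpha in (0.1, 1.0, 10.0):
    # one draw from the symmetric Dirichlet of order K with concentration alpha
    x = rng.dirichlet([alpha] * K)
    print(alpha, np.round(x, 3))

# alpha = 0.1 tends to put nearly all mass on a few components (sparse);
# alpha = 10 tends to spread mass almost evenly (dense).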
More generally, the parameter vector is sometimes written as the product \alpha \boldsymbol{n} of a (scalar) concentration parameter ''α'' and a (vector) base measure \boldsymbol{n} = (n_1, \ldots, n_K) where \boldsymbol{n} lies within the (''K'' − 1)-simplex (i.e.: its coordinates n_i sum to one). The concentration parameter in this case is larger by a factor of ''K'' than the concentration parameter for a symmetric Dirichlet distribution described above. This construction ties in with the concept of a base measure when discussing Dirichlet processes and is often used in the topic modelling literature.
: If we define the concentration parameter as the sum of the Dirichlet parameters for each dimension, the Dirichlet distribution with concentration parameter ''K'', the dimension of the distribution, is the uniform distribution on the (''K'' − 1)-simplex.
Properties
Moments
Let X = (X_1, \ldots, X_K) \sim \operatorname{Dir}(\boldsymbol\alpha).

Let

: \alpha_0 = \sum_{i=1}^{K} \alpha_i.

Then

: \operatorname{E}[X_i] = \frac{\alpha_i}{\alpha_0},

: \operatorname{Var}[X_i] = \frac{\alpha_i (\alpha_0 - \alpha_i)}{\alpha_0^2 (\alpha_0 + 1)}.

Furthermore, if i \neq j,

: \operatorname{Cov}[X_i, X_j] = \frac{-\alpha_i \alpha_j}{\alpha_0^2 (\alpha_0 + 1)}.

The covariance matrix is thus singular.

More generally, moments of Dirichlet-distributed random variables can be expressed as

: \operatorname{E}\left[\prod_{i=1}^{K} X_i^{\beta_i}\right] = \frac{\mathrm{B}(\boldsymbol\alpha + \boldsymbol\beta)}{\mathrm{B}(\boldsymbol\alpha)} = \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\Gamma\left(\sum_{i=1}^{K} (\alpha_i + \beta_i)\right)} \prod_{i=1}^{K} \frac{\Gamma(\alpha_i + \beta_i)}{\Gamma(\alpha_i)}.
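These formulas can be checked numerically; the following is a minimal sketch, assuming NumPy is available and using an arbitrary parameter vector:

import numpy as np

alpha = np.array([2.0, 5.0, 3.0])
a0 = alpha.sum()
mean = alpha / a0                                # E[X_i] = alpha_i / alpha_0
var = alpha * (a0 - alpha) / (a0**2 * (a0 + 1))  # Var[X_i]

rng = np.random.default_rng(0)
samples = rng.dirichlet(alpha, size=200_000)
print(mean, samples.mean(axis=0))  # empirical means should agree closely
print(var, samples.var(axis=0))    # empirical variances should agree closely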
Mode
The mode of the distribution is the vector (x_1, \ldots, x_K) with

: x_i = \frac{\alpha_i - 1}{\alpha_0 - K}, \qquad \alpha_i > 1.
Marginal distributions
The marginal distributions are beta distributions:

: X_i \sim \operatorname{Beta}(\alpha_i, \alpha_0 - \alpha_i).
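This marginal property can be verified empirically; the sketch below, assuming NumPy and SciPy are available, compares the first coordinate of Dirichlet samples to the corresponding beta distribution with a Kolmogorov–Smirnov test:

import numpy as np
from scipy import stats

alpha = np.array([2.0, 3.0, 4.0])
rng = np.random.default_rng(0)
x1 = rng.dirichlet(alpha, size=10_000)[:, 0]  # first coordinate of each sample

# X_1 should follow Beta(alpha_1, alpha_0 - alpha_1);
# a large p-value is consistent with this hypothesis
print(stats.kstest(x1, "beta", args=(alpha[0], alpha.sum() - alpha[0])))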
Conjugate to categorical/multinomial
The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic discrete probability distribution with a given number of possible outcomes) and multinomial distribution (the distribution over observed counts of each possible category in a set of categorically distributed observations). This means that if a data point has either a categorical or multinomial distribution, and the prior distribution of the distribution's parameter (the vector of probabilities that generates the data point) is distributed as a Dirichlet, then the posterior distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we know about the parameter prior to observing the data point, we can then update our knowledge based on the data point and end up with a new distribution of the same form as the old one. This means that we can successively update our knowledge of a parameter by incorporating new observations one at a time, without running into mathematical difficulties.
Formally, this can be expressed as follows. Given a model

: \begin{align}
\boldsymbol\alpha &= (\alpha_1, \ldots, \alpha_K) = \text{concentration hyperparameter} \\
\mathbf{p} \mid \boldsymbol\alpha &= (p_1, \ldots, p_K) \sim \operatorname{Dir}(K, \boldsymbol\alpha) \\
\mathbb{X} \mid \mathbf{p} &= (\mathbf{x}_1, \ldots, \mathbf{x}_N) \sim \operatorname{Cat}(K, \mathbf{p})
\end{align}

then the following holds:

: \begin{align}
\mathbf{c} &= (c_1, \ldots, c_K) = \text{number of occurrences of category } i \\
\mathbf{p} \mid \mathbb{X}, \boldsymbol\alpha &\sim \operatorname{Dir}(K, \mathbf{c} + \boldsymbol\alpha) = \operatorname{Dir}(K, c_1 + \alpha_1, \ldots, c_K + \alpha_K)
\end{align}
This relationship is used in Bayesian statistics to estimate the underlying parameter p of a categorical distribution given a collection of ''N'' samples. Intuitively, we can view the hyperprior vector α as pseudocounts, i.e. as representing the number of observations in each category that we have already seen. Then we simply add in the counts for all the new observations (the vector c) in order to derive the posterior distribution.
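A minimal sketch of this pseudocount update in Python (standard library only; the category labels and observations are illustrative):

from collections import Counter

alpha = {"a": 2.0, "b": 2.0, "c": 2.0}  # prior pseudocounts (symmetric Dirichlet, alpha = 2)
data = ["a", "b", "a", "a", "c", "a"]   # categorical observations

counts = Counter(data)
posterior = {k: alpha[k] + counts[k] for k in alpha}  # Dir(alpha + c), again a Dirichlet
print(posterior)  # {'a': 6.0, 'b': 3.0, 'c': 3.0}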
In Bayesian
mixture models and other
hierarchical Bayesian models with mixture components, Dirichlet distributions are commonly used as the prior distributions for the
categorical variables appearing in the models. See the section on
applications below for more information.
Relation to Dirichlet-multinomial distribution
In a model where a Dirichlet prior distribution is placed over a set of categorical-valued observations, the marginal joint distribution of the observations (i.e. the joint distribution of the observations, with the prior parameter marginalized out) is a Dirichlet-multinomial distribution. This distribution plays an important role in hierarchical Bayesian models, because when doing inference over such models using methods such as Gibbs sampling or variational Bayes, Dirichlet prior distributions are often marginalized out. See the article on this distribution for more details.
Entropy
If ''X'' is a Dir(''α'') random variable, the differential entropy of ''X'' (in nat units) is

: h(\boldsymbol{X}) = \operatorname{E}[-\ln f(\boldsymbol{X})] = \ln \mathrm{B}(\boldsymbol\alpha) + (\alpha_0 - K) \psi(\alpha_0) - \sum_{j=1}^{K} (\alpha_j - 1) \psi(\alpha_j)

where \psi is the digamma function.
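The entropy formula above translates directly into code; a minimal sketch, assuming NumPy and SciPy are available (gammaln and digamma are SciPy's log-gamma and digamma functions):

import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_entropy(alpha):
    """Differential entropy of Dir(alpha) in nats."""
    alpha = np.asarray(alpha, dtype=float)
    a0, K = alpha.sum(), alpha.size
    log_B = gammaln(alpha).sum() - gammaln(a0)  # log of the multivariate beta function
    return log_B + (a0 - K) * digamma(a0) - ((alpha - 1.0) * digamma(alpha)).sum()

print(dirichlet_entropy([1.0, 1.0, 1.0]))  # flat case: -ln 2 (about -0.693)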
The following formula for \operatorname{E}[\ln X_i] can be used to derive the differential entropy above:

: \operatorname{E}[\ln X_i] = \psi(\alpha_i) - \psi(\alpha_0)

Conjugate prior of the Dirichlet distribution

Because the Dirichlet distribution is an exponential family distribution, it has a conjugate prior, here denoted \operatorname{CD}. Given a ''prior'' \boldsymbol\alpha \sim \operatorname{CD}(\cdot \mid \boldsymbol{v}, \eta) and an ''observation'' \boldsymbol{x} \mid \boldsymbol\alpha \sim \operatorname{Dirichlet}(\cdot \mid \boldsymbol\alpha), the ''posterior'' is

: \boldsymbol\alpha \mid \boldsymbol{x} \sim \operatorname{CD}(\cdot \mid \boldsymbol{v} - \log \boldsymbol{x}, \eta + 1).

In the published literature there is no practical algorithm to efficiently generate samples from \operatorname{CD}(\boldsymbol\alpha \mid \boldsymbol{v}, \eta).
Occurrence and applications
Bayesian models
Dirichlet distributions are most commonly used as the prior distribution of categorical variables or multinomial variables in Bayesian mixture models and other hierarchical Bayesian models. (In many fields, such as in natural language processing, categorical variables are often imprecisely called "multinomial variables". Such a usage is unlikely to cause confusion, just as when Bernoulli distributions and binomial distributions are commonly conflated.)
Inference over hierarchical Bayesian models is often done using Gibbs sampling, and in such a case, instances of the Dirichlet distribution are typically marginalized out of the model by integrating out the Dirichlet random variable. This causes the various categorical variables drawn from the same Dirichlet random variable to become correlated, and the joint distribution over them assumes a Dirichlet-multinomial distribution, conditioned on the hyperparameters of the Dirichlet distribution (the concentration parameters). One of the reasons for doing this is that Gibbs sampling of the Dirichlet-multinomial distribution is extremely easy; see that article for more information.
Intuitive interpretations of the parameters
The concentration parameter
Dirichlet distributions are very often used as prior distributions in Bayesian inference. The simplest and perhaps most common type of Dirichlet prior is the symmetric Dirichlet distribution, where all parameters are equal. This corresponds to the case where you have no prior information to favor one component over any other. As described above, the single value ''α'' to which all parameters are set is called the concentration parameter. If the sample space of the Dirichlet distribution is interpreted as a discrete probability distribution, then intuitively the concentration parameter can be thought of as determining how "concentrated" the probability mass of a sample from the Dirichlet distribution is likely to be. With a value much less than 1, the mass will be highly concentrated in a few components, and all the rest will have almost no mass; with a value much greater than 1, the mass will be dispersed almost equally among all the components. See the article on the concentration parameter for further discussion.
String cutting
One example use of the Dirichlet distribution is if one wanted to cut strings (each of initial length 1.0) into ''K'' pieces with different lengths, where each piece had a designated average length, but allowing some variation in the relative sizes of the pieces. Recall that \alpha_0 = \sum_{i=1}^{K} \alpha_i. The \alpha_i / \alpha_0 values specify the mean lengths of the cut pieces of string resulting from the distribution. The variance around this mean varies inversely with \alpha_0.
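A short numerical sketch of the string-cutting view (assuming NumPy is available; the parameter values are illustrative):

import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([4.0, 2.0, 2.0])           # mean piece lengths 0.5, 0.25, 0.25
pieces = rng.dirichlet(alpha, size=100_000)
print(pieces.mean(axis=0))                  # approximately alpha / alpha.sum()
print(pieces.std(axis=0))                   # spread around the mean lengths
print(rng.dirichlet(100 * alpha, size=100_000).std(axis=0))  # larger alpha_0: less spread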
Pólya's urn
Consider an urn containing balls of ''K'' different colors. Initially, the urn contains \alpha_1 balls of color 1, \alpha_2 balls of color 2, and so on. Now perform ''N'' draws from the urn, where after each draw, the ball is placed back into the urn with an additional ball of the same color. In the limit as ''N'' approaches infinity, the proportions of different colored balls in the urn will be distributed as \operatorname{Dir}(\alpha_1, \ldots, \alpha_K).
For a formal proof, note that the proportions of the different colored balls form a bounded [0,1]^{K}-valued martingale, hence by the martingale convergence theorem, these proportions converge almost surely and in mean to a limiting random vector. To see that this limiting vector has the above Dirichlet distribution, check that all mixed moments agree.
Each draw from the urn modifies the probability of drawing a ball of any one color from the urn in the future. This modification diminishes with the number of draws, since the relative effect of adding a new ball to the urn diminishes as the urn accumulates increasing numbers of balls.
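A small simulation of the urn process (standard library only; after many draws the color proportions approximate a single draw from the Dirichlet distribution):

import random

counts = [1.0, 2.0, 3.0]  # initial ball counts per color (the Dirichlet parameters)
for _ in range(100_000):
    # draw a color with probability proportional to its current count,
    # then return the ball together with one extra ball of the same color
    i = random.choices(range(len(counts)), weights=counts)[0]
    counts[i] += 1
print([c / sum(counts) for c in counts])  # approximates one draw from Dir(1, 2, 3)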
Random variate generation
From gamma distribution
With a source of gamma-distributed random variates, one can easily sample a random vector x = (x_1, \ldots, x_K) from the ''K''-dimensional Dirichlet distribution with parameters (\alpha_1, \ldots, \alpha_K). First, draw ''K'' independent random samples y_1, \ldots, y_K from gamma distributions, each with density

: \operatorname{Gamma}(\alpha_i, 1) = \frac{y_i^{\alpha_i - 1} \, e^{-y_i}}{\Gamma(\alpha_i)}, \!

and then set

: x_i = \frac{y_i}{\sum_{j=1}^{K} y_j}.
The joint distribution of \{y_i\} is given by:

: e^{-\sum_{i=1}^{K} y_i} \prod_{i=1}^{K} \frac{y_i^{\alpha_i - 1}}{\Gamma(\alpha_i)}
Next, one uses a change of variables, parametrising \{y_i\} in terms of y_1, y_2, \ldots, y_{K-1} and \sum_{i=1}^{K} y_i, and performs a change of variables from y \to x such that

: x_K = \sum_{i=1}^{K} y_i, \quad x_1 = \frac{y_1}{x_K}, \quad x_2 = \frac{y_2}{x_K}, \quad \ldots, \quad x_{K-1} = \frac{y_{K-1}}{x_K}
One must then use the change of variables formula, P(x) = P(y(x)) \left| \frac{\partial y}{\partial x} \right|, in which \left| \frac{\partial y}{\partial x} \right| is the transformation Jacobian.
Writing y explicitly as a function of x, one obtains

: y_1 = x_1 x_K, \quad y_2 = x_2 x_K, \quad \ldots, \quad y_{K-1} = x_{K-1} x_K, \quad y_K = x_K \left(1 - \sum_{i=1}^{K-1} x_i\right)
The Jacobian now looks like

: \begin{vmatrix} x_K & 0 & \ldots & x_1 \\ 0 & x_K & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ -x_K & -x_K & \ldots & 1 - \sum_{i=1}^{K-1} x_i \end{vmatrix}
The determinant can be evaluated by noting that it remains unchanged if multiples of a row are added to another row, and adding each of the first ''K'' − 1 rows to the bottom row to obtain

: \begin{vmatrix} x_K & 0 & \ldots & x_1 \\ 0 & x_K & \ldots & x_2 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & 1 \end{vmatrix}

which can be expanded about the bottom row to obtain the determinant x_K^{K-1}.
Substituting for x in the joint pdf and including the Jacobian, one obtains:

: \frac{\prod_{i=1}^{K-1} x_i^{\alpha_i - 1} \left(1 - \sum_{i=1}^{K-1} x_i\right)^{\alpha_K - 1}}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \, x_K^{\left(\sum_{i=1}^{K} \alpha_i\right) - 1} e^{-x_K}

Each of the variables satisfies 0 \leq x_1, x_2, \ldots, x_{K-1} \leq 1 and likewise 0 \leq \sum_{i=1}^{K-1} x_i \leq 1.
Finally, integrate out the extra degree of freedom x_K and one obtains:

: x_1, x_2, \ldots, x_{K-1} \sim \frac{\Gamma\left(\sum_{i=1}^{K} \alpha_i\right)}{\prod_{i=1}^{K} \Gamma(\alpha_i)} \prod_{i=1}^{K-1} x_i^{\alpha_i - 1} \left(1 - \sum_{i=1}^{K-1} x_i\right)^{\alpha_K - 1}

which is equivalent to

: \frac{\prod_{i=1}^{K} x_i^{\alpha_i - 1}}{\mathrm{B}(\boldsymbol\alpha)} \quad \text{with support} \quad \sum_{i=1}^{K} x_i = 1
Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the Dirichlet parameters alpha_1, ..., alpha_K
sample = [random.gammavariate(a, 1) for a in params]
total = sum(sample)
sample = [v / total for v in sample]
This formulation is correct regardless of how the Gamma distributions are parameterized (shape/scale vs. shape/rate) because they are equivalent when scale and rate equal 1.0.
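For comparison, NumPy exposes a Dirichlet generator built on this same gamma construction (a usage sketch, assuming NumPy is available):

import numpy as np

rng = np.random.default_rng(0)
sample = rng.dirichlet([1.0, 2.0, 3.0])  # one draw from Dir(1, 2, 3)
print(sample, sample.sum())              # the coordinates sum to 1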
From marginal beta distributions
A less efficient algorithm relies on the univariate marginal and conditional distributions being beta and proceeds as follows. Simulate x_1 from

: \operatorname{Beta}\left(\alpha_1, \sum_{i=2}^{K} \alpha_i\right)

Then simulate x_2, \ldots, x_{K-1} in order, as follows. For j = 2, \ldots, K-1, simulate \phi_j from

: \operatorname{Beta}\left(\alpha_j, \sum_{i=j+1}^{K} \alpha_i\right),

and let

: x_j = \left(1 - \sum_{i=1}^{j-1} x_i\right) \phi_j.

Finally, set

: x_K = 1 - \sum_{i=1}^{K-1} x_i.
This iterative procedure corresponds closely to the "string cutting" intuition described above.
Below is example Python code to draw the sample:

import random

params = [a1, a2, ..., ak]  # the Dirichlet parameters alpha_1, ..., alpha_K
xs = [random.betavariate(params[0], sum(params[1:]))]
for j in range(1, len(params) - 1):
phi = random.betavariate(params[j], sum(params[j + 1 :]))
xs.append((1 - sum(xs)) * phi)
xs.append(1 - sum(xs))
See also
* Generalized Dirichlet distribution
* Grouped Dirichlet distribution
* Inverted Dirichlet distribution
* Latent Dirichlet allocation
* Dirichlet process
* Matrix variate Dirichlet distribution
References
External links
* Dirichlet Distribution
* How to estimate the parameters of the compound Dirichlet distribution (Pólya distribution) using expectation-maximization (EM)
* Dirichlet Random Measures, Method of Construction via Compound Poisson Random Variables, and Exchangeability Properties of the resulting Gamma Distribution
* R package that contains functions for simulating parameters of the Dirichlet distribution