Slice sampling is a type of Markov chain Monte Carlo (MCMC) algorithm for pseudo-random number sampling, i.e. for drawing random samples from a statistical distribution. The method is based on the observation that to sample a random variable one can sample uniformly from the region under the graph of its density function.


Motivation

Suppose you want to sample a random variable ''X'' with distribution ''f''(''x''). The height of ''f''(''x'') at a point corresponds to the likelihood of that point. If you were to sample ''X'' uniformly over its domain, every value would be equally likely to be drawn, and your samples would follow a flat distribution ''f''(''x'') = ''y'' for some constant ''y'' rather than the non-uniform function ''f''(''x''). In order to sample ''X'' in a manner which will retain the distribution ''f''(''x''), some sampling technique must be used which takes into account the varied likelihoods across the range of ''f''(''x'').


Method

Slice sampling, in its simplest form, samples uniformly from underneath the curve ''f''(''x'') without the need to reject any points, as follows:

# Choose a starting value ''x''0 for which ''f''(''x''0) > 0.
# Sample a value ''y'' uniformly between 0 and ''f''(''x''0).
# Draw a horizontal line across the curve at this position ''y''.
# Sample a point (''x'', ''y'') from the line segments within the curve.
# Repeat from step 2 using the new value of ''x''.

The motivation here is that one way to sample a point uniformly from within an arbitrary curve is first to draw thin uniform-height horizontal slices across the whole curve. Then, we can sample a point within the curve by randomly selecting a slice that falls at or below the curve at the x-position from the previous iteration, then randomly picking an x-position somewhere along the slice. By using the x-position from the previous iteration of the algorithm, in the long run we select slices with probabilities proportional to the lengths of their segments within the curve.

The most difficult part of this algorithm is finding the bounds of the horizontal slice, which involves inverting the function describing the distribution being sampled from. This is especially problematic for multi-modal distributions, where the slice may consist of multiple discontinuous parts. It is often possible to use a form of rejection sampling to overcome this, where we sample from a larger slice that is known to include the desired slice in question, and then discard points outside of the desired slice.

This algorithm can be used to sample from the area under ''any'' curve, regardless of whether the function integrates to 1. In fact, scaling a function by a constant has no effect on the sampled x-positions. This means that the algorithm can be used to sample from a distribution whose probability density function is only known up to a constant (i.e. whose normalizing constant is unknown), which is common in computational statistics.
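As a minimal illustration of the loop above (not from the original article): for the unnormalized density f(x) = c e^{-x} on x ≥ 0, the slice \{x : f(x) \ge y\} is the interval [0, ln(c/y)] in closed form, so both steps can be performed exactly. The constant c is an arbitrary illustrative scale factor.

import math, random

# Minimal sketch: slice sampling the unnormalized density f(x) = c * exp(-x), x >= 0,
# whose slice {x : f(x) >= y} is the interval [0, ln(c / y)].
def slice_sample_exponential(n_samples, c=3.0, x=1.0):
    samples = []
    for _ in range(n_samples):
        y = c * math.exp(-x) * (1.0 - random.random())  # height drawn uniformly from (0, f(x)]
        x = random.uniform(0.0, math.log(c / y))        # uniform draw from the slice
        samples.append(x)
    return samples

samples = slice_sample_exponential(10000)  # approximately Exp(1)-distributed, whatever c is

Changing c rescales the sampled heights y but leaves the distribution of the x values untouched, matching the remark above about unknown normalizing constants.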


Implementation

Slice sampling gets its name from the first step: defining a ''slice'' by sampling from an auxiliary variable Y. This variable is sampled from [0, f(x)], where f(x) is either the probability density function (PDF) of ''X'' or is at least proportional to its PDF. This defines a slice of ''X'' where f(x) \ge Y. In other words, we are now looking at a region of ''X'' where the probability density is at least Y. Then the next value of ''X'' is sampled uniformly from this slice. A new value of Y is sampled, then ''X'', and so on. This can be visualized as alternately sampling the y-position and then the x-position of points under the PDF, thus the ''X''s are from the desired distribution. The Y values have no particular consequences or interpretations outside of their usefulness for the procedure.

If both the PDF and its inverse are available, and the distribution is unimodal, then finding the slice and sampling from it are simple. If not, a stepping-out procedure can be used to find a region whose endpoints fall outside the slice. Then, a sample can be drawn from the slice using rejection sampling. Various procedures for this are described in detail by Neal. Note that, in contrast to many available methods for generating random numbers from non-uniform distributions, random variates generated directly by this approach will exhibit serial statistical dependence. This is because to draw the next sample, we define the slice based on the value of ''f''(''x'') for the current sample.
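When no closed-form slice is available, a common approach is the stepping-out and shrinkage scheme in the spirit of the procedures described by Neal; the following is a hedged sketch of that idea (the function name, the width w, and the example target are illustrative assumptions, not taken from the original text).

import math, random

# Hedged sketch of one univariate slice-sampling update with stepping out and shrinkage.
def slice_sample_step(x, log_f, w=1.0):
    # Vertical step: draw the auxiliary height y uniformly from (0, f(x)], in log space.
    log_y = log_f(x) + math.log(1.0 - random.random())

    # Stepping out: position an interval of width w randomly around x, then extend each
    # end by w until it lies outside the slice {u : log_f(u) >= log_y}.
    left = x - w * random.random()
    right = left + w
    while log_f(left) >= log_y:
        left -= w
    while log_f(right) >= log_y:
        right += w

    # Shrinkage: propose uniformly from (left, right); on rejection, shrink the interval
    # toward the current point and try again.
    while True:
        x_new = random.uniform(left, right)
        if log_f(x_new) >= log_y:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new

# Example use: a few updates targeting an unnormalized standard normal log-density.
log_f = lambda u: -0.5 * u * u
x = 0.0
for _ in range(1000):
    x = slice_sample_step(x, log_f)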


Compared to other methods

Slice sampling is a Markov chain method and as such serves the same purpose as Gibbs sampling and Metropolis. Unlike Metropolis, there is no need to manually tune the candidate function or candidate standard deviation. Recall that Metropolis is sensitive to step size. If the step size is too small, random walk behavior causes slow decorrelation. If the step size is too large, there is great inefficiency due to a high rejection rate. In contrast to Metropolis, slice sampling automatically adjusts the step size to match the local shape of the density function. Implementation is arguably easier and more efficient than Gibbs sampling or simple Metropolis updates.

Note that, in contrast to many available methods for generating random numbers from non-uniform distributions, random variates generated directly by this approach will exhibit serial statistical dependence. In other words, not all points have the same independent likelihood of selection. This is because to draw the next sample, we define the slice based on the value of f(x) for the current sample. However, the generated samples are Markovian, and are therefore expected to converge to the correct distribution in the long run.

Slice sampling requires that the distribution to be sampled be evaluable. One way to relax this requirement is to substitute an evaluable distribution which is proportional to the true unevaluable distribution.


Univariate case

To sample a random variable ''X'' with density ''f''(''x'') we introduce an auxiliary variable ''Y'' and iterate as follows:
* Given a sample ''x'' we choose ''y'' uniformly at random from the interval [0, ''f''(''x'')];
* given ''y'' we choose ''x'' uniformly at random from the set f^{-1}([y, +\infty)) = \{x : f(x) \ge y\}.
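To see why this alternation leaves ''f'' invariant (a standard observation, added here for clarity): the pair (''x'', ''y'') is a Gibbs sampler on the joint density that is uniform over the region under the curve,

p(x, y) = \frac{1}{Z}\,\mathbf{1}\{0 \le y \le f(x)\}, \qquad Z = \int f(u)\,du,

and integrating out ''y'' gives \int_0^{f(x)} p(x, y)\,dy = f(x)/Z, so the marginal distribution of the ''x'' values is proportional to ''f''(''x'').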

Slice-within-Gibbs sampling

In a Gibbs sampler, one needs to draw efficiently from all the full-conditional distributions. When sampling from a full-conditional density is not easy, a single iteration of slice sampling or the Metropolis-Hastings algorithm can be used within-Gibbs to sample from the variable in question. If the full-conditional density is log-concave, a more efficient alternative is the application of adaptive rejection sampling (ARS) methods. When the ARS techniques cannot be applied (since the full-conditional is non-log-concave), the adaptive rejection Metropolis sampling algorithms are often employed.
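A hedged sketch of one slice-within-Gibbs sweep, reusing the slice_sample_step routine sketched in the Implementation section above (the bivariate-normal target and the correlation value are illustrative assumptions):

# One Gibbs sweep in which each full conditional is updated by a single
# univariate slice-sampling step. Target (illustrative): bivariate normal with
# unit variances and correlation rho, so x1 | x2 ~ N(rho * x2, 1 - rho^2).
def gibbs_sweep(x1, x2, rho=0.8):
    log_f1 = lambda u: -0.5 * (u - rho * x2) ** 2 / (1.0 - rho ** 2)
    x1 = slice_sample_step(x1, log_f1)
    log_f2 = lambda u: -0.5 * (u - rho * x1) ** 2 / (1.0 - rho ** 2)
    x2 = slice_sample_step(x2, log_f2)
    return x1, x2

x1, x2 = 0.0, 0.0
for _ in range(1000):
    x1, x2 = gibbs_sweep(x1, x2)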


Multivariate methods


Treating each variable independently

Single-variable slice sampling can be used in the multivariate case by sampling each variable in turn repeatedly, as in Gibbs sampling. To do so requires that we can compute, for each component x_i, a function that is proportional to the full conditional p(x_i \mid x_0, \ldots, x_n). To prevent random walk behavior, overrelaxation methods can be used to update each variable in turn. Overrelaxation chooses a new value on the opposite side of the mode from the current value, as opposed to choosing a new independent value from the distribution as done in Gibbs.
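A rough sketch of an overrelaxed update for one coordinate (the endpoint-finding helper find_slice_endpoints is a hypothetical placeholder, and the simple midpoint reflection used here is an illustrative choice, not taken from the original text):

import math, random

# Hedged sketch of an overrelaxed slice update: instead of a uniform draw from the
# slice (left, right), reflect the current point through the slice's midpoint, which
# lands it on the opposite side of the mode for a unimodal slice.
def overrelaxed_step(x, log_f, find_slice_endpoints):
    log_y = log_f(x) + math.log(1.0 - random.random())   # auxiliary height, as before
    left, right = find_slice_endpoints(log_f, log_y)     # hypothetical helper: endpoints of {u : log_f(u) >= log_y}
    x_new = left + right - x                              # reflection through the midpoint of the slice
    # Guard: fall back to an ordinary uniform draw if the reflected point leaves the
    # slice (possible when the endpoints are only approximate).
    if log_f(x_new) < log_y:
        x_new = random.uniform(left, right)
    return x_new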


Hyperrectangle slice sampling

This method adapts the univariate algorithm to the multivariate case by substituting a hyperrectangle for the one-dimensional ''w'' region used in the original. The hyperrectangle ''H'' is initialized to a random position over the slice. ''H'' is then shrunk as points from it are rejected.
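A hedged sketch of this scheme (the width vector w, the function names, and the example target are illustrative assumptions):

import math, random

# Hedged sketch of a hyperrectangle slice-sampling update: place an axis-aligned box
# of per-dimension widths w randomly around the current point x, sample uniformly
# inside it, and shrink rejected dimensions toward x.
def hyperrectangle_step(x, log_f, w):
    log_y = log_f(x) + math.log(1.0 - random.random())
    # Position the box at random so that it contains the current point.
    lower = [xi - wi * random.random() for xi, wi in zip(x, w)]
    upper = [li + wi for li, wi in zip(lower, w)]
    while True:
        proposal = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        if log_f(proposal) >= log_y:
            return proposal
        # Shrink every dimension of the box toward the current point.
        for i, (pi, xi) in enumerate(zip(proposal, x)):
            if pi < xi:
                lower[i] = pi
            else:
                upper[i] = pi

# Example: one update targeting an isotropic 2-D Gaussian log-density (illustrative).
new_point = hyperrectangle_step([0.0, 0.0], lambda v: -0.5 * (v[0]**2 + v[1]**2), [1.0, 1.0])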


Reflective slice sampling

Reflective slice sampling is a technique to suppress random walk behavior in which the successive candidate samples of distribution ''f''(''x'') are kept within the bounds of the slice by "reflecting" the direction of sampling inward toward the slice once the boundary has been hit. In a graphical representation of reflective sampling, a shape indicates the bounds of a sampling slice and dots indicate the start and stopping points of a sampling walk. When the samples hit the bounds of the slice, the direction of sampling is "reflected" back into the slice.


Example

Consider a single variable example. Suppose our true distribution is a normal distribution with mean 0 and standard deviation 3, X \sim N(0, 3^2). So:

f(x) = \frac{1}{3\sqrt{2\pi}}\, e^{-x^2 / (2 \cdot 3^2)}.

The peak of the distribution is obviously at x = 0, at which point f(x) \approx 0.1330.

# We first draw a uniform random value ''y'' from the range of ''f''(''x'') in order to define our slice(s). ''f''(''x'') ranges from 0 to ~0.1330, so any value between these two extremes suffices. Suppose we take ''y'' = 0.1. The problem becomes how to sample points ''x'' for which ''f''(''x'') > 0.1.
# Next, we set our width parameter ''w'' which we will use to expand our region of consideration. This value is arbitrary. Suppose ''w'' = 2.
# Next, we need an initial value for ''x''. We draw ''x'' from the uniform distribution within the domain of ''f''(''x'') which satisfies ''f''(''x'') > 0.1 (our ''y'' parameter). Suppose ''x'' = 2. This works because ''f''(2) = ~0.1065 > 0.1. Note that if we didn't know how to select ''x'' such that ''f''(''x'') > ''y'', we can still pick any random value for ''x'', evaluate ''f''(''x''), and use that as our value of ''y''. ''y'' only initializes the algorithm; as the algorithm progresses it will find higher and higher values of ''y''.
# Because ''x'' = 2 and ''w'' = 2, our current region of interest is bounded by (1, 3).
# Now, each endpoint of this region is tested to see if it lies outside the given slice. Our right bound lies outside our slice (''f''(3) = ~0.0807 < 0.1), but the left bound does not (''f''(1) = ~0.1258 > 0.1). We expand the left bound by subtracting ''w'' from it until it extends past the limit of the slice. After this process, the new bounds of our region of interest are (−3, 3).
# Next, we take a uniform sample within (−3, 3). Suppose this sample yields x = −2.9. Though this sample is within our region of interest, it does not lie within our slice (''f''(−2.9) = ~0.0833 < 0.1), so we modify the left bound of our region of interest to this point. Now we take a uniform sample from (−2.9, 3). Suppose this time our sample yields x = 1, which is within our slice, and thus is the accepted sample output by slice sampling. Had our new ''x'' not been within our slice, we would continue the shrinking/resampling process until a valid ''x'' within bounds is found.

If we're interested in the peak of the distribution, we can keep repeating this process since the new point corresponds to a higher ''f''(''x'') than the original point. The density values quoted above can be checked directly, as in the short sketch below.
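A small verification of the quoted values, assuming the N(0, 3^2) density defined above:

import math

# Evaluate the N(0, 3^2) density at the points used in the walkthrough.
def f(x, sigma=3.0):
    return math.exp(-x * x / (2.0 * sigma * sigma)) / (sigma * math.sqrt(2.0 * math.pi))

for x in (0.0, 2.0, 3.0, 1.0, -2.9):
    print(x, round(f(x), 4))   # approximately 0.133, 0.1065, 0.0807, 0.1258, 0.0833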


Another example

To sample from the normal distribution N(0,1) we first choose an initial ''x'', say 0. After each sample of ''x'' we choose ''y'' uniformly at random from (0, e^{-x^2/2}/\sqrt{2\pi}], which is bounded above by the pdf of N(0,1). After each ''y'' sample we choose ''x'' uniformly at random from [-\alpha, \alpha], where \alpha = \sqrt{-2\ln(y\sqrt{2\pi})}. This is the slice where f(x) > y.

An implementation in the Macsyma language is:

slice(x) := block([y, alpha],
 y: random(exp(-x^2 / 2.0) / sqrt(2.0 * dfloat(%pi))),
 alpha: sqrt(-2.0 * ln(y * sqrt(2.0 * dfloat(%pi)))),
 x: signum(random()) * random(alpha)
);
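For comparison, a rough Python rendering of the same update (an illustrative translation, not part of the original; the sign trick of the Macsyma version is replaced by a direct uniform draw on [-\alpha, \alpha]):

import math, random

# One slice-sampling update for the standard normal, using the analytic slice [-alpha, alpha].
def slice_normal(x):
    fx = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
    y = fx * (1.0 - random.random())   # uniform draw from (0, f(x)], avoiding y == 0
    alpha = math.sqrt(-2.0 * math.log(y * math.sqrt(2.0 * math.pi)))
    return random.uniform(-alpha, alpha)

x = 0.0
samples = []
for _ in range(10000):
    x = slice_normal(x)
    samples.append(x)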


See also

* Markov chain Monte Carlo


References


External links

* http://www.probability.ca/jeff/java/slice.html