Additive Smoothing

In statistics, additive smoothing, also called Laplace smoothing or Lidstone smoothing, is a technique used to smooth count data, eliminating issues caused by certain values having 0 occurrences. Given a set of observation counts \mathbf{x} = \langle x_1, x_2, \ldots, x_d \rangle from a ''d''-dimensional multinomial distribution with ''N'' trials, a "smoothed" version of the counts gives the estimator

: \hat\theta_i = \frac{x_i + \alpha}{N + \alpha d} \qquad (i = 1, \ldots, d),

where the smoothed count \hat x_i = N \hat\theta_i, and the "pseudocount" ''α'' > 0 is a smoothing parameter, with ''α'' = 0 corresponding to no smoothing (this parameter is explained in the Pseudocount section below). Additive smoothing is a type of shrinkage estimator, as the resulting estimate will be between the empirical probability (relative frequency) x_i/N and the uniform probability 1/d. Common choices for ''α'' are 0 (no smoothing), 1/2 (the Jeffreys prior), or 1 (Laplace's rule of succession), but the parameter may also be set empirically based on the observed data.

From a Bayesian point of view, this corresponds to the expected value of the posterior distribution, using a symmetric Dirichlet distribution with parameter ''α'' as a prior distribution. In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution.
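
As a concrete illustration, the estimator above can be computed directly. The following is a minimal Python sketch (the function name additive_smoothing and the example counts are illustrative, not from any particular library):

```python
def additive_smoothing(counts, alpha):
    """Smoothed estimates theta_i = (x_i + alpha) / (N + alpha * d).

    counts: observed counts x_1..x_d; alpha: pseudocount (0 = no smoothing).
    """
    n = sum(counts)   # N, the total number of trials
    d = len(counts)   # d, the number of categories
    return [(x + alpha) / (n + alpha * d) for x in counts]

# Example: 10 rolls of a 4-sided die on which side 4 never appeared.
counts = [3, 5, 2, 0]
print(additive_smoothing(counts, alpha=0))  # empirical: [0.3, 0.5, 0.2, 0.0]
print(additive_smoothing(counts, alpha=1))  # Laplace: ~[0.286, 0.429, 0.214, 0.071]
```

With ''α'' = 1 the previously impossible-looking fourth side receives a small non-zero probability, while the other estimates shrink slightly toward the uniform value 1/4.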


History

Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow. His rationale was that even given a large sample of days with the rising sun, we still cannot be completely sure that the sun will rise tomorrow (known as the sunrise problem). (Lecture 5, Machine Learning (Stanford), at 1h10m into the lecture.)


Pseudocount

A pseudocount is an amount (not generally an integer, despite its name) added to the number of observed cases in order to change the expected probability in a model of those data, when not known to be zero. It is so named because, roughly speaking, a pseudocount of value \alpha weighs into the posterior distribution similarly to each category having an additional count of \alpha. If the frequency of each item i is x_i out of N samples, the empirical probability of event i is

: p_{i,\text{empirical}} = \frac{x_i}{N},

but the posterior probability when additively smoothed is

: p_{i,\alpha\text{-smoothed}} = \frac{x_i + \alpha}{N + \alpha d},

as if to increase each count x_i by \alpha a priori.

Depending on the prior knowledge, which is sometimes a subjective value, a pseudocount may have any non-negative finite value. It may only be zero (or the possibility ignored) if impossible by definition, such as the possibility of a decimal digit of ''π'' being a letter, or a physical possibility that would be rejected and so not counted, such as a computer printing a letter when a valid program for ''π'' is run, or excluded and not counted because of no interest, such as if only interested in the zeros and ones. Generally, there is also a possibility that no value may be computable or observable in a finite time (see the halting problem). But at least one possibility must have a non-zero pseudocount, otherwise no prediction could be computed before the first observation.

The relative values of pseudocounts represent the relative prior expected probabilities of their possibilities. The sum of the pseudocounts, which may be very large, represents the estimated weight of the prior knowledge compared with all the actual observations (one for each) when determining the expected probability.

In any observed data set or sample there is the possibility, especially with low-probability events and with small data sets, of a possible event not occurring. Its observed frequency is therefore zero, apparently implying a probability of zero. This oversimplification is inaccurate and often unhelpful, particularly in probability-based machine learning techniques such as artificial neural networks and hidden Markov models. By artificially adjusting the probability of rare (but not impossible) events so those probabilities are not exactly zero, zero-frequency problems are avoided. Also see Cromwell's rule. The prior-weight interpretation is made concrete in the sketch below.
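
The short Python sketch below (variable and function names are illustrative) shows how the total pseudocount ''αd'' acts as the weight of the prior relative to the ''N'' actual observations, shrinking the estimate from the empirical frequencies toward the uniform distribution as ''α'' grows:

```python
def smoothed(counts, alpha):
    # p_i = (x_i + alpha) / (N + alpha * d): each category behaves as if
    # it had been observed alpha extra times before the data arrived.
    n, d = sum(counts), len(counts)
    return [(x + alpha) / (n + alpha * d) for x in counts]

counts = [9, 1, 0]  # N = 10 observations over d = 3 categories
for alpha in (0.0, 0.5, 1.0, 10.0):
    # As alpha * d grows relative to N, the estimate shrinks from the
    # empirical frequencies [0.9, 0.1, 0.0] toward uniform [1/3, 1/3, 1/3].
    print(alpha, [round(p, 3) for p in smoothed(counts, alpha)])
```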


Choice of pseudocount


Weakly-informative prior

One common approach is to add 1 to each observed number of events, including the zero-count possibilities. This is sometimes called Laplace's rule of succession. This approach is equivalent to assuming a uniform prior distribution over the probabilities for each possible event (spanning the simplex where each probability is between 0 and 1, and they all sum to 1). Using the Jeffreys prior approach, a pseudocount of one half should be added to each possible outcome.

Pseudocounts should be set to one or one-half only when there is no prior knowledge at all; see the principle of indifference. However, given appropriate prior knowledge, the sum should be adjusted in proportion to the expectation that the prior probabilities should be considered correct, despite evidence to the contrary; see further analysis. Higher values are appropriate inasmuch as there is prior knowledge of the true values (for a mint-condition coin, say); lower values inasmuch as there is prior knowledge that there is probable bias, but of unknown degree (for a bent coin, say).
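
For example, for a coin observed to land heads on all 3 of 3 flips, the two weakly-informative choices give noticeably different estimates; a small sketch using the smoothing formula above (the helper function is illustrative):

```python
def smoothed(counts, alpha):
    # theta_i = (x_i + alpha) / (N + alpha * d) with d = 2 for a coin.
    n, d = sum(counts), len(counts)
    return [(x + alpha) / (n + alpha * d) for x in counts]

heads_tails = [3, 0]                     # 3 flips, all heads
print(smoothed(heads_tails, alpha=1.0))  # Laplace:  [0.8, 0.2]
print(smoothed(heads_tails, alpha=0.5))  # Jeffreys: [0.875, 0.125]
```

The smaller Jeffreys pseudocount defers more to the observed data, while the Laplace pseudocount pulls harder toward the uniform estimate of one half.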


Frequentist interval

One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of an interval estimate, particularly a binomial proportion confidence interval. The best-known is due to Edwin Bidwell Wilson, in 1927: the midpoint of the Wilson score interval corresponding to ''z'' standard deviations on either side is

: \frac{x + \frac{z^2}{2}}{N + z^2},

where ''x'' is the number of successes in ''N'' trials. Taking ''z'' = 2 standard deviations to approximate a 95% confidence interval (''z'' ≈ 1.96) yields a pseudocount of 2 for each outcome, so 4 in total, colloquially known as the "plus four rule":

: \frac{x + 2}{N + 4}.

This is also the midpoint of the Agresti–Coull interval.
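
A hedged sketch of the computation (the function name wilson_midpoint is illustrative):

```python
def wilson_midpoint(successes, trials, z):
    # Midpoint of the Wilson score interval: (x + z^2/2) / (N + z^2).
    return (successes + z * z / 2) / (trials + z * z)

# "Plus four rule": z = 2 adds a pseudocount of 2 per outcome, 4 in total.
print(wilson_midpoint(7, 10, z=2.0))  # (7 + 2) / (10 + 4) = 0.642857...
print((7 + 2) / (10 + 4))             # identical value computed directly
```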


Known incidence rates

Often the bias of an unknown trial population is tested against a control population with known parameters (incidence rates) \boldsymbol{\mu} = \langle \mu_1, \mu_2, \ldots, \mu_d \rangle. In this case the uniform probability 1/d should be replaced by the known incidence rate of the control population \mu_i to calculate the smoothed estimator

: \hat\theta_i = \frac{x_i + \mu_i \alpha d}{N + \alpha d} \qquad (i = 1, \ldots, d).

As a consistency check, if the empirical estimator happens to equal the incidence rate, i.e. \mu_i = x_i/N, the smoothed estimator is independent of \alpha and also equals the incidence rate.
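
A minimal sketch of this estimator, including the consistency check described above (function and variable names are illustrative):

```python
def smoothed_with_rates(counts, mu, alpha):
    # theta_i = (x_i + mu_i * alpha * d) / (N + alpha * d):
    # shrink toward the known control rates mu rather than toward uniform.
    n, d = sum(counts), len(counts)
    return [(x + m * alpha * d) / (n + alpha * d) for x, m in zip(counts, mu)]

counts = [2, 8]   # trial population: 10 observations over 2 categories
mu = [0.2, 0.8]   # known incidence rates of the control population
# Consistency check: the empirical rates equal mu here, so the smoothed
# estimate equals mu for any choice of alpha.
print(smoothed_with_rates(counts, mu, alpha=5.0))  # [0.2, 0.8]
```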


Applications


Classification

Additive smoothing is commonly a component of naive Bayes classifiers.
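
Concretely, a naive Bayes text classifier multiplies per-class word likelihoods, so a single word with zero count in a class would zero out that class's score for any document containing it; additive smoothing prevents this. As one hedged example, scikit-learn's MultinomialNB exposes it through its alpha parameter. A minimal sketch, assuming scikit-learn is available (the toy documents and labels are illustrative):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["free money now", "meeting at noon", "free offer now", "lunch meeting"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = ham (toy data)

X = CountVectorizer().fit_transform(docs)
# alpha=1.0 is Laplace smoothing of the per-class word counts; alpha=0.5
# would correspond to the Jeffreys-prior choice.
clf = MultinomialNB(alpha=1.0).fit(X, labels)
print(clf.predict(X))
```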


Statistical language modelling

In a bag-of-words model of natural language processing and information retrieval, the data consists of the number of occurrences of each word in a document. Additive smoothing allows the assignment of non-zero probabilities to words which do not occur in the sample, as in the sketch below. Studies have shown that additive smoothing is more effective than other probability smoothing methods in several retrieval tasks such as language-model-based pseudo-relevance feedback and recommender systems.
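
As an illustrative sketch, a smoothed unigram language model assigns an unseen word a small non-zero probability (the function unigram_prob and the toy corpus are hypothetical):

```python
from collections import Counter

def unigram_prob(word, corpus_tokens, vocab, alpha=1.0):
    # P(word) = (count(word) + alpha) / (N + alpha * |V|):
    # words absent from the corpus get a non-zero probability.
    counts = Counter(corpus_tokens)
    n, v = len(corpus_tokens), len(vocab)
    return (counts[word] + alpha) / (n + alpha * v)

tokens = "the cat sat on the mat".split()
vocab = set(tokens) | {"dog"}  # include a word absent from the sample
print(unigram_prob("the", tokens, vocab))  # (2 + 1) / (6 + 6) = 0.25
print(unigram_prob("dog", tokens, vocab))  # (0 + 1) / (6 + 6) ≈ 0.083
```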


See also

* Bayesian average
* Prediction by partial matching
* Categorical distribution




External links

* S. F. Chen, J. Goodman (1996). "An empirical study of smoothing techniques for language modeling". ''Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics''.


* A video explaining the use of additive smoothing in a naïve Bayes classifier