Good–Turing frequency estimation is a statistical technique for estimating the probability of encountering an object of a hitherto unseen species, given a set of past observations of objects from different species. In drawing balls from an urn, the 'objects' would be balls and the 'species' would be the distinct colours of the balls (finite but unknown in number). After drawing R_red red balls, R_black black balls and R_green green balls, we would ask what is the probability of drawing a red ball, a black ball, a green ball or one of a previously unseen colour.
Historical background
Good–Turing frequency estimation was developed by Alan Turing and his assistant I. J. Good as part of their methods used at Bletchley Park for cracking German ciphers for the Enigma machine during World War II. Turing at first modelled the frequencies as a multinomial distribution, but found it inaccurate. Good developed smoothing algorithms to improve the estimator's accuracy.
The discovery was recognised as significant when published by Good in 1953,
but the calculations were difficult so it was not used as widely as it might have been. The method even gained some literary fame due to the
Robert Harris novel ''
Enigma''.
In the 1990s, Geoffrey Sampson worked with William A. Gale of AT&T to create and implement a simplified and easier-to-use variant of the Good–Turing method described below. Various heuristic justifications and a simple combinatorial derivation have been provided.
The method
The Good–Turing estimator is largely independent of the distribution of species frequencies.
Notation
Suppose that X distinct species have been observed, enumerated x = 1, ..., X. Then the frequency vector, R = (R_1, ..., R_X), has elements R_x that give the number of individuals that have been observed for species x. The frequency of frequencies vector, (N_r), shows how many times the frequency r occurs in the vector R (i.e., among the elements R_x):

N_r = |{x : R_x = r}|.

For example, N_1 is the number of species for which only one individual was observed. Note that the total number of objects observed, N, can be found from

N = Σ_r r N_r.
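The frequency-of-frequencies counts N_r are easy to build in practice. The following minimal sketch (function name and sample data are illustrative, not part of the original method) tallies species frequencies R_x and then the counts N_r:

```python
from collections import Counter

def freq_of_freqs(observations):
    """Tally species frequencies R_x, then count how often each frequency r occurs (N_r)."""
    species_counts = Counter(observations)   # R_x for each observed species x
    return Counter(species_counts.values())  # N_r: number of species seen exactly r times

# Hypothetical urn sample: colours are the "species".
sample = ["red", "red", "red", "black", "black", "green"]
N_r = freq_of_freqs(sample)
print(N_r[1], N_r[2], N_r[3])  # one colour seen once, one twice, one three times
```

Note that N = Σ_r r N_r recovers the total number of observations, here 6.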
Calculation
The first step in the calculation is to estimate the probability that a future observed individual (or the next observed individual) is a member of a thus far unseen species. This estimate is

p_0 = N_1 / N.

The next step is to estimate the probability that the next observed individual is from a species which has been seen r times. For a single such species this estimate is

p_r = (r + 1) S(N_{r+1}) / (N S(N_r)).

Here, the notation S(N_r) means the ''smoothed'', or ''adjusted'' value of the frequency shown in parentheses. An overview of how to perform this smoothing follows in the next section (see also empirical Bayes method).

To estimate the probability that the next observed individual is from any species from this group (i.e., the group of species seen r times) one can use the following formula:

(r + 1) S(N_{r+1}) / N.
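As a minimal sketch of these estimates, the function below computes p_0 and the per-species p_r using the raw counts N_r in place of the smoothed S(N_r); a real implementation would smooth first, and the function name is illustrative:

```python
from collections import Counter

def good_turing_unsmoothed(observations):
    """Return (p0, p_r): p0 estimates the unseen-species probability,
    p_r[r] the probability of one particular species already seen r times.
    Uses raw N_r counts where the full method would use smoothed S(N_r)."""
    counts = Counter(observations)          # R_x per species
    N_r = Counter(counts.values())          # frequency of frequencies
    N = len(observations)                   # total objects, N = sum over r of r * N_r
    p0 = N_r[1] / N                         # p_0 = N_1 / N
    # p_r = (r + 1) N_{r+1} / (N N_r); missing keys of a Counter count as zero
    p_r = {r: (r + 1) * N_r[r + 1] / (N * N_r[r]) for r in N_r}
    return p0, p_r

sample = ["red"] * 3 + ["black"] * 2 + ["green"]
p0, p_r = good_turing_unsmoothed(sample)
print(p0)      # estimate for a never-seen colour: N_1/N = 1/6
print(p_r[1])  # estimate for the colour seen once: 2 * N_2 / (N * N_1) = 1/3
```

With raw counts, p_r for the largest observed r is zero (since N_{r+1} = 0), which is one symptom the smoothing step is designed to cure.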
Smoothing
For smoothing the erratic values in N_r for large r, we would like to make a plot of log N_r versus log r, but this is problematic because for large r many N_r will be zero. Instead a revised quantity, Z_r, is plotted versus log r, where Z_r is defined as

Z_r = N_r / (0.5 (t − q)),

and where q, r, and t are three consecutive subscripts with non-zero counts N_q, N_r, N_t. For the special case when r is 1, take q to be 0. In the opposite special case, when r is the index of the ''last'' non-zero count, replace the divisor 0.5 (t − q) with r − q, so Z_r = N_r / (r − q).
A simple linear regression is then fitted to the log–log plot.
For small values of r it is reasonable to set S(N_r) = N_r; that is, no smoothing is performed. For large values of r, values of S(N_r) are read off the regression line. An automatic procedure (not described here) can be used to specify at what point the switch from no smoothing to linear smoothing should take place.
Code for the method is available in the public domain.
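The smoothing step described above can be sketched as follows. This is a simplified illustration, not the public-domain reference code: it computes Z_r, fits the log–log regression by ordinary least squares, and returns smoothed values from the fitted line, omitting the small-r/large-r switching rule (it assumes at least two distinct non-zero counts):

```python
import math

def smooth_counts(N_r):
    """Given frequency-of-frequencies counts {r: N_r}, compute Z_r for the
    non-zero counts, fit log Z = a + b log r by least squares, and return a
    function giving smoothed S(N_r) values read off the regression line."""
    rs = sorted(r for r, n in N_r.items() if n > 0)
    Z = {}
    for i, r in enumerate(rs):
        q = rs[i - 1] if i > 0 else 0        # previous non-zero index; 0 when r is first
        if i + 1 < len(rs):
            t = rs[i + 1]                    # next non-zero index
            Z[r] = N_r[r] / (0.5 * (t - q))  # Z_r = N_r / (0.5 (t - q))
        else:
            Z[r] = N_r[r] / (r - q)          # last non-zero count: divisor is r - q
    xs = [math.log(r) for r in rs]
    ys = [math.log(Z[r]) for r in rs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return lambda r: math.exp(a + b * math.log(r))

# Hypothetical counts with a gap at r = 4, which Z_r is designed to handle.
N_r = {1: 10, 2: 5, 3: 3, 5: 1}
S = smooth_counts(N_r)
print(round(S(1), 2), round(S(4), 2))  # smoothed values, including the gap at r = 4
```

Because the fitted slope is negative for typical decaying N_r, the smoothed S(N_r) decreases in r and supplies a non-zero value even where the raw N_r was zero.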
Derivation
Many different derivations of the above formula for p_r have been given.

One of the simplest ways to motivate the formula is by assuming the next item will behave similarly to the previous item. The overall idea of the estimator is that currently we are seeing never-seen items at a certain frequency, seen-once items at a certain frequency, seen-twice items at a certain frequency, and so on. Our goal is to estimate just how likely each of these categories is for the ''next'' item we will see. Put another way, we want to know the current rate at which seen-twice items are becoming seen-thrice items, and so on. Since we don't assume anything about the underlying probability distribution, it does sound a bit mysterious at first. But it is extremely easy to calculate these probabilities ''empirically'' for the ''previous'' item we saw, even assuming we don't remember exactly which item that was: take all the items we have seen so far (including multiplicities); the last item we saw was a random one of these, all equally likely. Specifically, the chance that we saw an item for the (r + 1)-th time is simply the chance that it was one of the items that we have now seen r + 1 times, namely (r + 1) N_{r+1} / N. In other words, our chance of seeing an item that had been seen r times before was (r + 1) N_{r+1} / N. So now we simply assume that this chance will be about the same for the next item we see. This immediately gives us the formula above for p_0, by setting r = 0. And for r ≥ 1, to get the probability that ''a particular one'' of the N_r items seen r times is going to be the next one seen, we divide this probability (of seeing ''some'' item that has been seen r times) among the N_r possibilities for which particular item that could be. This gives us the formula p_r = (r + 1) N_{r+1} / (N N_r). Of course, actual data will probably be a bit noisy, so the values should be smoothed first to get a better estimate of how quickly the category counts are growing, and this gives the formula as shown above. This approach is in the same spirit as deriving the standard Bernoulli estimator by simply asking what the two probabilities were for the previous coin flip (after scrambling the trials seen so far), given only the current result counts, while assuming nothing about the underlying distribution.
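The "previous item" argument can be checked numerically on a toy sample: for every r, the fraction of draws whose item occurs exactly r + 1 times in the full sample is exactly (r + 1) N_{r+1} / N, the chance that the last item drawn had been seen r times before. A small sketch (sample data illustrative):

```python
from collections import Counter

sample = ["red"] * 3 + ["black"] * 2 + ["green"]
N = len(sample)
counts = Counter(sample)        # R_x per species
N_r = Counter(counts.values())  # frequency of frequencies

# For each r: the fraction of positions holding an item that occurs exactly
# r + 1 times equals (r + 1) * N_{r+1} / N, since each of the N_{r+1} such
# species contributes r + 1 of the N equally likely positions.
for r in range(max(counts.values())):
    empirical = sum(1 for item in sample if counts[item] == r + 1) / N
    formula = (r + 1) * N_r[r + 1] / N
    assert empirical == formula
print("identity holds")
```

This identity is exact, not approximate; the Good–Turing step is the assumption that the same rates carry over to the next, as yet unseen, draw.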
See also
*
Ewens sampling formula
*
Pseudocount
References
Bibliography
* David A. McAllester, Robert Schapire (2000). ''On the Convergence Rate of Good–Turing Estimators''. ''Proceedings of the Thirteenth Annual Conference on Computational Learning Theory'', pp. 1–6.
* David A. McAllester, Luis Ortiz (2003). ''Journal of Machine Learning Research'', pp. 895–911.