In statistics, the Lehmann–Scheffé theorem is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation. The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The theorem is named after Erich Leo Lehmann and Henry Scheffé, who developed it in two early papers. If ''T'' is a complete sufficient statistic for ''θ'' and E(''g''(''T'')) = ''τ''(''θ''), then ''g''(''T'') is the uniformly minimum-variance unbiased estimator (UMVUE) of ''τ''(''θ'').


Statement

Let \vec{X} = X_1, X_2, \dots, X_n be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) f(x:\theta), where \theta \in \Omega is a parameter in the parameter space. Suppose Y = u(\vec{X}) is a sufficient statistic for ''θ'', and let \{ f_Y(y:\theta) : \theta \in \Omega \} be a complete family. If \varphi satisfies \operatorname{E}[\varphi(Y)] = \theta, then \varphi(Y) is the unique MVUE of ''θ''.
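As a concrete illustration of the statement, consider a sketch under an assumed model (the Poisson example and all parameter values below are ours, not the article's): for an i.i.d. Poisson(''θ'') sample, Y = \sum_i X_i is a complete sufficient statistic and \varphi(Y) = Y/n is unbiased for ''θ'', so the theorem identifies the sample mean as the unique MVUE. A minimal simulation comparing it with the crude unbiased estimator X_1:

```python
import numpy as np

# Minimal sketch, assuming an i.i.d. Poisson(theta) model for illustration.
# Y = sum(X_i) is complete and sufficient, and phi(Y) = Y/n is unbiased,
# so by Lehmann-Scheffe the sample mean is the unique MVUE.
rng = np.random.default_rng(0)
theta, n, reps = 3.0, 10, 100_000

samples = rng.poisson(theta, size=(reps, n))
crude = samples[:, 0]          # X_1: unbiased but crude
umvue = samples.mean(axis=1)   # Y/n: function of the complete sufficient statistic

print("mean of X_1:", crude.mean())  # both close to theta (unbiased)
print("mean of Y/n:", umvue.mean())
print("var  of X_1:", crude.var())   # close to theta = 3.0
print("var  of Y/n:", umvue.var())   # close to theta/n = 0.3
```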


Proof

By the Rao–Blackwell theorem, if Z is an unbiased estimator of ''θ'', then \varphi(Y) := \operatorname{E}[Z \mid Y] defines an unbiased estimator of ''θ'' whose variance is not greater than that of Z.

Now we show that this function is unique. Suppose W is another candidate MVUE of ''θ''. Then \psi(Y) := \operatorname{E}[W \mid Y] again defines an unbiased estimator of ''θ'' whose variance is not greater than that of W. Then

: \operatorname{E}[\varphi(Y) - \psi(Y)] = 0, \quad \theta \in \Omega.

Since \{ f_Y(y:\theta) : \theta \in \Omega \} is a complete family,

: \operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \implies \varphi(y) - \psi(y) = 0, \quad \theta \in \Omega,

and therefore \varphi is the unique function of Y with variance not greater than that of any other unbiased estimator. We conclude that \varphi(Y) is the MVUE.
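The uniqueness step can be seen numerically in the same assumed Poisson setup as above (our illustration, not the article's): two different crude unbiased estimators, once conditioned on the complete sufficient statistic Y, collapse to the same function Y/n.

```python
import numpy as np
from collections import defaultdict

# Sketch, assuming the Poisson(theta) setup from the previous example.
# Z = X_1 and W = (X_1 + X_2)/2 are both unbiased for theta; conditioning
# each on Y = sum(X_i) yields the same function phi(Y) = psi(Y) = Y/n.
rng = np.random.default_rng(1)
theta, n, reps = 3.0, 5, 200_000

samples = rng.poisson(theta, size=(reps, n))
Y = samples.sum(axis=1)
Z = samples[:, 0]
W = 0.5 * (samples[:, 0] + samples[:, 1])

# Empirical conditional expectations E[Z | Y = y] and E[W | Y = y],
# estimated by grouping the simulated samples on the value of Y.
phi, psi = defaultdict(list), defaultdict(list)
for y, z, w in zip(Y, Z, W):
    phi[y].append(z)
    psi[y].append(w)

common = sorted(phi, key=lambda v: len(phi[v]), reverse=True)[:5]
for y in sorted(common):  # the five most frequent values of Y
    print(int(y), np.mean(phi[y]), np.mean(psi[y]), y / n)  # all three agree
```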


Example when using a non-complete minimal sufficient statistic

An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016. Let X_1, \ldots, X_n be a random sample from a scale-uniform distribution X \sim U((1-k)\theta, (1+k)\theta) with unknown mean \operatorname{E}[X] = \theta and known design parameter k \in (0,1). In the search for "best" possible unbiased estimators for \theta, it is natural to consider X_1 as an initial (crude) unbiased estimator for \theta and then try to improve it. Since X_1 is not a function of T = \left( X_{(1)}, X_{(n)} \right), the minimal sufficient statistic for \theta (where X_{(1)} = \min_i X_i and X_{(n)} = \max_i X_i), it may be improved using the Rao–Blackwell theorem as follows:

: \hat{\theta}_{RB} = \operatorname{E}_\theta[X_1 \mid X_{(1)}, X_{(n)}] = \frac{X_{(1)} + X_{(n)}}{2}.

However, the following unbiased estimator can be shown to have lower variance:

: \hat{\theta}_{LV} = \frac{1}{k^2 \frac{n-1}{n+1} + 1} \cdot \frac{(1-k)X_{(1)} + (1+k)X_{(n)}}{2}.

And in fact, it could be even further improved when using the following estimator:

: \hat{\theta}_{\text{BAYES}} = \frac{n+1}{n} \left[ 1 - \frac{ \frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} - 1 }{ \left( \frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} \right)^{n+1} - 1 } \right] \frac{X_{(n)}}{1+k}.

The model is a scale model. Optimal equivariant estimators can then be derived for loss functions that are invariant.
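The claimed variance ordering of the three estimators can be checked with a short Monte Carlo sketch (the parameter values θ = 1, k = 0.5, n = 10 are our assumptions for illustration):

```python
import numpy as np

# Monte Carlo sketch of the scale-uniform example above; theta, k, n are
# illustrative assumptions. Compares the three unbiased estimators of theta.
rng = np.random.default_rng(2)
theta, k, n, reps = 1.0, 0.5, 10, 200_000

x = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
x_min, x_max = x.min(axis=1), x.max(axis=1)

theta_rb = (x_min + x_max) / 2
theta_lv = ((1 - k) * x_min + (1 + k) * x_max) / 2 / (k**2 * (n - 1) / (n + 1) + 1)
r = (x_min * (1 + k)) / (x_max * (1 - k))
theta_bayes = (n + 1) / n * (1 - (r - 1) / (r**(n + 1) - 1)) * x_max / (1 + k)

for name, est in [("RB", theta_rb), ("LV", theta_lv), ("BAYES", theta_bayes)]:
    print(f"{name:6s} mean={est.mean():.4f}  var={est.var():.6f}")
# Expected: all means close to theta, with var(RB) > var(LV) > var(BAYES).
```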


See also

* Basu's theorem
* Complete class theorem
* Rao–Blackwell theorem


References

* Galili, Tal; Meilijson, Isaac (2016). "An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator". ''The American Statistician''. 70 (1): 108–113.