Neyman Construction

Neyman construction, named after Jerzy Spława-Neyman, is a frequentist method of constructing an interval at a confidence level C such that, if the experiment is repeated many times, the interval will contain the true value of the parameter a fraction C of the time.


Theory

Assume X_1, X_2, \ldots, X_n are random variables with joint pdf f(x_1, x_2, \ldots, x_n \mid \theta_1, \theta_2, \ldots, \theta_k), which depends on k unknown parameters. For convenience, let \Theta be the sample space defined by the n random variables, and define a sample point in the sample space as X = (X_1, X_2, \ldots, X_n).
Neyman originally proposed defining two functions, L(x) and U(x), such that for any sample point X:
* L(X) \leq U(X) for all X \in \Theta
* L and U are single-valued and defined for every X.

Given an observation X', the probability that \theta_1 lies between L(X') and U(X') is
: P(L(X') \leq \theta_1 \leq U(X') \mid X'),
which is necessarily either 0 or 1, because \theta_1 is not a random variable. Such probabilities fail to support meaningful inference about \theta_1, since under the frequentist construct the model parameters are unknown constants and are not permitted to be random variables. For example, if \theta_1 = 5, then P(2 \leq 5 \leq 10) = 1; likewise, if \theta_1 = 11, then P(2 \leq 11 \leq 10) = 0. As Neyman describes in his 1937 paper, suppose instead that we consider all points in the sample space, that is, all X \in \Theta, as a system of random variables defined by the joint pdf above. Since L and U are functions of X, they too are random variables, and one can examine the meaning of the following probability statement:
:If \theta_1' is the true value of \theta_1, we can define L and U such that the probability that L(X) \leq \theta_1' and that \theta_1' \leq U(X) equals a pre-specified confidence level C. That is,
: P(L(X) \leq \theta_1' \leq U(X) \mid \theta_1') = C,
where 0 \leq C \leq 1, and L(X) and U(X) are the lower and upper confidence limits for \theta_1.
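As an illustrative case (not from the original text), suppose X \sim N(\theta_1, 1) and take
: L(X) = X - 1.96, \qquad U(X) = X + 1.96.
Then for every possible value \theta_1',
: P(L(X) \leq \theta_1' \leq U(X) \mid \theta_1') = P(|X - \theta_1'| \leq 1.96) = 0.95,
so these L and U achieve C = 0.95 regardless of the unknown constant \theta_1'.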


Coverage probability

The coverage probability, C, for the Neyman construction is the fraction of experiments in which the confidence interval contains the true value of interest. Conventionally, the coverage probability is set to 95%. For the Neyman construction, the coverage probability may be set to any value C with 0 < C < 1.
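The coverage probability can be checked by simulation. The sketch below (a minimal illustration, not part of the original construction) assumes NumPy/SciPy and a normal model with known standard deviation; the function and parameter names are hypothetical.

```python
import numpy as np
from scipy import stats

def estimate_coverage(theta_true=5.0, sigma=2.0, n=10, C=0.95,
                      n_trials=100_000, seed=0):
    """Estimate by simulation how often the z-interval covers theta_true."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(0.5 + C / 2)            # Z_{alpha/2} with alpha = 1 - C
    x = rng.normal(theta_true, sigma, size=(n_trials, n))
    xbar = x.mean(axis=1)                      # sample mean of each experiment
    half = z * sigma / np.sqrt(n)              # half-width of the interval
    covered = (xbar - half <= theta_true) & (theta_true <= xbar + half)
    return covered.mean()                      # should be close to C

print(estimate_coverage())  # approximately 0.95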


Implementation

A Neyman construction can be carried out by performing multiple experiments that construct data sets corresponding to each given value of the parameter. The experiments are fitted with conventional methods, and the space of fitted parameter values constitutes the band from which the confidence interval can be selected.
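A minimal sketch of this procedure, assuming a single Gaussian measurement x \sim N(\theta, \sigma^2) with central (equal-tailed) acceptance intervals; the grid, helper names, and the inversion step are illustrative assumptions, not a standard API.

```python
import numpy as np
from scipy import stats

def neyman_band(theta_grid, C=0.95, sigma=1.0):
    """Central acceptance interval [x_lo, x_hi] for each theta on the grid,
    assuming the measurement x ~ N(theta, sigma^2)."""
    alpha = 1.0 - C
    x_lo = stats.norm.ppf(alpha / 2, loc=theta_grid, scale=sigma)
    x_hi = stats.norm.ppf(1 - alpha / 2, loc=theta_grid, scale=sigma)
    return x_lo, x_hi

def invert_band(x_obs, theta_grid, x_lo, x_hi):
    """Confidence interval: all theta whose acceptance interval contains x_obs."""
    accepted = (x_lo <= x_obs) & (x_obs <= x_hi)
    return theta_grid[accepted].min(), theta_grid[accepted].max()

theta_grid = np.linspace(-10, 10, 2001)
x_lo, x_hi = neyman_band(theta_grid)
print(invert_band(0.3, theta_grid, x_lo, x_hi))  # ~ (0.3 - 1.96, 0.3 + 1.96)
```

The inversion step is the essence of the construction: the interval reported for an observation x consists of exactly those parameter values that would have "accepted" x, which is what guarantees the coverage C.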


Classic example

Suppose X \sim N(\theta, \sigma^2), where \theta and \sigma^2 are unknown constants, and we wish to estimate \theta. We can define two single-valued functions, L and U, by the process above such that, given a pre-specified confidence level C and a random sample X^* = (x_1, x_2, \ldots, x_n),
: L(X^*) = \bar{x} - t \frac{s}{\sqrt{n}}
: U(X^*) = \bar{x} + t \frac{s}{\sqrt{n}}
where s/\sqrt{n} is the standard error, and the sample mean and standard deviation are
: \bar{x} = \frac{1}{n} \sum_{i=1}^n x_i = \frac{1}{n}(x_1 + x_2 + \cdots + x_n)
: s = \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2}.
The factor t is taken from a ''t'' distribution with n-1 degrees of freedom, t = t_{\alpha/2,\, n-1}, where \alpha = 1 - C.
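The sketch below, assuming NumPy/SciPy, computes this t-based interval for a small sample; the data values are made up purely for illustration.

```python
import numpy as np
from scipy import stats

def t_interval(x, C=0.95):
    """Classic t confidence interval for the mean, per the formulas above."""
    n = len(x)
    xbar = np.mean(x)
    s = np.std(x, ddof=1)                       # sample standard deviation (n-1)
    t = stats.t.ppf(1 - (1 - C) / 2, df=n - 1)  # t_{alpha/2, n-1}
    half = t * s / np.sqrt(n)                   # t times the standard error
    return xbar - half, xbar + half

x = np.array([4.1, 5.3, 4.8, 5.9, 5.0, 4.4])    # illustrative data
print(t_interval(x))
```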


Another example

Suppose X_1, X_2, \ldots, X_n are iid random variables with X_i \sim N(\mu, \sigma^2), where \sigma^2 is known, and let T = (X_1, X_2, \ldots, X_n). We wish to construct a confidence interval for \mu with confidence level C. We know \bar{X} is sufficient for \mu. So,
: P\left(-Z_{\alpha/2} \leq \frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \leq Z_{\alpha/2}\right) = C
: P\left(-Z_{\alpha/2} \frac{\sigma}{\sqrt{n}} \leq \bar{X} - \mu \leq Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}\right) = C
: P\left(\bar{X} - Z_{\alpha/2} \frac{\sigma}{\sqrt{n}} \leq \mu \leq \bar{X} + Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}\right) = C
where \alpha = 1 - C. This produces a 100C\% confidence interval for \mu, where
: L(T) = \bar{X} - Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}
: U(T) = \bar{X} + Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}.
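A corresponding sketch, assuming NumPy/SciPy; the data and the known value \sigma = 2 are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def z_interval(x, sigma, C=0.95):
    """Known-variance z confidence interval for the mean, per the pivot above."""
    n = len(x)
    xbar = np.mean(x)
    z = stats.norm.ppf(1 - (1 - C) / 2)  # Z_{alpha/2} with alpha = 1 - C
    half = z * sigma / np.sqrt(n)
    return xbar - half, xbar + half

x = np.array([4.1, 5.3, 4.8, 5.9, 5.0, 4.4])  # illustrative data
print(z_interval(x, sigma=2.0))
```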


See also

* Probability interpretations

