Park Test

In econometrics, the Park test is a test for heteroscedasticity. The test is based on the method proposed by Rolla Edward Park for estimating linear regression parameters in the presence of heteroscedastic error terms.


Background

In regression analysis, heteroscedasticity refers to unequal variances of the random error terms \epsilon_i, such that

\operatorname{Var}(\epsilon_i) = E(\epsilon_i^2) - E(\epsilon_i)^2 = E(\epsilon_i^2) = \sigma_i^2.

It is assumed that E(\epsilon_i) = 0. The above variance varies with i, the i^\text{th} trial in an experiment or the i^\text{th} case or observation in a dataset. Equivalently, heteroscedasticity refers to unequal conditional variances in the response variables Y_i, such that

\operatorname{Var}(Y_i \mid X_i) = \sigma_i^2,

again a value that depends on i – or, more specifically, a value that is conditional on the values of one or more of the regressors X.

Homoscedasticity, one of the basic Gauss–Markov assumptions of ordinary least squares linear regression modeling, refers to equal variance in the random error terms regardless of the trial or observation, such that

\operatorname{Var}(\epsilon_i) = \sigma^2,

a constant.
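
To make the distinction concrete, the following minimal Python sketch (a hypothetical illustration, not part of Park's treatment) draws many error terms for each of four observations and compares their empirical variances: constant across observations in the homoscedastic case, growing with the regressor in the heteroscedastic case.

import numpy as np

# Hypothetical illustration: simulate error terms for four observations
# and compare the empirical variance across observations.
rng = np.random.default_rng(0)
x = np.array([1.0, 2.0, 4.0, 8.0])   # regressor values for four observations
reps = 100_000

homo = rng.normal(0.0, 2.0, size=(reps, x.size))        # Var(eps_i) = 4 for every i
hetero = rng.normal(0.0, 2.0 * x, size=(reps, x.size))  # Var(eps_i) = (2*x_i)^2 varies with i

print(homo.var(axis=0))    # roughly [4, 4, 4, 4]      -> homoscedastic
print(hetero.var(axis=0))  # roughly [4, 16, 64, 256]  -> heteroscedastic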


Test description

Park, noting a standard recommendation of assuming proportionality between the error term variance and the square of the regressor, suggested instead that analysts 'assume a structure for the variance of the error term' and proposed one such structure:

\ln(\sigma_i^2) = \ln(\sigma^2) + \gamma \ln(X_i) + v_i

in which the error terms v_i are considered well behaved. This relationship is used as the basis for the test.

The modeler first runs the unadjusted regression

Y_i = \beta_0 + \beta_1 X_{1i} + \cdots + \beta_{p-1} X_{(p-1)i} + \epsilon_i

which contains ''p'' − 1 regressors, and then squares and takes the natural logarithm of each of the residuals \hat{\epsilon}_i, which serve as estimators of the \epsilon_i. The squared residuals \hat{\epsilon}_i^2 in turn estimate \sigma_i^2. If a regression of \ln \hat{\epsilon}_i^2 on the natural logarithm of one or more of the regressors X_i yields statistically significant non-zero estimates \hat{\gamma}, a connection between the residuals and the regressors is revealed: the null hypothesis of homoscedasticity is rejected and heteroscedasticity is concluded to be present.
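
The procedure can be sketched in Python. This is a minimal sketch under stated assumptions, not a definitive implementation: it assumes the statsmodels package, a single hypothetical regressor X, and simulated data whose error variance grows with X. With several regressors, each log-regressor would receive its own coefficient and significance test.

import numpy as np
import statsmodels.api as sm

# Hypothetical example data: one regressor X with error variance growing in X.
rng = np.random.default_rng(1)
n = 200
X = rng.uniform(1.0, 10.0, size=n)
y = 2.0 + 3.0 * X + rng.normal(0.0, 0.5 * X, size=n)

# Step 1: run the unadjusted regression and collect the residuals.
ols = sm.OLS(y, sm.add_constant(X)).fit()
resid = ols.resid

# Step 2: regress ln(residual^2) on ln(X); the slope estimates gamma.
park = sm.OLS(np.log(resid ** 2), sm.add_constant(np.log(X))).fit()

print("gamma_hat:", park.params[1])
print("p-value:  ", park.pvalues[1])
# A statistically significant non-zero gamma_hat leads to rejecting the
# null hypothesis of homoscedasticity.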


See also

* Breusch–Pagan test
* Glejser test
* Goldfeld–Quandt test
* White test


Notes

The test has been discussed in econometrics textbooks. Stephen Goldfeld and Richard E. Quandt raise concerns about the assumed structure, cautioning that the v_i may be heteroscedastic and otherwise violate the assumptions of ordinary least squares regression. (Goldfeld, Stephen M.; Quandt, Richard E. (1972), ''Nonlinear Methods in Econometrics'', Amsterdam: North Holland Publishing Company, pp. 93–94; referred to in: Gujarati, Damodar (1988), ''Basic Econometrics'', 2nd ed., New York: McGraw-Hill, p. 329.)

