In econometrics, Prais–Winsten estimation is a procedure meant to take care of serial correlation of type AR(1) in a linear model. Conceived by Sigbert Prais and Christopher Winsten in 1954, it is a modification of Cochrane–Orcutt estimation in the sense that it does not lose the first observation, which leads to greater efficiency and makes it a special case of feasible generalized least squares.


Theory

Consider the model

:y_t = \alpha + X_t \beta + \varepsilon_t,

where y_t is the time series of interest at time ''t'', \beta is a vector of coefficients, X_t is a matrix of explanatory variables, and \varepsilon_t is the error term. The error term can be serially correlated over time:

:\varepsilon_t = \rho \varepsilon_{t-1} + e_t, \quad |\rho| < 1,

where e_t is white noise. In addition to the Cochrane–Orcutt transformation, which is

:y_t - \rho y_{t-1} = \alpha(1-\rho) + (X_t - \rho X_{t-1})\beta + e_t

for ''t'' = 2, 3, ..., ''T'', the Prais–Winsten procedure makes a reasonable transformation for ''t'' = 1 in the following form:

:\sqrt{1-\rho^2}\,y_1 = \alpha\sqrt{1-\rho^2} + \left(\sqrt{1-\rho^2}\,X_1\right)\beta + \sqrt{1-\rho^2}\,\varepsilon_1.

Then the usual least squares estimation is done.
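As an illustration, the transformation above can be sketched in NumPy for a known \rho. The function and variable names are illustrative, and the simulated data serve only to demonstrate the procedure; this is a sketch, not a reference implementation.

```python
import numpy as np

def prais_winsten_transform(y, X, rho):
    """Apply the Prais-Winsten transformation for a known AR(1)
    coefficient rho, keeping the first observation.

    y : (T,) response vector
    X : (T, k) regressor matrix without a constant column
    Returns (y_star, Z_star), where Z_star includes the transformed
    intercept column, ready for ordinary least squares.
    """
    T, k = X.shape
    s = np.sqrt(1.0 - rho**2)
    y_star = np.empty(T)
    const = np.empty(T)
    X_star = np.empty((T, k))
    # t = 1: rescale by sqrt(1 - rho^2) instead of dropping the observation
    y_star[0] = s * y[0]
    const[0] = s
    X_star[0] = s * X[0]
    # t = 2..T: quasi-difference as in Cochrane-Orcutt
    y_star[1:] = y[1:] - rho * y[:-1]
    const[1:] = 1.0 - rho
    X_star[1:] = X[1:] - rho * X[:-1]
    return y_star, np.column_stack([const, X_star])

# Simulated example with AR(1) errors: true alpha = 1, beta = 2
rng = np.random.default_rng(0)
T, rho, alpha, beta = 200, 0.6, 1.0, 2.0
x = rng.normal(size=T)
eps = np.empty(T)
eps[0] = rng.normal() / np.sqrt(1 - rho**2)  # stationary start
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + rng.normal()
y = alpha + beta * x + eps

y_star, Z_star = prais_winsten_transform(y, x[:, None], rho)
theta_hat, *_ = np.linalg.lstsq(Z_star, y_star, rcond=None)
print(theta_hat)  # estimates of (alpha, beta)
```

Ordinary least squares on the transformed data then recovers the intercept and slope, since the transformed errors are (approximately) homoscedastic white noise.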


Estimation procedure

First notice that

:\mathrm{var}(\varepsilon_t) = \mathrm{var}(\rho\varepsilon_{t-1} + e_t) = \rho^2\,\mathrm{var}(\varepsilon_{t-1}) + \mathrm{var}(e_t).

Noting that for a stationary process the variance is constant over time,

:(1-\rho^2)\,\mathrm{var}(\varepsilon_t) = \mathrm{var}(e_t),

and thus

:\mathrm{var}(\varepsilon_t) = \frac{\mathrm{var}(e_t)}{1-\rho^2}.

Without loss of generality suppose the variance of the white noise is 1. To do the estimation in a compact way one must look at the autocovariance function of the error term considered in the model below:

:\mathrm{cov}(\varepsilon_t, \varepsilon_{t+h}) = \rho^{|h|}\,\mathrm{var}(\varepsilon_t) = \frac{\rho^{|h|}}{1-\rho^2}, \text{ for } h = 0, \pm 1, \pm 2, \dots.

It is easy to see that the variance–covariance matrix, \mathbf{\Omega}, of the model is

:\mathbf{\Omega} = \begin{bmatrix} \frac{1}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \frac{\rho^2}{1-\rho^2} & \cdots & \frac{\rho^{T-1}}{1-\rho^2} \\ \frac{\rho}{1-\rho^2} & \frac{1}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \cdots & \frac{\rho^{T-2}}{1-\rho^2} \\ \frac{\rho^2}{1-\rho^2} & \frac{\rho}{1-\rho^2} & \frac{1}{1-\rho^2} & \cdots & \frac{\rho^{T-3}}{1-\rho^2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{\rho^{T-1}}{1-\rho^2} & \frac{\rho^{T-2}}{1-\rho^2} & \frac{\rho^{T-3}}{1-\rho^2} & \cdots & \frac{1}{1-\rho^2} \end{bmatrix}.

Having \rho (or an estimate of it), we see that

:\hat{\theta} = (\mathbf{Z}^{\mathsf{T}}\mathbf{\Omega}^{-1}\mathbf{Z})^{-1}(\mathbf{Z}^{\mathsf{T}}\mathbf{\Omega}^{-1}\mathbf{Y}),

where \mathbf{Z} is a matrix of observations on the independent variable (X_t, ''t'' = 1, 2, ..., ''T'') including a vector of ones, \mathbf{Y} is a vector stacking the observations on the dependent variable (y_t, ''t'' = 1, 2, ..., ''T''), and \hat{\theta} includes the model parameters.
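The compact GLS computation above can be sketched in NumPy, assuming \rho is known and the white-noise variance is 1; the function names and the simulated data are illustrative only.

```python
import numpy as np

def ar1_covariance(T, rho):
    """Build Omega with entries rho**|i-j| / (1 - rho**2)."""
    idx = np.arange(T)
    return rho ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - rho**2)

def gls(Z, Y, Omega):
    """theta_hat = (Z' Omega^{-1} Z)^{-1} (Z' Omega^{-1} Y)."""
    Oi = np.linalg.inv(Omega)
    return np.linalg.solve(Z.T @ Oi @ Z, Z.T @ Oi @ Y)

# Simulated example: true alpha = 1, beta = 2
rng = np.random.default_rng(1)
T, rho = 100, 0.5
x = rng.normal(size=T)
Z = np.column_stack([np.ones(T), x])  # vector of ones plus regressor
eps = np.empty(T)
eps[0] = rng.normal() / np.sqrt(1 - rho**2)  # stationary start
for t in range(1, T):
    eps[t] = rho * eps[t - 1] + rng.normal()
Y = Z @ np.array([1.0, 2.0]) + eps

theta_hat = gls(Z, Y, ar1_covariance(T, rho))
print(theta_hat)  # estimates of (alpha, beta)
```

In practice one would avoid forming and inverting the dense T-by-T matrix \mathbf{\Omega} and instead apply the transformation of the previous section, which is numerically equivalent.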


Note

To see why the initial-observation transformation used by Prais–Winsten (1954) is reasonable, it is helpful to consider the mechanics of the generalized least squares estimation procedure sketched above. The inverse of \mathbf{\Omega} can be decomposed as \mathbf{\Omega}^{-1} = \mathbf{G}^{\mathsf{T}}\mathbf{G} with

:\mathbf{G} = \begin{bmatrix} \sqrt{1-\rho^2} & 0 & 0 & \cdots & 0 \\ -\rho & 1 & 0 & \cdots & 0 \\ 0 & -\rho & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}.

Pre-multiplying the model in matrix notation by this matrix gives the transformed model of Prais–Winsten.
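The decomposition can be verified numerically; the following is an illustrative check with the unit white-noise variance assumed above.

```python
import numpy as np

# Check that G'G = Omega^{-1} for the AR(1) covariance matrix Omega
# (unit white-noise variance assumed, as in the text).
T, rho = 5, 0.7
idx = np.arange(T)
Omega = rho ** np.abs(idx[:, None] - idx[None, :]) / (1.0 - rho**2)

G = np.eye(T)
G[0, 0] = np.sqrt(1.0 - rho**2)              # rescales the first row
G[np.arange(1, T), np.arange(T - 1)] = -rho  # quasi-differencing rows

print(np.allclose(G.T @ G, np.linalg.inv(Omega)))  # True
```

Because \mathbf{G}^{\mathsf{T}}\mathbf{G} = \mathbf{\Omega}^{-1}, least squares on the pre-multiplied data is exactly the GLS estimator, and the first row of \mathbf{G} is precisely the \sqrt{1-\rho^2} rescaling of the initial observation.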


Restrictions

The error term is still restricted to be of an AR(1) type. If \rho is not known, a recursive procedure (Cochrane–Orcutt estimation) or a grid search (Hildreth–Lu estimation) may be used to make the estimation feasible. Alternatively, a full information maximum likelihood procedure that estimates all parameters simultaneously has been suggested by Beach and MacKinnon.
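A minimal sketch of such a recursive feasible procedure, alternating between estimating \rho from the residuals and re-fitting on Prais–Winsten-transformed data; the function name, iteration count, and simulated data are illustrative, not taken from the original papers.

```python
import numpy as np

def feasible_prais_winsten(Y, Z, n_iter=10):
    """Iterate: estimate rho from residuals, then re-run least
    squares on Prais-Winsten-transformed data. Z must include a
    column of ones."""
    theta = np.linalg.lstsq(Z, Y, rcond=None)[0]  # OLS starting values
    rho = 0.0
    for _ in range(n_iter):
        resid = Y - Z @ theta
        # lag-1 autoregression of the residuals estimates rho
        rho = (resid[1:] @ resid[:-1]) / (resid[:-1] @ resid[:-1])
        s = np.sqrt(1.0 - rho**2)
        # transform, keeping the rescaled first observation
        Y_star = np.concatenate([[s * Y[0]], Y[1:] - rho * Y[:-1]])
        Z_star = np.vstack([s * Z[:1], Z[1:] - rho * Z[:-1]])
        theta = np.linalg.lstsq(Z_star, Y_star, rcond=None)[0]
    return theta, rho

# Simulated example: true alpha = 1, beta = 2, rho = 0.6
rng = np.random.default_rng(2)
T, rho_true = 300, 0.6
x = rng.normal(size=T)
Z = np.column_stack([np.ones(T), x])
eps = np.empty(T)
eps[0] = rng.normal() / np.sqrt(1 - rho_true**2)
for t in range(1, T):
    eps[t] = rho_true * eps[t - 1] + rng.normal()
Y = Z @ np.array([1.0, 2.0]) + eps

theta_hat, rho_hat = feasible_prais_winsten(Y, Z)
print(theta_hat, rho_hat)
```

A fixed number of iterations is used here for simplicity; production code would instead iterate until the change in the estimate of \rho falls below a tolerance.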

