In statistics, model specification is part of the process of building a statistical model: specification consists of selecting an appropriate functional form for the model and choosing which variables to include. For example, given
personal income y together with years of schooling s and on-the-job experience x, we might specify a functional relationship y = f(s, x) as follows:

: \ln y = \ln y_0 + \rho s + \beta_1 x + \beta_2 x^2 + \varepsilon

where \varepsilon is the unexplained error term, which is supposed to comprise independent and identically distributed Gaussian variables.
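As a brief illustration, here is a minimal sketch (in Python, with hypothetical coefficient values that are not from the source) of simulating data from a specification of this form and estimating it by ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical data: years of schooling s and on-the-job experience x
s = rng.uniform(8, 20, n)
x = rng.uniform(0, 30, n)

# Simulate log income according to the specified functional form:
#   ln y = ln y0 + rho*s + beta1*x + beta2*x^2 + eps,  eps ~ iid N(0, sigma^2)
ln_y0, rho, beta1, beta2, sigma = 2.0, 0.08, 0.05, -0.001, 0.3
eps = rng.normal(0.0, sigma, n)
ln_y = ln_y0 + rho * s + beta1 * x + beta2 * x**2 + eps

# Design matrix matching the chosen specification: intercept, s, x, x^2
X = np.column_stack([np.ones(n), s, x, x**2])

# Ordinary least squares estimates of the parameters
coef, *_ = np.linalg.lstsq(X, ln_y, rcond=None)
print(dict(zip(["ln_y0", "rho", "beta1", "beta2"], coef.round(4))))
```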
The statistician
Sir David Cox has said, "How the translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Specification error and bias
Specification error occurs when the functional form or the choice of independent variables poorly represents relevant aspects of the true data-generating process. In particular, bias (the expected value of the difference between an estimated parameter and the true underlying value) occurs if an independent variable is correlated with the errors inherent in the underlying process. There are several different possible causes of specification error; some are listed below.
*An inappropriate functional form could be employed.
*A variable omitted from the model may have a relationship with both the dependent variable and one or more of the independent variables (causing omitted-variable bias; see the simulation sketch after this list).
*An irrelevant variable may be included in the model (although this does not create bias, it involves
overfitting and so can lead to poor predictive performance).
*The dependent variable may be part of a system of
simultaneous equations (giving simultaneity bias).
Additionally,
measurement errors may affect the independent variables: while this is not a specification error, it can create statistical bias.
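The omitted-variable case listed above can be seen in a small simulation sketch (Python; all coefficients are hypothetical and not from the source): omitting a regressor that is correlated with both the outcome and an included regressor biases the estimate of the included regressor's coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# x1 is included in the model; x2 is correlated with x1 but will be omitted
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)
eps = rng.normal(scale=1.0, size=n)

# True data-generating process: y depends on both x1 and x2
y = 1.0 + 2.0 * x1 + 3.0 * x2 + eps

def ols(X, y):
    """Least-squares coefficients for design matrix X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

full = ols(np.column_stack([np.ones(n), x1, x2]), y)       # correct specification
misspec = ols(np.column_stack([np.ones(n), x1]), y)        # x2 omitted

print("coefficient on x1, full model:    ", round(full[1], 3))     # ~2.0 (true value)
print("coefficient on x1, x2 omitted:    ", round(misspec[1], 3))  # ~2 + 3*0.8 = 4.4 (biased)
```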
Note that all models will have some specification error. Indeed, in statistics there is a common aphorism that "
all models are wrong". In the words of Burnham & Anderson,
"Modeling is an art as well as a science and is directed toward finding a good approximating model ... as the basis for statistical inference".
Detection of misspecification
The
Ramsey RESET test can help test for specification error in
regression analysis.
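As a sketch of the idea behind RESET (a hand-rolled illustration rather than any packaged implementation), one augments the fitted regression with powers of its fitted values and tests, via an F-test, whether they add explanatory power; a significant result suggests the functional form is misspecified.

```python
import numpy as np
from scipy import stats

def reset_test(X, y, max_power=3):
    """RESET-style F-test: do powers of the fitted values improve the fit?"""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    rss_restricted = np.sum((y - fitted) ** 2)

    # Augment the design matrix with fitted^2, ..., fitted^max_power
    extra = np.column_stack([fitted ** p for p in range(2, max_power + 1)])
    X_aug = np.column_stack([X, extra])
    beta_aug, *_ = np.linalg.lstsq(X_aug, y, rcond=None)
    rss_unrestricted = np.sum((y - X_aug @ beta_aug) ** 2)

    q = extra.shape[1]                    # number of added regressors
    df_resid = n - X_aug.shape[1]
    f_stat = ((rss_restricted - rss_unrestricted) / q) / (rss_unrestricted / df_resid)
    p_value = stats.f.sf(f_stat, q, df_resid)
    return f_stat, p_value

# Example: a linear specification applied to data that actually contain a quadratic term
rng = np.random.default_rng(2)
x = rng.uniform(-3, 3, 500)
y = 1 + 2 * x + 1.5 * x**2 + rng.normal(scale=1.0, size=500)
X_linear = np.column_stack([np.ones_like(x), x])
print(reset_test(X_linear, y))  # small p-value -> evidence of misspecification
```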
In the example given above relating personal income to schooling and job experience, if the assumptions of the model are correct, then the least squares estimates of the parameters \hat{\rho} and \hat{\beta}_1 will be efficient and unbiased. Hence specification diagnostics usually involve testing the first to fourth moments of the residuals.
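A rough sketch of such moment-based residual diagnostics, assuming hypothetical simulated data and using SciPy's Jarque–Bera test for the third and fourth moments:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 400)
y = 0.5 + 1.2 * x + rng.normal(scale=0.8, size=400)

# Fit the assumed specification and obtain residuals
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# First four moments of the residuals
print("mean    :", resid.mean())               # ~0 by construction of OLS with an intercept
print("variance:", resid.var(ddof=X.shape[1]))
print("skewness:", stats.skew(resid))          # ~0 for Gaussian errors
print("kurtosis:", stats.kurtosis(resid))      # excess kurtosis ~0 for Gaussian errors

# Jarque-Bera test of normality based on skewness and kurtosis
jb = stats.jarque_bera(resid)
print("Jarque-Bera p-value:", jb.pvalue)       # large p -> no evidence against normality
```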
Model building
Building a model involves finding a set of relationships to represent the process that is generating the data. This requires avoiding all the sources of misspecification mentioned above.
One approach is to start with a model in general form that relies on a theoretical understanding of the data-generating process. Then the model can be fit to the data and checked for the various sources of misspecification, in a task called ''
statistical model validation''. Theoretical understanding can then guide the modification of the model in such a way as to retain theoretical validity while removing the sources of misspecification. But if it proves impossible to find a theoretically acceptable specification that fits the data, the theoretical model may have to be rejected and replaced with another one.
A quotation from Karl Popper is apposite here: "Whenever a theory appears to you as the only possible one, take this as a sign that you have neither understood the theory nor the problem which it was intended to solve".
Another approach to model building is to specify several different models as candidates, and then compare those candidate models to each other. The purpose of the comparison is to determine which candidate model is most appropriate for statistical inference. Common criteria for comparing models include the following:
''R''2, the Bayes factor, and the likelihood-ratio test together with its generalization, the relative likelihood. For more on this topic, see ''
statistical model selection''.
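For instance, two nested candidate specifications can be compared with a likelihood-ratio test; the sketch below assumes Gaussian errors and hypothetical data, and uses the standard chi-squared large-sample approximation for the test statistic.

```python
import numpy as np
from scipy import stats

def gaussian_ols_loglik(X, y):
    """Maximized Gaussian log-likelihood of an OLS fit."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = rss / n                       # maximum-likelihood estimate of the error variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 300)
y = 1 + 0.5 * x + 0.8 * x**2 + rng.normal(scale=0.5, size=300)

X_small = np.column_stack([np.ones_like(x), x])          # candidate 1: linear
X_large = np.column_stack([np.ones_like(x), x, x**2])    # candidate 2: quadratic

ll_small = gaussian_ols_loglik(X_small, y)
ll_large = gaussian_ols_loglik(X_large, y)

lr = 2 * (ll_large - ll_small)                 # likelihood-ratio statistic
df = X_large.shape[1] - X_small.shape[1]       # difference in parameter counts
p_value = stats.chi2.sf(lr, df)
print(f"LR = {lr:.2f}, p = {p_value:.4g}")     # small p favours the larger model
```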
See also
*Abductive reasoning
*Conceptual model
*Data analysis
*Data transformation (statistics)
*Design of experiments
*Durbin–Wu–Hausman test
*Exploratory data analysis
*Feature selection
*Heteroscedasticity, second-order statistical misspecification
*Information matrix test
*Model identification
*Principle of Parsimony
*Spurious relationship
*Statistical conclusion validity
*Statistical inference
*Statistical learning theory
Notes
Further reading
*Sapra, Sunil (2005). "A regression error specification test (RESET) for generalized linear models". ''Economics Bulletin'', 3(1), 1–6. http://economicsbulletin.vanderbilt.edu/2005/volume3/EB-04C50033A.pdf