Information Matrix Test
In econometrics, the information matrix test is used to determine whether a regression model is misspecified. The test was developed by Halbert White, who observed that in a correctly specified model and under standard regularity assumptions, the Fisher information matrix can be expressed in either of two ways: as the outer product of the gradient of the log-likelihood function, or as a function of its Hessian matrix. Consider a linear model \mathbf{y} = \mathbf{X} \boldsymbol\beta + \mathbf{u}, where the errors \mathbf{u} are assumed to be distributed \mathrm{N}(0, \sigma^2 \mathbf{I}). If the parameters \boldsymbol\beta and \sigma^2 are stacked in the vector \boldsymbol\theta^{\mathsf{T}} = \begin{bmatrix} \boldsymbol\beta & \sigma^2 \end{bmatrix}, the resulting log-likelihood function is
:\ell(\boldsymbol\theta) = -\frac{n}{2} \log \sigma^2 - \frac{1}{2\sigma^2} \left( \mathbf{y} - \mathbf{X}\boldsymbol\beta \right)^{\mathsf{T}} \left( \mathbf{y} - \mathbf{X}\boldsymbol\beta \right)
The information matrix can then be expressed as
:\mathbf{I}(\boldsymbol\theta) = \operatorname{E} \left[ \left( \frac{\partial \ell(\boldsymbol\theta)}{\partial \boldsymbol\theta} \right) \left( \frac{\partial \ell(\boldsymbol\theta)}{\partial \boldsymbol\theta} \right)^{\mathsf{T}} \right]
that is, the expected value of the outer product of the gradient (the score) ...
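This equality is straightforward to check numerically. The following is a minimal Python sketch, not taken from the source: the simulated data, sample size, and variable names are my own illustrative choices. It fits the Gaussian linear model by maximum likelihood and compares the outer-product-of-gradient (OPG) estimate of the information matrix with the negative-average-Hessian estimate; under correct specification the two should approximately coincide, and their divergence is what White's test measures.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 5000, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ np.array([1.0, 2.0]) + rng.normal(scale=1.5, size=n)  # correctly specified

    # Maximum-likelihood estimates for the Gaussian linear model
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    e = y - X @ beta_hat
    sigma2 = e @ e / n

    # Per-observation score contributions for theta = (beta, sigma^2)
    s_beta = X * (e / sigma2)[:, None]                 # d ell_i / d beta
    s_sig = -0.5 / sigma2 + e**2 / (2 * sigma2**2)     # d ell_i / d sigma^2
    S = np.column_stack([s_beta, s_sig])

    # First form: expected outer product of the gradient (OPG)
    I_opg = S.T @ S / n

    # Second form: negative expected Hessian
    H = np.zeros((k + 1, k + 1))
    H[:k, :k] = -(X.T @ X) / sigma2
    H[:k, k] = H[k, :k] = -(X.T @ e) / sigma2**2       # ~0 at the MLE
    H[k, k] = n / (2 * sigma2**2) - (e @ e) / sigma2**3
    I_hess = -H / n

    # The two estimates agree up to sampling noise when the model is correct
    print(np.round(I_opg, 3))
    print(np.round(I_hess, 3))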



Econometrics
Econometrics is an application of statistical methods to economic data in order to give empirical content to economic relationships. M. Hashem Pesaran (1987). "Econometrics", ''The New Palgrave: A Dictionary of Economics'', v. 2, pp. 8–22. Reprinted in J. Eatwell ''et al.'', eds. (1990). ''Econometrics: The New Palgrave'', pp. 1–34. Abstract (2008 revision by J. Geweke, J. Horowitz, and H. P. Pesaran). More precisely, it is "the quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference." An introductory economics textbook describes econometrics as allowing economists "to sift through mountains of data to extract simple relationships." Jan Tinbergen is one of the two founding fathers of econometrics. The other, Ragnar Frisch, also coined the term in the sense in which it is used today. A basic tool for econometrics is the multiple linear regression model. ''Econome ...
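As a quick illustration of the multiple linear regression model mentioned above, here is a minimal Python sketch; the wage-equation variables and coefficient values are hypothetical, chosen only to show the mechanics of ordinary least squares.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    education = rng.uniform(8, 20, size=n)    # hypothetical regressors
    experience = rng.uniform(0, 30, size=n)
    log_wage = 0.5 + 0.08 * education + 0.02 * experience + rng.normal(scale=0.3, size=n)

    # Multiple linear regression by ordinary least squares
    X = np.column_stack([np.ones(n), education, experience])
    b, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
    print(b)    # approximately (0.5, 0.08, 0.02)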


Statistical Model Specification
In statistics, model specification is part of the process of building a statistical model: specification consists of selecting an appropriate functional form for the model and choosing which variables to include. For example, given personal income y together with years of schooling s and on-the-job experience x, we might specify a functional relationship y = f(s,x) as follows:
:\ln y = \ln y_0 + \rho s + \beta_1 x + \beta_2 x^2 + \varepsilon
where \varepsilon is the unexplained error term, which is supposed to comprise independent and identically distributed Gaussian variables. The statistician Sir David Cox has said, "How the translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Specification error and bias
Specification error occurs when the functional form or the choice of independent variables poorly represents relevant aspects of the true data-generating process. In particular, bias (the expected value of th ...
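A minimal sketch of estimating the specification above by least squares; the coefficient values and simulated data are illustrative assumptions, not from the source.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 1000
    s = rng.uniform(8, 20, size=n)     # years of schooling
    x = rng.uniform(0, 40, size=n)     # on-the-job experience

    # Simulate from the specified form: ln y = ln y0 + rho*s + b1*x + b2*x^2 + eps
    ln_y0, rho, b1, b2 = 1.0, 0.10, 0.05, -0.001   # illustrative values
    ln_y = ln_y0 + rho * s + b1 * x + b2 * x**2 + rng.normal(scale=0.25, size=n)

    # Estimate the specification by ordinary least squares
    Z = np.column_stack([np.ones(n), s, x, x**2])
    coef, *_ = np.linalg.lstsq(Z, ln_y, rcond=None)
    print(coef)    # approximately (1.0, 0.10, 0.05, -0.001)

Omitting the x^2 term from Z would be a specification error of exactly the kind described above, and the remaining coefficient estimates would be biased.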


Halbert White
Halbert Lynn White Jr. (November 19, 1950 – March 31, 2012) was the Chancellor's Associates Distinguished Professor of Economics at the University of California, San Diego, and a Fellow of the Econometric Society and the American Academy of Arts and Sciences.
Education and career
White, a native of Kansas City, Missouri, graduated salutatorian from Southwest High School in 1968. He went on to study at Princeton University, receiving his BA in economics in 1972. He earned his PhD in economics at the Massachusetts Institute of Technology in 1976, under the supervision of Jerry A. Hausman and Robert Solow. White spent his first years as an assistant professor at the University of Rochester before moving to the University of California, San Diego (UCSD) in 1979. He remained at UCSD until his untimely death from cancer.
Research
White was well known in the field of econometrics for his 1980 paper on robust standard errors (which is among the most-cited papers in economics since 1 ...




Econometrica
''Econometrica'' is a peer-reviewed academic journal of economics, publishing articles in many areas of economics, especially econometrics. It is published by Wiley-Blackwell on behalf of the Econometric Society. The current editor-in-chief is Guido Imbens.
History
''Econometrica'' was established in 1933. Its first editor was Ragnar Frisch, recipient of the first Nobel Memorial Prize in Economic Sciences in 1969, who served as an editor from 1933 to 1954. Although ''Econometrica'' is currently published entirely in English, the first few issues also contained scientific articles written in French.
Indexing and abstracting
''Econometrica'' is abstracted and indexed in:
* Scopus
* EconLit
* Social Sciences Citation Index
According to the ''Journal Citation Reports'', the journal has a 2020 impact factor ...


Fisher Information Matrix
In mathematical statistics, the Fisher information is a way of measuring the amount of information that an observable random variable ''X'' carries about an unknown parameter ''θ'' of a distribution that models ''X''. Formally, it is the variance of the score, or the expected value of the observed information. The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized and explored by the statistician Sir Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test. In Bayesian statistics, the Fisher information plays a role in the derivation of non-informative prior distributions according to Jeffreys' rule. It also appears as the large-sample covariance of the posterior distribution, provided that the prior is ...
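The two characterizations just given, the variance of the score and the expected observed information, can be verified by simulation. A minimal Python sketch for a Bernoulli(p) model follows; the parameter value and sample size are my own illustrative choices. Both quantities approach the closed form 1/(p(1-p)).

    import numpy as np

    p = 0.3
    x = np.random.default_rng(3).binomial(1, p, size=200_000)

    # Score: derivative of the Bernoulli log-likelihood in p
    score = x / p - (1 - x) / (1 - p)

    # Observed information: negative second derivative of the log-likelihood
    obs_info = x / p**2 + (1 - x) / (1 - p)**2

    print(score.var())         # variance of the score
    print(obs_info.mean())     # expected observed information
    print(1 / (p * (1 - p)))   # closed form; all three are ~4.76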


Outer Product
In linear algebra, the outer product of two coordinate vectors is the matrix whose entries are all products of an element in the first vector with an element in the second vector. If the two coordinate vectors have dimensions ''n'' and ''m'', then their outer product is an ''n'' × ''m'' matrix. More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra. The outer product contrasts with:
* The dot product (a special case of "inner product"), which takes a pair of coordinate vectors as input and produces a scalar
* The Kronecker product, which takes a pair of matrices as input and produces a block matrix
* Standard matrix multiplication
Definition
Given two vectors of size m \times 1 and n \times 1 respectively
:\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} ...
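A small numerical illustration with arbitrary example vectors:

    import numpy as np

    u = np.array([1, 2, 3])    # m = 3
    v = np.array([4, 5])       # n = 2

    # Outer product: entry (i, j) is u_i * v_j, giving a 3 x 2 matrix
    print(np.outer(u, v))
    # [[ 4  5]
    #  [ 8 10]
    #  [12 15]]

    # Contrast with the dot product, which needs equal lengths and returns a scalar
    print(np.dot(u, np.array([4, 5, 6])))    # 32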



Gradient
In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p gives the direction and the rate of fastest increase. The gradient transforms like a vector under change of basis of the space of variables of f. If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to minimize a function by gradient descent. In coordinate-free terms, the gradient of a function f(\mathbf{r}) may be defined by:
:df = \nabla f \cdot d\mathbf{r}
where df is the total infinitesimal change in f for a ...
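A minimal numerical sketch of the steepest-ascent property; the function f(x, y) = x^2 + 3y^2 and the test point are my own example. Stepping a small distance along the normalized gradient increases f faster (to first order) than stepping along any other unit direction.

    import numpy as np

    def f(p):
        x, y = p
        return x**2 + 3 * y**2           # example scalar field

    def grad_f(p):
        x, y = p
        return np.array([2 * x, 6 * y])  # analytic gradient

    p = np.array([1.0, 1.0])
    g = grad_f(p)
    h = 1e-3

    # Gain from a small step along the normalized gradient...
    print(f(p + h * g / np.linalg.norm(g)) - f(p))

    # ...exceeds the gain along random unit directions (to first order in h)
    rng = np.random.default_rng(4)
    for _ in range(3):
        d = rng.normal(size=2)
        d /= np.linalg.norm(d)
        print(f(p + h * d) - f(p))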


Hessian Matrix
In mathematics, the Hessian matrix, Hessian or (less commonly) Hesse matrix is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field. It describes the local curvature of a function of many variables. The Hessian matrix was developed in the 19th century by the German mathematician Ludwig Otto Hesse and later named after him. Hesse originally used the term "functional determinants". The Hessian is sometimes denoted by H or \nabla\nabla or \nabla^2 or \nabla\otimes\nabla or D^2.
Definitions and properties
Suppose f : \R^n \to \R is a function taking as input a vector \mathbf{x} \in \R^n and outputting a scalar f(\mathbf{x}) \in \R. If all second-order partial derivatives of f exist, then the Hessian matrix \mathbf{H} of f is a square n \times n matrix, usually defined and arranged as
:\mathbf{H}_f = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1 \, \partial x_n} \\ \dfrac{\partial^2 f}{\partial x_2 \, \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2 \, \partial x_n} \\ \vdots & \vdots ...
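Since the Hessian is just the matrix of second partials, it can be approximated by central finite differences; the function below is my own illustrative example, with analytic Hessian [[6x, 2], [2, 2]].

    import numpy as np

    def f(p):
        x, y = p
        return x**3 + 2 * x * y + y**2    # example function

    def hessian_fd(f, p, h=1e-4):
        """Central finite-difference approximation of the Hessian of f at p."""
        n = p.size
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                e_i, e_j = np.zeros(n), np.zeros(n)
                e_i[i], e_j[j] = h, h
                H[i, j] = (f(p + e_i + e_j) - f(p + e_i - e_j)
                           - f(p - e_i + e_j) + f(p - e_i - e_j)) / (4 * h**2)
        return H

    p = np.array([1.0, 2.0])
    print(hessian_fd(f, p))    # approximately [[6, 2], [2, 2]], and symmetric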




Likelihood Function
A likelihood function (often simply called the likelihood) measures how well a statistical model explains observed data by calculating the probability of seeing that data under different parameter values of the model. It is constructed from the joint probability distribution of the random variable that (presumably) generated the observations. When evaluated on the actual data points, it becomes a function solely of the model parameters. In maximum likelihood estimation, the argument that maximizes the likelihood function serves as a point estimate for the unknown parameter, while the Fisher information (often approximated by the negative Hessian of the log-likelihood at the maximum) gives an indication of the estimate's precision. In contrast, in Bayesian statistics, the estimate of interest is the ''converse'' of the likelihood, the so-called posterior probability of the parameter given the observed data, which is calculated via Bayes' rule.
Definition
The likelihood function, ...
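A minimal sketch of the defining idea, that the observed data are held fixed while the parameter varies; the data values and the known scale parameter are my own illustrative assumptions. Scanning a Gaussian log-likelihood over candidate means recovers the sample mean as the maximizer.

    import numpy as np

    data = np.array([2.1, 1.9, 2.4, 2.0, 2.6])    # fixed observations

    def log_likelihood(mu, sigma=0.5):
        # Sum of Gaussian log densities, viewed as a function of mu alone
        return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                      - (data - mu)**2 / (2 * sigma**2))

    mus = np.linspace(1.0, 3.0, 201)
    ll = np.array([log_likelihood(m) for m in mus])
    print(mus[ll.argmax()])    # 2.2, the sample mean: the maximum-likelihood estimate
    print(data.mean())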


Score (statistics)
In statistics, the score (or informant) is the gradient of the log-likelihood function with respect to the parameter vector. Evaluated at a particular value of the parameter vector, the score indicates the steepness of the log-likelihood function and thereby the sensitivity to infinitesimal changes to the parameter values. If the log-likelihood function is continuous over the parameter space, the score will vanish at a local maximum or minimum; this fact is used in maximum likelihood estimation to find the parameter values that maximize the likelihood function. Since the score is a function of the observations, which are subject to sampling error, it lends itself to a test statistic known as the ''score test'', in which the parameter is held at a particular value. Further, the ratio of two likelihood functions evaluated at two distinct parameter values can ...
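A minimal numerical sketch for a Gaussian model with known variance; the parameter values are my own illustration. The score for the mean is \sum_i (x_i - \mu)/\sigma^2, which vanishes exactly at the sample mean, the maximum-likelihood estimate.

    import numpy as np

    rng = np.random.default_rng(5)
    mu_true, sigma = 2.0, 1.0
    x = rng.normal(mu_true, sigma, size=10_000)

    def score(mu):
        # Gradient of the Gaussian log-likelihood with respect to mu
        return np.sum(x - mu) / sigma**2

    print(score(x.mean()))    # 0 up to rounding: the score vanishes at the MLE
    print(score(1.5))         # positive: the log-likelihood rises as mu moves toward the data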


Random Matrix
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable—that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.
Applications
Physics
In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the ...
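As a small concrete example, one can sample a symmetric matrix with Gaussian entries and inspect its spectrum; the size and the scaling below are my own illustrative choices. With this normalization the eigenvalues fill the interval [-2, 2] predicted by Wigner's semicircle law as the matrix grows.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 1000

    # Symmetric (GOE-like) random matrix with Gaussian entries
    A = rng.normal(size=(n, n))
    H = (A + A.T) / np.sqrt(2 * n)    # scaling keeps the spectrum on a fixed interval

    eigs = np.linalg.eigvalsh(H)      # eigenvalues of a real symmetric matrix
    print(eigs.min(), eigs.max())     # approximately -2 and 2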