Bayesian Multivariate Linear Regression
In statistics, Bayesian multivariate linear regression is a Bayesian approach to multivariate linear regression, i.e. linear regression where the predicted outcome is a vector of correlated random variables rather than a single scalar random variable. A more general treatment of this approach can be found in the article MMSE estimator.

Details

Consider a regression problem where the dependent variable to be predicted is not a single real-valued scalar but an ''m''-length vector of correlated real numbers. As in the standard regression setup, there are ''n'' observations, where each observation ''i'' consists of ''k''−1 explanatory variables, grouped into a vector \mathbf{x}_i of length ''k'' (where a dummy variable with a value of 1 has been added to allow for an intercept coefficient). This can be viewed as a set of ''m'' related regression problems for each observation ''i'':
:\begin{align} y_{i,1} &= \mathbf{x}_i^\mathsf{T}\boldsymbol\beta_1 + \epsilon_{i,1} \\ &\;\;\vdots \\ y_{i,m} &= \mathbf{x}_i^\mathsf{T}\boldsymbol\beta_m + \epsilon_{i,m} \end{align}
where the set of errors \{\epsilon_{i,1},\ldots,\epsilon_{i,m}\} are all correlated.
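To make the setup concrete, here is a minimal sketch in Python (NumPy assumed) that simulates this model and recovers the coefficient matrix with the ordinary least-squares estimate; the dimensions n, k, m, the variable names, and the specific error covariance are illustrative choices, not part of the article. Under a flat (noninformative) prior, the Bayesian posterior mean of the coefficient matrix coincides with this estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 3, 2  # observations, predictors (incl. intercept), outcomes

# Design matrix with a leading column of ones for the intercept.
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

B = rng.normal(size=(k, m))          # true coefficient matrix (k x m)
Sigma = np.array([[1.0, 0.6],        # error covariance: the m outcomes
                  [0.6, 1.0]])       # are correlated across equations
E = rng.multivariate_normal(np.zeros(m), Sigma, size=n)

Y = X @ B + E                        # n x m matrix of correlated responses

# Least-squares estimate of B (equals the posterior mean under a flat prior).
B_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(np.round(B_hat - B, 2))        # should be near zero
```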
Statistics
Statistics (from German: ''Statistik'', "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments (Dodge, Y. (2006), ''The Oxford Dictionary of Statistical Terms'', Oxford University Press). When census data cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole.
The Annals Of Mathematical Statistics
The ''Annals of Mathematical Statistics'' was a peer-reviewed statistics journal published by the Institute of Mathematical Statistics from 1930 to 1972. It was superseded by the ''Annals of Statistics'' and the ''Annals of Probability''. In 1938, Samuel Wilks became editor-in-chief of the ''Annals'' and recruited a remarkable editorial staff: Fisher, Neyman, Cramér, Hotelling, Egon Pearson, Georges Darmois, Allen T. Craig, Deming, von Mises, H. L. Rietz, and Shewhart.
Matrix Normal Distribution
In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.

Definition

The probability density function for the random matrix \mathbf{X} (''n'' × ''p'') that follows the matrix normal distribution \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}) has the form:
: p(\mathbf{X}\mid\mathbf{M}, \mathbf{U}, \mathbf{V}) = \frac{\exp\left(-\frac{1}{2}\,\mathrm{tr}\left[\mathbf{V}^{-1}(\mathbf{X}-\mathbf{M})^\mathsf{T}\,\mathbf{U}^{-1}(\mathbf{X}-\mathbf{M})\right]\right)}{(2\pi)^{np/2}\,|\mathbf{V}|^{n/2}\,|\mathbf{U}|^{p/2}}
where \mathrm{tr} denotes trace and \mathbf{M} is ''n'' × ''p'', \mathbf{U} is ''n'' × ''n'' and \mathbf{V} is ''p'' × ''p'', and the density is understood as the probability density function with respect to the standard Lebesgue measure in \mathbb{R}^{n\times p}, i.e. the measure corresponding to integration with respect to dx_{11}\,dx_{21}\cdots dx_{n1}\,dx_{12}\cdots dx_{n2}\cdots dx_{np}. The matrix normal is related to the multivariate normal distribution in the following way:
:\mathbf{X} \sim \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}),
if and only if
:\mathrm{vec}(\mathbf{X}) \sim \mathcal{N}_{np}(\mathrm{vec}(\mathbf{M}), \mathbf{V} \otimes \mathbf{U}).
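A small sketch of this vec/Kronecker equivalence, assuming SciPy's matrix_normal and multivariate_normal distributions are available; the dimensions and parameter values below are arbitrary examples.

```python
import numpy as np
from scipy.stats import matrix_normal, multivariate_normal

rng = np.random.default_rng(1)
n, p = 3, 2
M = rng.normal(size=(n, p))
U = np.eye(n) + 0.3                      # n x n row covariance (positive definite)
V = np.array([[2.0, 0.5], [0.5, 1.0]])   # p x p column covariance

X = matrix_normal.rvs(mean=M, rowcov=U, colcov=V, random_state=rng)

# Same density evaluated two ways: matrix normal on X, or multivariate
# normal on vec(X) with covariance V kron U.  NumPy flattens row-major
# by default, so order='F' gives the column-stacking vec() of the identity.
lp_matrix = matrix_normal.logpdf(X, mean=M, rowcov=U, colcov=V)
lp_vec = multivariate_normal.logpdf(X.flatten(order='F'),
                                    mean=M.flatten(order='F'),
                                    cov=np.kron(V, U))
print(np.isclose(lp_matrix, lp_vec))     # True
```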
Outer Product
In linear algebra, the outer product of two coordinate vectors is a matrix. If the two vectors have dimensions ''n'' and ''m'', then their outer product is an ''n'' × ''m'' matrix. More generally, given two tensors (multidimensional arrays of numbers), their outer product is a tensor. The outer product of tensors is also referred to as their tensor product, and can be used to define the tensor algebra.

The outer product contrasts with:
* The dot product (a special case of "inner product"), which takes a pair of coordinate vectors as input and produces a scalar
* The Kronecker product, which takes a pair of matrices as input and produces a block matrix
* Standard matrix multiplication

Definition

Given two vectors of size m \times 1 and n \times 1 respectively
:\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_m \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}
their outer product, denoted \mathbf{u} \otimes \mathbf{v}, is defined as the m \times n matrix \mathbf{A} obtained by multiplying each element of \mathbf{u} by each element of \mathbf{v}:
:\mathbf{u} \otimes \mathbf{v} = \mathbf{A} = \begin{bmatrix} u_1 v_1 & u_1 v_2 & \dots & u_1 v_n \\ u_2 v_1 & u_2 v_2 & \dots & u_2 v_n \\ \vdots & \vdots & \ddots & \vdots \\ u_m v_1 & u_m v_2 & \dots & u_m v_n \end{bmatrix}
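As a quick illustration, a minimal sketch of the definition in Python with NumPy; the vectors here are arbitrary examples.

```python
import numpy as np

u = np.array([1, 2, 3])       # m = 3
v = np.array([4, 5])          # n = 2

# np.outer computes the m x n matrix with entries u[i] * v[j].
A = np.outer(u, v)
print(A)
# [[ 4  5]
#  [ 8 10]
#  [12 15]]

# Equivalent rank-1 construction: u v^T via broadcasting.
assert np.array_equal(A, u[:, None] * v[None, :])
```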
Normal Distribution
In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
: f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}
The parameter \mu is the mean or expectation of the distribution (and also its median and mode), while the parameter \sigma is its standard deviation. The variance of the distribution is \sigma^2. A random variable with a Gaussian distribution is said to be normally distributed, and is called a normal deviate. Normal distributions are important in statistics and are often used in the natural and social sciences to represent real-valued random variables whose distributions are not known. Their importance is partly due to the central limit theorem. It states that, under some conditions, the average of many samples (observations) of a random variable with finite mean and variance is itself a random variable whose distribution converges to a normal distribution as the number of samples increases.
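A minimal check of the density formula against a library implementation, assuming SciPy is available; the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

mu, sigma = 1.5, 2.0
x = np.linspace(-5, 8, 7)

# Density written exactly as in the formula above.
pdf_manual = np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# SciPy parameterizes the normal by loc (mean) and scale (standard deviation).
pdf_scipy = norm.pdf(x, loc=mu, scale=sigma)

print(np.allclose(pdf_manual, pdf_scipy))  # True
```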
Inverse-Wishart Distribution
In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution. We say \mathbf{X} follows an inverse Wishart distribution, denoted as \mathbf{X}\sim \mathcal{W}^{-1}(\boldsymbol\Psi,\nu), if its inverse \mathbf{X}^{-1} has a Wishart distribution \mathcal{W}(\boldsymbol\Psi^{-1}, \nu). Important identities have been derived for the inverse-Wishart distribution.

Density

The probability density function of the inverse Wishart is:
: f_{\mathbf{X}}(\mathbf{X}; \boldsymbol\Psi, \nu) = \frac{\left|\boldsymbol\Psi\right|^{\nu/2}}{2^{\nu p/2}\,\Gamma_p\!\left(\frac{\nu}{2}\right)} \left|\mathbf{X}\right|^{-(\nu+p+1)/2} e^{-\frac{1}{2}\mathrm{tr}(\boldsymbol\Psi\mathbf{X}^{-1})}
where \mathbf{X} and \boldsymbol\Psi are p\times p positive definite matrices, |\cdot| is the determinant, and \Gamma_p(\cdot) is the multivariate gamma function.

Theorems

Distribution of the inverse of a Wishart-distributed matrix

If \mathbf{A}\sim \mathcal{W}(\boldsymbol\Sigma,\nu) and \mathbf{A} is of size p \times p, then \mathbf{X}=\mathbf{A}^{-1} has an inverse Wishart distribution \mathbf{X}\sim \mathcal{W}^{-1}(\boldsymbol\Sigma^{-1},\nu).
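A sketch of the Wishart/inverse-Wishart relationship using SciPy's invwishart; the dimension, degrees of freedom, scale matrix, and sample count below are arbitrary choices, and the Monte Carlo averages are only approximately equal to their targets.

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
p, nu = 2, 7
Psi = np.array([[2.0, 0.3], [0.3, 1.0]])   # p x p positive-definite scale

# Draw X ~ W^{-1}(Psi, nu).  By the theorem above, X^{-1} ~ W(Psi^{-1}, nu).
draws = invwishart.rvs(df=nu, scale=Psi, size=20000, random_state=rng)

# Monte Carlo check of the Wishart mean: E[X^{-1}] = nu * Psi^{-1}.
inv_mean = np.mean([np.linalg.inv(X) for X in draws], axis=0)
print(np.round(inv_mean, 2))
print(np.round(nu * np.linalg.inv(Psi), 2))   # should be close

# The known inverse-Wishart mean, E[X] = Psi / (nu - p - 1), also matches:
print(np.round(draws.mean(axis=0), 2))
print(np.round(Psi / (nu - p - 1), 2))
```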
Vectorization (mathematics)
In mathematics, especially in linear algebra and matrix theory, the vectorization of a matrix is a linear transformation which converts the matrix into a column vector. Specifically, the vectorization of an ''m'' × ''n'' matrix ''A'', denoted vec(''A''), is the ''mn'' × 1 column vector obtained by stacking the columns of the matrix ''A'' on top of one another:
:\operatorname{vec}(A) = [a_{11}, \ldots, a_{m1}, a_{12}, \ldots, a_{m2}, \ldots, a_{1n}, \ldots, a_{mn}]^\mathsf{T}
Here, a_{ij} represents A(i,j) and the superscript ^\mathsf{T} denotes the transpose. Vectorization expresses, through coordinates, the isomorphism \mathbf{R}^{m\times n} := \mathbf{R}^m \otimes \mathbf{R}^n \cong \mathbf{R}^{mn} between these (i.e., matrices and vectors) as vector spaces. For example, for the 2×2 matrix A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, the vectorization is \operatorname{vec}(A) = \begin{bmatrix} a \\ c \\ b \\ d \end{bmatrix}. The connection between the vectorization of ''A'' and the vectorization of its transpose is given by the commutation matrix.

Compatibility with Kronecker products

The vectorization is frequently used together with the Kronecker product to express matrix multiplication as a linear transformation on matrices. In particular,
:\operatorname{vec}(ABC) = (C^\mathsf{T} \otimes A)\operatorname{vec}(B)
for matrices ''A'', ''B'', and ''C'' of compatible dimensions.
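A quick numerical check of the Kronecker identity, assuming NumPy; the matrices are random examples, and order='F' (column-major) flattening reproduces the column-stacking vec.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(2, 3))
B = rng.normal(size=(3, 4))
C = rng.normal(size=(4, 5))

def vec(M):
    # Stack columns: NumPy flattens row-major by default, so ask for
    # Fortran (column-major) order to match vec().
    return M.flatten(order='F')

lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
print(np.allclose(lhs, rhs))  # True
```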
Kronecker Product
In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called the matrix direct product. The Kronecker product is named after the German mathematician Leopold Kronecker (1823–1891), even though there is little evidence that he was the first to define and use it.
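To show the block structure, a small NumPy sketch comparing np.kron with the explicit definition in which the (i, j) block of A ⊗ B is a_ij·B; the matrices are arbitrary examples.

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 5],
              [6, 7]])

# Definition: the (i, j) block of A kron B is a_ij * B.
blocks = [[A[i, j] * B for j in range(A.shape[1])] for i in range(A.shape[0])]
manual = np.block(blocks)

print(np.array_equal(manual, np.kron(A, B)))  # True
print(manual)
# [[ 0  5  0 10]
#  [ 6  7 12 14]
#  [ 0 15  0 20]
#  [18 21 24 28]]
```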
Bayesian Linear Regression
Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well as other parameters describing the distribution of the regressand) and ultimately allowing the out-of-sample prediction of the regressand (often labelled y) ''conditional on'' observed values of the regressors (usually X). The simplest and most widely used version of this model is the ''normal linear model'', in which y given X is distributed Gaussian. In this model, and under a particular choice of prior probabilities for the parameters, so-called conjugate priors, the posterior can be found analytically. With more arbitrarily chosen priors, the posteriors generally have to be approximated.

Model setup

Consider a standard linear regression problem, in which for i = 1, \ldots, n we specify the mean of the conditional distribution of y_i given a k \times 1 predictor vector \mathbf{x}_i:
: y_i = \mathbf{x}_i^\mathsf{T}\boldsymbol\beta + \varepsilon_i,
where \boldsymbol\beta is a k \times 1 vector, and the \varepsilon_i are independent and identically normally distributed random variables.
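A minimal sketch of the conjugate update for the normal linear model, assuming the noise variance is known and the prior on \boldsymbol\beta is Gaussian; the prior parameters, dimensions, and data below are arbitrary examples, and the full treatment with unknown variance (a normal-inverse-gamma prior) follows the same pattern.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, sigma2 = 100, 3, 0.5**2

X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Gaussian prior: beta ~ N(mu0, inv(Lambda0)), with noise variance known.
mu0 = np.zeros(k)
Lambda0 = np.eye(k)                       # prior precision

# Conjugate update: the posterior is Gaussian with
#   Lambda_n = Lambda0 + X^T X / sigma^2
#   mu_n     = Lambda_n^{-1} (Lambda0 mu0 + X^T y / sigma^2)
Lambda_n = Lambda0 + X.T @ X / sigma2
mu_n = np.linalg.solve(Lambda_n, Lambda0 @ mu0 + X.T @ y / sigma2)

print(np.round(mu_n, 2))                  # posterior mean, near beta_true
```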
Pseudoinverse
In mathematics, and in particular, algebra, a generalized inverse (or, g-inverse) of an element ''x'' is an element ''y'' that has some properties of an inverse element but not necessarily all of them. The purpose of constructing a generalized inverse of a matrix is to obtain a matrix that can serve as an inverse in some sense for a wider class of matrices than invertible matrices. Generalized inverses can be defined in any mathematical structure that involves associative multiplication, that is, in a semigroup. This article describes generalized inverses of a matrix A. A matrix A^\mathrm{g} \in \mathbb{R}^{n\times m} is a generalized inverse of a matrix A \in \mathbb{R}^{m\times n} if AA^\mathrm{g}A = A. A generalized inverse exists for an arbitrary matrix, and when a matrix has a regular inverse, this inverse is its unique generalized inverse.

Motivation

Consider the linear system
:Ax = y
where A is an n \times m matrix and y \in \mathcal{R}(A), the column space of A. If A is nonsingular (which implies n = m), then x = A^{-1}y will be the solution of the system.
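A small NumPy check of the defining identity A A^g A = A, using the Moore–Penrose pseudoinverse (one particular generalized inverse) on a singular matrix; the matrix and right-hand side are arbitrary examples.

```python
import numpy as np

# A singular (rank-1) matrix: no regular inverse exists.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# The Moore-Penrose pseudoinverse is one generalized inverse of A.
A_g = np.linalg.pinv(A)

# Defining property of a generalized inverse: A A^g A = A.
print(np.allclose(A @ A_g @ A, A))        # True

# For y in the column space of A, x = A^g y solves A x = y.
y = A @ np.array([3.0, -1.0])             # guaranteed to lie in R(A)
x = A_g @ y
print(np.allclose(A @ x, y))              # True
```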
Linear Least Squares (mathematics)
Linear least squares (LLS) is the least squares approximation of linear functions to data. It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and generalized (correlated) residuals. Numerical methods for linear least squares include inverting the matrix of the normal equations and orthogonal decomposition methods.

Main formulations

The three main linear least squares formulations are:
* Ordinary least squares (OLS) is the most common estimator. OLS estimates are commonly used to analyze both experimental and observational data. The OLS method minimizes the sum of squared residuals, and leads to a closed-form expression for the estimated value of the unknown parameter vector ''β'': \hat{\boldsymbol\beta} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1} \mathbf{X}^\mathsf{T} \mathbf{y}, where \mathbf{y} is a vector whose ''i''th element is the ''i''th observation of the dependent variable, and \mathbf{X} is a matrix whose ''ij'' element is the ''i''th observation of the ''j''th independent variable.
* Weighted least squares (WLS) is used when heteroscedasticity is present in the error terms of the model.
* Generalized least squares (GLS) is an extension of the OLS method that allows efficient estimation of ''β'' when heteroscedasticity, correlations, or both are present among the error terms of the model.
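A sketch of the closed-form OLS expression in NumPy, checked against np.linalg.lstsq; the data are simulated examples. In practice an orthogonal-decomposition solver such as lstsq is preferred over forming the normal equations, which squares the condition number of the problem.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + slope
beta = np.array([2.0, -3.0])
y = X @ beta + 0.1 * rng.normal(size=n)

# Normal-equations form of the closed-form OLS expression.
beta_normal = np.linalg.solve(X.T @ X, X.T @ y)

# Numerically preferred: orthogonal-decomposition-based solver.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_normal, beta_lstsq))  # True
print(np.round(beta_normal, 2))              # near [2.0, -3.0]
```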