Matrix Normal Distribution
In statistics, the matrix normal distribution or matrix Gaussian distribution is a probability distribution that is a generalization of the multivariate normal distribution to matrix-valued random variables.

Definition

The probability density function for the random matrix X (''n'' × ''p'') that follows the matrix normal distribution \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}) has the form:

: p(\mathbf{X}\mid\mathbf{M}, \mathbf{U}, \mathbf{V}) = \frac{\exp\left( -\tfrac{1}{2}\, \mathrm{tr}\left[ \mathbf{V}^{-1} (\mathbf{X} - \mathbf{M})^{\mathsf{T}} \mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \right] \right)}{(2\pi)^{np/2} |\mathbf{V}|^{n/2} |\mathbf{U}|^{p/2}}

where \mathrm{tr} denotes the trace, M is ''n'' × ''p'', U is ''n'' × ''n'' and V is ''p'' × ''p'', and the density is understood as the probability density function with respect to the standard Lebesgue measure in \mathbb{R}^{n\times p}, i.e. the measure corresponding to integration with respect to dx_{11}\, dx_{21} \dots dx_{n1}\, dx_{12} \dots dx_{n2} \dots dx_{np}.

The matrix normal is related to the multivariate normal distribution in the following way:

: \mathbf{X} \sim \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V})

if and only if

: \mathrm{vec}(\mathbf{X}) \sim \mathcal{N}_{np}(\mathrm{vec}(\mathbf{M}), \mathbf{V} \otimes \mathbf{U}),

where \otimes denotes the Kronecker product and \mathrm{vec}(\mathbf{M}) the vectorization of \mathbf{M}.
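To make the Kronecker structure concrete, here is a minimal sampling sketch in Python (assuming NumPy; the function name sample_matrix_normal is illustrative, not a library API). It uses the standard fact that if Z has i.i.d. standard normal entries and U = A A^T, V = B B^T, then M + A Z B^T has the \mathcal{MN}_{n\times p}(\mathbf{M}, \mathbf{U}, \mathbf{V}) distribution:

```python
import numpy as np

def sample_matrix_normal(M, U, V, rng=None):
    """Draw one X ~ MN(M, U, V) via X = M + A @ Z @ B.T, where U = A A^T and V = B B^T."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = M.shape
    A = np.linalg.cholesky(U)          # n x n factor of the among-row covariance
    B = np.linalg.cholesky(V)          # p x p factor of the among-column covariance
    Z = rng.standard_normal((n, p))    # i.i.d. N(0, 1) entries
    return M + A @ Z @ B.T

# Example: a 3 x 2 matrix normal with simple covariances
M = np.zeros((3, 2))
U = np.eye(3)
V = np.array([[1.0, 0.5],
              [0.5, 1.0]])
X = sample_matrix_normal(M, U, V)
```

Equivalently, by the vec relationship above, vec(X) could be drawn from a np-dimensional multivariate normal with covariance V ⊗ U; the factored form avoids building that np × np matrix.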
Location Parameter
In statistics, a location parameter of a probability distribution is a scalar- or vector-valued parameter x_0 which determines the "location" or shift of the distribution. In the literature of location parameter estimation, probability distributions with such a parameter are formally defined in one of the following equivalent ways:

* having a probability density function or probability mass function f(x - x_0); or
* having a cumulative distribution function F(x - x_0); or
* being defined as resulting from the random variable transformation x_0 + X, where X is a random variable with a certain, possibly unknown, distribution.

A direct example of a location parameter is the parameter \mu of the normal distribution. To see this, note that the probability density function f(x \mid \mu, \sigma) of a normal distribution \mathcal{N}(\mu,\sigma^2) can have the parameter \mu factored out and be written as:

: g(x' = x - \mu \mid \sigma) = \frac{1}{\sqrt{2\pi}\sigma} \exp\left(-\frac{x'^2}{2\sigma^2}\right)
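As a quick numerical check of these equivalent definitions, the sketch below (assuming NumPy and SciPy) verifies that the \mathcal{N}(x_0, 1) density is the standard normal density evaluated at x - x_0, and that adding x_0 to samples from the base distribution shifts its location:

```python
import numpy as np
from scipy.stats import norm

x0 = 2.0                       # the location parameter
x = np.linspace(-4.0, 8.0, 50)

# Definition via the density: f(x | x0) = f(x - x0) for the base (standard) density.
lhs = norm.pdf(x, loc=x0, scale=1.0)
rhs = norm.pdf(x - x0, loc=0.0, scale=1.0)
assert np.allclose(lhs, rhs)

# Definition via the transformation x0 + X: samples shift by x0.
rng = np.random.default_rng(0)
shifted = x0 + rng.standard_normal(10_000)
print(shifted.mean())          # approximately x0
```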
Rank (Linear Algebra)
In linear algebra, the rank of a matrix A is the dimension of the vector space generated (or spanned) by its columns. This corresponds to the maximal number of linearly independent columns of A. This, in turn, is identical to the dimension of the vector space spanned by its rows. Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by A. There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics. The rank is commonly denoted by rank(A) or rk(A); sometimes the parentheses are not written, as in rk A. Alternative notation includes \rho(\Phi).

Main definitions

In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see Alternative definitions for several of these. The column rank of A is the dimension of the column space of A, while the row rank of A is the dimension of the row space of A. A fundamental result in linear algebra is that the column rank and the row rank are always equal.
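In floating-point practice, rank is computed from the singular value decomposition rather than by Gaussian elimination; NumPy's matrix_rank counts singular values above a tolerance. A short illustration:

```python
import numpy as np

# rank = number of linearly independent columns (equivalently, rows)
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],    # 2x the first row, so it adds nothing
              [0.0, 1.0, 1.0]])

print(np.linalg.matrix_rank(A))  # 2

# matrix_rank is SVD-based: it counts singular values above a tolerance,
# which is the numerically robust way to decide rank in floating point.
s = np.linalg.svd(A, compute_uv=False)
print(s)                         # one singular value is (numerically) zero
```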
Random Matrices
In probability theory and mathematical physics, a random matrix is a matrix-valued random variable; that is, a matrix in which some or all of its entries are sampled randomly from a probability distribution. Random matrix theory (RMT) is the study of properties of random matrices, often as they become large. RMT provides techniques like mean-field theory, diagrammatic methods, the cavity method, or the replica method to compute quantities like traces, spectral densities, or scalar products between eigenvectors. Many physical phenomena, such as the spectrum of nuclei of heavy atoms, the thermal conductivity of a lattice, or the emergence of quantum chaos, can be modeled mathematically as problems concerning large, random matrices.

Applications

Physics

In nuclear physics, random matrices were introduced by Eugene Wigner to model the nuclei of heavy atoms. Wigner postulated that the spacings between the lines in the spectrum of a heavy atom nucleus should resemble the spacings between the eigenvalues of a random matrix.
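A minimal numerical illustration of the spectral side of RMT (assuming NumPy; the normalization chosen below is one common convention): sample a large symmetric Gaussian matrix and inspect its eigenvalues, whose empirical density approaches the Wigner semicircle law supported on [-2, 2] under this scaling:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# GOE-like ensemble: symmetrize an i.i.d. Gaussian matrix and rescale so the
# limiting spectral density is the semicircle on [-2, 2].
G = rng.standard_normal((n, n))
H = (G + G.T) / np.sqrt(2 * n)

eigvals = np.linalg.eigvalsh(H)
print(eigvals.min(), eigvals.max())   # close to -2 and +2 for large n
```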
John Wiley & Sons
John Wiley & Sons, Inc., commonly known as Wiley, is an American multinational publishing company that focuses on academic publishing and instructional materials. The company was founded in 1807 and produces books, journals, and encyclopedias, in print and electronically, as well as online products and services, training materials, and educational materials for undergraduate, graduate, and continuing education students.

History

The company was established in 1807 when Charles Wiley opened a print shop in Manhattan. The company was the publisher of 19th-century American literary figures like James Fenimore Cooper, Washington Irving, Herman Melville, and Edgar Allan Poe, as well as of legal, religious, and other non-fiction titles. The firm took its current name in 1865. Wiley later shifted its focus to scientific, technical, and engineering subject areas, abandoning its literary interests. Wiley's son John ...
Journal Of Statistical Computation And Simulation
The ''Journal of Statistical Computation and Simulation'' is a peer-reviewed scientific journal that covers computational statistics. It is published by Taylor & Francis and was established in 1972. The editors-in-chief are Richard Krutchkoff (Virginia Polytechnic Institute and State University, Blacksburg) and Andrei Volodin (University of Regina).

Abstracting and indexing

The journal is abstracted and indexed in:
* Current Index to Statistics
* Science Citation Index Expanded
* Zentralblatt MATH

According to the ''Journal Citation Reports'', the journal has a 2018 impact factor of 0.767.
Biometrika
''Biometrika'' is a peer-reviewed scientific journal published by Oxford University Press for the Biometrika Trust. The editor-in-chief is Paul Fearnhead (Lancaster University). The principal focus of this journal is theoretical statistics. It was established in 1901 and originally appeared quarterly. It changed to three issues per year in 1977 but returned to quarterly publication in 1992.

History

''Biometrika'' was established in 1901 by Francis Galton, Karl Pearson, and Raphael Weldon to promote the study of biometrics. The history of ''Biometrika'' is covered by Cox (2001). The name of the journal was chosen by Pearson, but Francis Edgeworth insisted that it be spelt with a "k" and not a "c". Since the 1930s, it has been a journal for statistical theory and methodology. Galton's role in the journal was essentially that of a patron, and the journal was run by Pearson and Weldon, and after Weldon's death in 1906 by Pearson alone until he died in 1936. In the early days, the Ameri ...
Multivariate Normal Distribution
In probability theory and statistics, the multivariate normal distribution, multivariate Gaussian distribution, or joint normal distribution is a generalization of the one-dimensional (univariate) normal distribution to higher dimensions. One definition is that a random vector is said to be ''k''-variate normally distributed if every linear combination of its ''k'' components has a univariate normal distribution. Its importance derives mainly from the multivariate central limit theorem. The multivariate normal distribution is often used to describe, at least approximately, any set of (possibly) correlated real-valued random variables, each of which clusters around a mean value.

Definitions

Notation and parametrization

The multivariate normal distribution of a ''k''-dimensional random vector \mathbf{X} = (X_1,\ldots,X_k)^{\mathsf{T}} can be written in the following notation:

: \mathbf{X}\ \sim\ \mathcal{N}(\boldsymbol\mu,\, \boldsymbol\Sigma),

or to make it explicitly known that \mathbf{X} is ''k''-dimensional,

: \mathbf{X}\ \sim\ \mathcal{N}_k(\boldsymbol\mu,\, \boldsymbol\Sigma).
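A short sketch with SciPy's multivariate_normal, illustrating the density and the defining property that every linear combination of the components is univariate normal (the moments quoted in the comments are the implied values a^T mu and a^T Sigma a for the numbers chosen here):

```python
import numpy as np
from scipy.stats import multivariate_normal

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 0.5]])
dist = multivariate_normal(mean=mu, cov=Sigma)

print(dist.pdf(np.array([0.5, 1.2])))   # density at a point

# Every linear combination a^T X of a k-variate normal is univariate normal,
# with mean a^T mu and variance a^T Sigma a.
rng = np.random.default_rng(0)
samples = dist.rvs(size=100_000, random_state=rng)
a = np.array([1.0, -2.0])
proj = samples @ a
print(proj.mean(), proj.var())          # approximately -2.0 and 2.8
```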
Matrix T-distribution
In statistics, the matrix ''t''-distribution (or matrix variate ''t''-distribution) is the generalization of the multivariate ''t''-distribution from vectors to matrices.Zhu, Shenghuo, Kai Yu, and Yihong Gong (2007). "Predictive Matrix-Variate ''t'' Models." In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, ''NIPS '07: Advances in Neural Information Processing Systems 20'', pages 1721–1728. MIT Press, Cambridge, MA, 2008. The notation is changed a bit in this article for consistency with the matrix normal distribution article. The matrix ''t''-distribution shares the same relationship with the multivariate ''t''-distribution that the matrix normal distribution shares with the multivariate normal distribution: if the matrix has only one row, or only one column, the distributions become equivalent to the corresponding (vector-)multivariate distribution. The matrix ''t''-distribution is the compound distribution that results from an infinite mixture of a matrix normal distribution with an inverse Wishart distribution placed over either of its covariance matrices.
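The compound construction can be sketched directly (assuming NumPy and SciPy; the particular choice below, mixing the among-row covariance over an inverse Wishart while holding the column covariance fixed, and the degrees-of-freedom value, are illustrative assumptions rather than a fixed convention):

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)
n, p = 3, 2
M = np.zeros((n, p))
V = np.eye(p)                      # among-column covariance, held fixed

# Step 1: draw the among-row covariance from an inverse Wishart.
Sigma = invwishart.rvs(df=n + 2, scale=np.eye(n), random_state=rng)

# Step 2: draw X ~ MN(M, Sigma, V) given that covariance; marginally over
# Sigma, X follows a matrix t-distribution.
A = np.linalg.cholesky(Sigma)
B = np.linalg.cholesky(V)
X = M + A @ rng.standard_normal((n, p)) @ B.T
```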
Inverse-Wishart Distribution
In statistics, the inverse Wishart distribution, also called the inverted Wishart distribution, is a probability distribution defined on real-valued positive-definite matrices. In Bayesian statistics it is used as the conjugate prior for the covariance matrix of a multivariate normal distribution. We say \mathbf{X} follows an inverse Wishart distribution, denoted as \mathbf{X}\sim \mathcal{W}^{-1}(\mathbf{\Psi},\nu), if its inverse \mathbf{X}^{-1} has a Wishart distribution \mathcal{W}(\mathbf{\Psi}^{-1}, \nu). Important identities have been derived for the inverse-Wishart distribution.

Density

The probability density function of the inverse Wishart is:

: f_{\mathbf{X}}(\mathbf{X}; \mathbf{\Psi}, \nu) = \frac{\left|\mathbf{\Psi}\right|^{\nu/2}}{2^{\nu p/2}\,\Gamma_p\!\left(\tfrac{\nu}{2}\right)} \left|\mathbf{X}\right|^{-(\nu+p+1)/2} e^{-\tfrac{1}{2}\mathrm{tr}(\mathbf{\Psi}\mathbf{X}^{-1})}

where \mathbf{X} and \mathbf{\Psi} are p\times p positive definite matrices, |\cdot| is the determinant, and \Gamma_p(\cdot) is the multivariate gamma function.

Theorems

Distribution of the inverse of a Wishart-distributed matrix

If \mathbf{A} \sim \mathcal{W}(\mathbf{\Sigma},\nu) and \mathbf{A} is of size p \times p, then \mathbf{X}=\mathbf{A}^{-1} has an inverse Wishart distribution \mathbf{X}\sim \mathcal{W}^{-1}(\mathbf{\Sigma}^{-1},\nu).
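A numerical sketch of the inversion relationship (assuming SciPy's wishart and invwishart, whose df/scale parametrization is used as-is): draw A ~ W(Ψ⁻¹, ν), invert it, and evaluate the inverse-Wishart density at the result:

```python
import numpy as np
from scipy.stats import wishart, invwishart

p, nu = 3, 7
Psi = np.eye(p)
rng = np.random.default_rng(3)

# If A ~ W(Psi^{-1}, nu) then X = A^{-1} ~ W^{-1}(Psi, nu).
A = wishart.rvs(df=nu, scale=np.linalg.inv(Psi), random_state=rng)
X = np.linalg.inv(A)

# Density of W^{-1}(Psi, nu) evaluated at X:
print(invwishart.pdf(X, df=nu, scale=Psi))
```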
Wishart Distribution
In statistics, the Wishart distribution is a generalization of the gamma distribution to multiple dimensions. It is named in honor of John Wishart, who first formulated the distribution in 1928. Other names include Wishart ensemble (in random matrix theory, probability distributions over matrices are usually called "ensembles") or Wishart–Laguerre ensemble (since its eigenvalue distribution involves Laguerre polynomials), or LOE, LUE, LSE (in analogy with the Gaussian ensembles GOE, GUE, GSE). It is a family of probability distributions defined over symmetric, positive-definite random matrices (i.e. matrix-valued random variables). These distributions are of great importance in the estimation of covariance matrices in multivariate statistics. In Bayesian statistics, the Wishart distribution is the conjugate prior of the inverse covariance matrix of a multivariate normal random vector.
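A short sampling sketch with SciPy (the identity E[S] = ν Σ used in the comment is a standard property of the Wishart distribution with ν degrees of freedom and scale Σ):

```python
import numpy as np
from scipy.stats import wishart

p, nu = 2, 10
Sigma = np.array([[1.0, 0.4],
                  [0.4, 2.0]])
rng = np.random.default_rng(4)

S = wishart.rvs(df=nu, scale=Sigma, random_state=rng)
print(np.linalg.eigvalsh(S))          # all positive: S is positive definite

# Monte Carlo check of the mean: E[S] = nu * Sigma.
draws = wishart.rvs(df=nu, scale=Sigma, size=20_000, random_state=rng)
print(draws.mean(axis=0))             # approximately nu * Sigma
```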
Cholesky Decomposition
In linear algebra, the Cholesky decomposition or Cholesky factorization is a decomposition of a Hermitian, positive-definite matrix into the product of a lower triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions, e.g., Monte Carlo simulations. It was discovered by André-Louis Cholesky for real matrices and posthumously published in 1924. When it is applicable, the Cholesky decomposition is roughly twice as efficient as the LU decomposition for solving systems of linear equations.

Statement

The Cholesky decomposition of a Hermitian positive-definite matrix \mathbf{A} is a decomposition of the form

: \mathbf{A} = \mathbf{L}\mathbf{L}^{*},

where \mathbf{L} is a lower triangular matrix with real and positive diagonal entries, and \mathbf{L}^{*} denotes the conjugate transpose of \mathbf{L}. Every Hermitian positive-definite matrix (and thus also every real-valued symmetric positive-definite matrix) has a unique Cholesky decomposition. The converse holds trivially: if \mathbf{A} can be written as \mathbf{L}\mathbf{L}^{*} for some invertible \mathbf{L}, lower triangular or otherwise, then \mathbf{A} is Hermitian and positive definite.
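A minimal illustration with NumPy and SciPy: factor a small symmetric positive-definite matrix, verify A = L Lᵀ, and use the factorization to solve a linear system via two triangular solves (this is where the efficiency advantage over LU comes from):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])       # symmetric positive definite
b = np.array([1.0, 2.0])

L = np.linalg.cholesky(A)        # lower triangular with positive diagonal
assert np.allclose(L @ L.T, A)   # A = L L^T (real case: L* = L^T)

# Solve A x = b using the factorization: forward- then back-substitution.
c, low = cho_factor(A)
x = cho_solve((c, low), b)
assert np.allclose(A @ x, b)
```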
Maximum Likelihood Estimate
In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data. This is achieved by maximizing a likelihood function so that, under the assumed statistical model, the observed data is most probable. The point in the parameter space that maximizes the likelihood function is called the maximum likelihood estimate. The logic of maximum likelihood is both intuitive and flexible, and as such the method has become a dominant means of statistical inference. If the likelihood function is differentiable, the derivative test for finding maxima can be applied. In some cases, the first-order conditions of the likelihood function can be solved analytically; for instance, the ordinary least squares estimator for a linear regression model maximizes the likelihood when the random errors are assumed to have normal distributions with the same variance. From the perspective of Bayesian inference, MLE is generally equivalent to maximum a posteriori (MAP) estimation with a prior that is uniform in the region of interest.
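A sketch of the numerical route (assuming NumPy and SciPy; reparametrizing by log σ is a common trick to keep the scale positive, not a requirement of MLE): minimize the negative normal log-likelihood and compare with the closed-form estimates that solve the first-order conditions:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(5)
data = rng.normal(loc=3.0, scale=1.5, size=500)

def neg_log_likelihood(theta):
    # theta = (mu, log sigma); exponentiating keeps sigma > 0
    mu, log_sigma = theta
    return -norm.logpdf(data, loc=mu, scale=np.exp(log_sigma)).sum()

result = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]))
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)

# For the normal, the first-order conditions solve analytically: the MLEs are
# the sample mean and the (biased, ddof=0) sample standard deviation.
print(data.mean(), data.std())
```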