Bayesian Ridge Regression

Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well as other parameters describing the distribution of the regressand) and ultimately allowing the out-of-sample prediction of the regressand (often labelled y) ''conditional on'' observed values of the regressors (usually X). The simplest and most widely used version of this model is the ''normal linear model'', in which y given X is distributed Gaussian. In this model, and under a particular choice of prior probabilities for the parameters (so-called conjugate priors), the posterior can be found analytically. With more arbitrarily chosen priors, the posteriors generally have to be approximated.


Model setup

Consider a standard linear regression problem, in which for i = 1, \ldots, n we specify the mean of the conditional distribution of y_i given a k \times 1 predictor vector \mathbf{x}_i:

y_i = \mathbf{x}_i^\mathsf{T} \boldsymbol\beta + \varepsilon_i,

where \boldsymbol\beta is a k \times 1 vector, and the \varepsilon_i are independent and identically normally distributed random variables:

\varepsilon_i \sim N(0, \sigma^2).

This corresponds to the following likelihood function:

\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right).

The ordinary least squares solution estimates the coefficient vector using the Moore–Penrose pseudoinverse:

\hat{\boldsymbol\beta} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y},

where \mathbf{X} is the n \times k design matrix, each row of which is a predictor vector \mathbf{x}_i^\mathsf{T}, and \mathbf{y} is the column n-vector [y_1 \; \cdots \; y_n]^\mathsf{T}.

This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about \boldsymbol\beta. In the Bayesian approach, the data are supplemented with additional information in the form of a prior probability distribution. The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters \boldsymbol\beta and \sigma. The prior can take different functional forms depending on the domain and the information that is available ''a priori''.

Since the data comprise both \mathbf{X} and \mathbf{y}, the focus only on the distribution of \mathbf{y} conditional on \mathbf{X} needs justification. In fact, a "full" Bayesian analysis would require a joint likelihood \rho(\mathbf{y}, \mathbf{X} \mid \boldsymbol\beta, \sigma^2, \gamma) along with a prior \rho(\boldsymbol\beta, \sigma^2, \gamma), where \gamma symbolizes the parameters of the distribution for \mathbf{X}. Only under the assumption of (weak) exogeneity can the joint likelihood be factored into \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\,\rho(\mathbf{X} \mid \gamma). The latter part is usually ignored under the assumption of disjoint parameter sets. Moreover, under classical assumptions \mathbf{X} is considered chosen (for example, in a designed experiment) and therefore has a known probability without parameters.
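The least-squares estimate above can be computed directly. The following sketch (Python with NumPy; the data are synthetic and purely illustrative) generates a small data set from the model y_i = \mathbf{x}_i^\mathsf{T}\boldsymbol\beta + \varepsilon_i and recovers \hat{\boldsymbol\beta} via the Moore–Penrose pseudoinverse.

```python
import numpy as np

# Synthetic data, purely for illustration: n observations, k predictors.
rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))                  # n x k design matrix
beta_true = np.array([1.5, -2.0, 0.5])       # "true" coefficients (assumed for the example)
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Ordinary least squares via the Moore-Penrose pseudoinverse:
# beta_hat = (X^T X)^{-1} X^T y
beta_hat = np.linalg.pinv(X) @ y
print(beta_hat)   # should be close to beta_true
```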


With conjugate priors


Conjugate prior distribution

For an arbitrary prior distribution, there may be no analytical solution for the posterior distribution. In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.

A prior \rho(\boldsymbol\beta, \sigma^2) is conjugate to this likelihood function if it has the same functional form with respect to \boldsymbol\beta and \sigma. Since the log-likelihood is quadratic in \boldsymbol\beta, the log-likelihood is re-written such that the likelihood becomes normal in (\boldsymbol\beta - \hat{\boldsymbol\beta}). Write

(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) = [(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) + (\mathbf{X}\hat{\boldsymbol\beta} - \mathbf{X}\boldsymbol\beta)]^\mathsf{T}[(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) + (\mathbf{X}\hat{\boldsymbol\beta} - \mathbf{X}\boldsymbol\beta)] = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) + (\boldsymbol\beta - \hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta}) + \underbrace{2(\mathbf{X}\hat{\boldsymbol\beta} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})}_{=\,0} = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) + (\boldsymbol\beta - \hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta})\,.

The likelihood is now re-written as

\rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2) \propto (\sigma^2)^{-v/2} \exp\left(-\frac{v s^2}{2\sigma^2}\right)(\sigma^2)^{-(n-v)/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X})(\boldsymbol\beta - \hat{\boldsymbol\beta})\right),

where v s^2 = (\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta})^\mathsf{T}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol\beta}) and v = n - k, where k is the number of regression coefficients.

This suggests a form for the prior:

\rho(\boldsymbol\beta, \sigma^2) = \rho(\sigma^2)\rho(\boldsymbol\beta \mid \sigma^2),

where \rho(\sigma^2) is an inverse-gamma distribution

\rho(\sigma^2) \propto (\sigma^2)^{-\frac{v_0}{2} - 1} \exp\left(-\frac{v_0 s_0^2}{2\sigma^2}\right).

In the notation introduced in the inverse-gamma distribution article, this is the density of an \text{Inv-Gamma}(a_0, b_0) distribution with a_0 = \tfrac{v_0}{2} and b_0 = \tfrac{1}{2} v_0 s_0^2, with v_0 and s_0^2 as the prior values of v and s^2, respectively. Equivalently, it can also be described as a scaled inverse chi-squared distribution, \text{Scale-inv-}\chi^2(v_0, s_0^2).

Further, the conditional prior density \rho(\boldsymbol\beta \mid \sigma^2) is a normal distribution,

\rho(\boldsymbol\beta \mid \sigma^2) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right).

In the notation of the normal distribution, the conditional prior distribution is \mathcal{N}\left(\boldsymbol\mu_0, \sigma^2 \boldsymbol\Lambda_0^{-1}\right).
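As an illustration of this conjugate (normal-inverse-gamma) prior, the sketch below draws a joint sample (\boldsymbol\beta, \sigma^2) from \rho(\sigma^2)\rho(\boldsymbol\beta \mid \sigma^2); the hyperparameter values are arbitrary choices for the example, not part of the model above.

```python
import numpy as np

def sample_nig_prior(mu0, Lambda0, a0, b0, rng):
    """Draw (beta, sigma2) from the normal-inverse-gamma prior:
    sigma2 ~ Inv-Gamma(a0, b0), beta | sigma2 ~ N(mu0, sigma2 * Lambda0^{-1})."""
    # An inverse-gamma variate is the reciprocal of a gamma variate.
    sigma2 = 1.0 / rng.gamma(shape=a0, scale=1.0 / b0)
    beta = rng.multivariate_normal(mu0, sigma2 * np.linalg.inv(Lambda0))
    return beta, sigma2

rng = np.random.default_rng(1)
k = 3
mu0 = np.zeros(k)              # prior mean (illustrative)
Lambda0 = 2.0 * np.eye(k)      # prior precision matrix (illustrative)
a0, b0 = 2.0, 1.0              # inverse-gamma hyperparameters (illustrative)
beta_sample, sigma2_sample = sample_nig_prior(mu0, Lambda0, a0, b0, rng)
```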


Posterior distribution

With the prior now specified, the posterior distribution can be expressed as

\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^2)\,\rho(\boldsymbol\beta \mid \sigma^2)\,\rho(\sigma^2) \propto (\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta)\right) (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T} \boldsymbol\Lambda_0 (\boldsymbol\beta - \boldsymbol\mu_0)\right) (\sigma^2)^{-(a_0 + 1)} \exp\left(-\frac{b_0}{\sigma^2}\right).

With some re-arrangement, the posterior can be re-written so that the posterior mean \boldsymbol\mu_n of the parameter vector \boldsymbol\beta can be expressed in terms of the least squares estimator \hat{\boldsymbol\beta} and the prior mean \boldsymbol\mu_0, with the strength of the prior indicated by the prior precision matrix \boldsymbol\Lambda_0:

\boldsymbol\mu_n = (\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)^{-1}(\mathbf{X}^\mathsf{T}\mathbf{X}\hat{\boldsymbol\beta} + \boldsymbol\Lambda_0 \boldsymbol\mu_0).

To justify that \boldsymbol\mu_n is indeed the posterior mean, the quadratic terms in the exponential can be re-arranged as a quadratic form in \boldsymbol\beta - \boldsymbol\mu_n:

(\mathbf{y} - \mathbf{X}\boldsymbol\beta)^\mathsf{T}(\mathbf{y} - \mathbf{X}\boldsymbol\beta) + (\boldsymbol\beta - \boldsymbol\mu_0)^\mathsf{T}\boldsymbol\Lambda_0(\boldsymbol\beta - \boldsymbol\mu_0) = (\boldsymbol\beta - \boldsymbol\mu_n)^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n) + \mathbf{y}^\mathsf{T}\mathbf{y} - \boldsymbol\mu_n^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0\,.

Now the posterior can be expressed as a normal distribution times an inverse-gamma distribution:

\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto (\sigma^2)^{-k/2} \exp\left(-\frac{1}{2\sigma^2}(\boldsymbol\beta - \boldsymbol\mu_n)^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)(\boldsymbol\beta - \boldsymbol\mu_n)\right) (\sigma^2)^{-\frac{n + 2a_0 + 2}{2}} \exp\left(-\frac{2b_0 + \mathbf{y}^\mathsf{T}\mathbf{y} - \boldsymbol\mu_n^\mathsf{T}(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0)\boldsymbol\mu_n + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0}{2\sigma^2}\right).

Therefore, the posterior distribution can be parametrized as follows:

\rho(\boldsymbol\beta, \sigma^2 \mid \mathbf{y}, \mathbf{X}) \propto \rho(\boldsymbol\beta \mid \sigma^2, \mathbf{y}, \mathbf{X})\,\rho(\sigma^2 \mid \mathbf{y}, \mathbf{X}),

where the two factors correspond to the densities of \mathcal{N}\left(\boldsymbol\mu_n, \sigma^2\boldsymbol\Lambda_n^{-1}\right) and \text{Inv-Gamma}\left(a_n, b_n\right) distributions, with the parameters of these given by

\boldsymbol\Lambda_n = \mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0, \quad \boldsymbol\mu_n = \boldsymbol\Lambda_n^{-1}(\mathbf{X}^\mathsf{T}\mathbf{X}\hat{\boldsymbol\beta} + \boldsymbol\Lambda_0\boldsymbol\mu_0),
a_n = a_0 + \frac{n}{2}, \qquad b_n = b_0 + \frac{1}{2}(\mathbf{y}^\mathsf{T}\mathbf{y} + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0 - \boldsymbol\mu_n^\mathsf{T}\boldsymbol\Lambda_n\boldsymbol\mu_n),

which illustrates Bayesian inference being a compromise between the information contained in the prior and the information contained in the sample.
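The posterior parameters \boldsymbol\Lambda_n, \boldsymbol\mu_n, a_n, b_n are simple matrix expressions, so the conjugate update can be written in a few lines. A minimal sketch (Python with NumPy; variable names and hyperparameter values are illustrative assumptions, not part of the model):

```python
import numpy as np

def posterior_update(X, y, mu0, Lambda0, a0, b0):
    """Closed-form normal-inverse-gamma posterior for the conjugate model."""
    Lambda_n = X.T @ X + Lambda0                          # posterior precision of beta
    # X^T X beta_hat = X^T y, so the posterior mean simplifies to:
    mu_n = np.linalg.solve(Lambda_n, X.T @ y + Lambda0 @ mu0)
    a_n = a0 + len(y) / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    return mu_n, Lambda_n, a_n, b_n

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=n)
mu_n, Lambda_n, a_n, b_n = posterior_update(X, y, np.zeros(k), 2.0 * np.eye(k), 2.0, 1.0)
```

The posterior mean of \boldsymbol\beta is mu_n, and the posterior mean of \sigma^2 (for a_n > 1) is b_n / (a_n - 1).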


Model evidence

The model evidence p(\mathbf{y} \mid m) is the probability of the data given the model m. It is also known as the marginal likelihood, and as the ''prior predictive density''. Here, the model is defined by the likelihood function p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma) and the prior distribution on the parameters, i.e. p(\boldsymbol\beta, \sigma). The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayes factors. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating p(\mathbf{y}, \boldsymbol\beta, \sigma \mid \mathbf{X}) over all possible values of \boldsymbol\beta and \sigma:

p(\mathbf{y} \mid m) = \int p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma)\, p(\boldsymbol\beta, \sigma)\, d\boldsymbol\beta\, d\sigma.

This integral can be computed analytically and the solution is given in the following equation:

p(\mathbf{y} \mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det(\boldsymbol\Lambda_0)}{\det(\boldsymbol\Lambda_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}.

Here \Gamma denotes the gamma function. Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of \boldsymbol\beta and \sigma:

p(\mathbf{y} \mid m) = \frac{p(\boldsymbol\beta, \sigma \mid m)\, p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma, m)}{p(\boldsymbol\beta, \sigma \mid \mathbf{y}, \mathbf{X}, m)}.

Note that this equation follows from a re-arrangement of Bayes' theorem. Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
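Since the analytic evidence involves determinants, ratios of powers of b, and gamma functions, it is numerically safer to evaluate it on the log scale. A sketch of the closed form quoted above (Python with NumPy/SciPy; the function recomputes the posterior parameters internally so it is self-contained, and all names are illustrative):

```python
import numpy as np
from scipy.special import gammaln

def log_evidence(X, y, mu0, Lambda0, a0, b0):
    """Log marginal likelihood log p(y | m) for the conjugate model,
    following the closed-form expression in the text (illustrative sketch)."""
    n = len(y)
    Lambda_n = X.T @ X + Lambda0
    mu_n = np.linalg.solve(Lambda_n, X.T @ y + Lambda0 @ mu0)
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)
    _, logdet0 = np.linalg.slogdet(Lambda0)
    _, logdet_n = np.linalg.slogdet(Lambda_n)
    return (-0.5 * n * np.log(2.0 * np.pi)
            + 0.5 * (logdet0 - logdet_n)
            + a0 * np.log(b0) - a_n * np.log(b_n)
            + gammaln(a_n) - gammaln(a0))
```

Differences in log evidence between two candidate models give the log Bayes factor directly.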


Other cases

In general, it may be impossible or impractical to derive the posterior distribution analytically. However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling (Carlin and Louis (2008) and Gelman et al. (2003) explain how to use sampling methods for Bayesian linear regression), INLA, or variational Bayes.

The special case \boldsymbol\mu_0 = 0, \boldsymbol\Lambda_0 = c\mathbf{I} is called ridge regression.

A similar analysis can be performed for the general case of multivariate regression, and part of this provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.
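To see the ridge-regression correspondence concretely: with \boldsymbol\mu_0 = 0 and \boldsymbol\Lambda_0 = c\mathbf{I}, the posterior mean \boldsymbol\mu_n reduces to the ridge estimate (\mathbf{X}^\mathsf{T}\mathbf{X} + c\mathbf{I})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y}. A brief sketch (synthetic data and an illustrative value of c):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, c = 40, 3, 5.0
X = rng.normal(size=(n, k))
y = X @ np.array([0.7, -1.2, 2.0]) + rng.normal(scale=0.5, size=n)

# Ridge estimate with regularization strength c.
ridge = np.linalg.solve(X.T @ X + c * np.eye(k), X.T @ y)

# Posterior mean with mu_0 = 0 and Lambda_0 = c * I (the Lambda_0 @ mu_0 term vanishes).
mu0, Lambda0 = np.zeros(k), c * np.eye(k)
mu_n = np.linalg.solve(X.T @ X + Lambda0, X.T @ y + Lambda0 @ mu0)

assert np.allclose(ridge, mu_n)   # the two estimates coincide
```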


See also

* Bayes linear statistics
* Constrained least squares
* Regularized least squares
* Tikhonov regularization
* Spike and slab variable selection
* Bayesian interpretation of kernel regularization



External links

* Bayesian estimation of linear models (R programming Wikibook): Bayesian linear regression as implemented in R.