Ridge regression

Ridge regression (also known as Tikhonov regularization, named for Andrey Tikhonov) is a method of estimating the coefficients of multiple-regression models in scenarios where the independent variables are highly correlated. It has been used in many fields including econometrics, chemistry, and engineering. It is a method of regularization of ill-posed problems. It is particularly useful to mitigate the problem of multicollinearity in linear regression, which commonly occurs in models with large numbers of parameters. In general, the method provides improved efficiency in parameter estimation problems in exchange for a tolerable amount of bias (see bias–variance tradeoff). The theory was first introduced by Hoerl and Kennard in 1970 in their ''Technometrics'' papers "Ridge regressions: biased estimation of nonorthogonal problems" and "Ridge regressions: applications in nonorthogonal problems". Ridge regression was developed as a possible solution to the imprecision of least-squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). This provides a more precise estimate of the ridge parameters, as its variance and mean square estimator are often smaller than those of the least-squares estimators previously derived.


Overview

In the simplest case, the problem of a near-singular moment matrix \mathbf{X}^\mathsf{T}\mathbf{X} is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by

\hat{\boldsymbol\beta}_{R} = \left(\mathbf{X}^\mathsf{T}\mathbf{X} + \lambda \mathbf{I}\right)^{-1} \mathbf{X}^\mathsf{T}\mathbf{y},

where \mathbf{y} is the regressand, \mathbf{X} is the design matrix, \mathbf{I} is the identity matrix, and the ridge parameter \lambda \geq 0 serves as the constant shifting the diagonals of the moment matrix. It can be shown that this estimator is the solution to the least squares problem subject to the constraint \boldsymbol\beta^\mathsf{T}\boldsymbol\beta = c, which can be expressed as a Lagrangian minimization:

\hat{\boldsymbol\beta}_{R} = \operatorname{argmin}_{\boldsymbol\beta} \left(\mathbf{y} - \mathbf{X}\boldsymbol\beta\right)^\mathsf{T}\left(\mathbf{y} - \mathbf{X}\boldsymbol\beta\right) + \lambda\left(\boldsymbol\beta^\mathsf{T}\boldsymbol\beta - c\right),

which shows that \lambda is nothing but the Lagrange multiplier of the constraint. In fact, there is a one-to-one relationship between c and \lambda, and since, in practice, we do not know c, we define \lambda heuristically or find it via additional data-fitting strategies; see Determination of the Tikhonov factor. Note that, when \lambda = 0, in which case the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
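
The closed form above translates directly into code. The following is a minimal NumPy sketch, not taken from the original text; the function name ridge_estimator, the synthetic data, and the choices of \lambda are illustrative only.

```python
import numpy as np

def ridge_estimator(X, y, lam):
    """Simple ridge estimator: (X^T X + lambda I)^{-1} X^T y."""
    p = X.shape[1]
    # Solve the linear system rather than forming an explicit inverse.
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Two nearly collinear predictors make the moment matrix near-singular.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100)])
y = X @ np.array([1.0, 1.0]) + rng.normal(size=100)

print(ridge_estimator(X, y, lam=0.0))  # OLS: coefficients are unstable
print(ridge_estimator(X, y, lam=1.0))  # ridge: shrunk, stabilized
```

Increasing lam trades a small bias for a large reduction in variance, as described above.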


History

Tikhonov regularization was invented independently in many different contexts. It became widely known through its application to integral equations in the works of Andrey Tikhonov and David L. Phillips. Some authors use the term Tikhonov–Phillips regularization. The finite-dimensional case was expounded by Arthur E. Hoerl, who took a statistical approach, and by Manus Foster, who interpreted this method as a Wiener–Kolmogorov (Kriging) filter. Following Hoerl, it is known in the statistical literature as ridge regression, named after ridge analysis ("ridge" refers to the path from the constrained maximum).


Tikhonov regularization

Suppose that for a known real matrix A and vector \mathbf{b}, we wish to find a vector \mathbf{x} such that A\mathbf{x} = \mathbf{b}, where \mathbf{x} and \mathbf{b} may be of different sizes and A may be non-square. The standard approach is ordinary least squares linear regression. However, if no \mathbf{x} satisfies the equation, or more than one \mathbf{x} does (that is, the solution is not unique), the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined, system of equations. Most real-world phenomena have the effect of low-pass filters in the forward direction where A maps \mathbf{x} to \mathbf{b}. Therefore, in solving the inverse problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues/singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of \mathbf{x} that is in the null space of A, rather than allowing for a model to be used as a prior for \mathbf{x}. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as

\left\| A\mathbf{x} - \mathbf{b} \right\|_2^2,

where \left\| \cdot \right\|_2 is the Euclidean norm. In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:

\left\| A\mathbf{x} - \mathbf{b} \right\|_2^2 + \left\| \Gamma\mathbf{x} \right\|_2^2 = \left\| \begin{bmatrix} A \\ \Gamma \end{bmatrix} \mathbf{x} - \begin{bmatrix} \mathbf{b} \\ \boldsymbol{0} \end{bmatrix} \right\|_2^2

for some suitably chosen Tikhonov matrix \Gamma. In many cases, this matrix is chosen as a scalar multiple of the identity matrix (\Gamma = \alpha I), giving preference to solutions with smaller norms; this is known as L_2 regularization. In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by \hat{\mathbf{x}}, is given by

\hat{\mathbf{x}} = \left(A^\mathsf{T} A + \Gamma^\mathsf{T}\Gamma\right)^{-1} A^\mathsf{T}\mathbf{b} = \left( \begin{bmatrix} A \\ \Gamma \end{bmatrix}^\mathsf{T} \begin{bmatrix} A \\ \Gamma \end{bmatrix} \right)^{-1} \begin{bmatrix} A \\ \Gamma \end{bmatrix}^\mathsf{T} \begin{bmatrix} \mathbf{b} \\ \boldsymbol{0} \end{bmatrix}.

The effect of regularization may be varied by the scale of matrix \Gamma. For \Gamma = 0 this reduces to the unregularized least-squares solution, provided that \left(A^\mathsf{T} A\right)^{-1} exists. Note that in the case of a complex matrix A, as usual the transpose A^\mathsf{T} has to be replaced by the Hermitian transpose A^\mathsf{H}. L_2 regularization is used in many contexts aside from linear regression, such as classification with logistic regression or support vector machines, and matrix factorization.
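
The stacked form of the objective suggests a direct implementation: append \Gamma below A and zeros below \mathbf{b}, then solve an ordinary least-squares problem. A minimal sketch, assuming NumPy, with a first-difference smoothing operator as an example \Gamma (both the function name and the operator choice are illustrative, not from the original text):

```python
import numpy as np

def tikhonov(A, b, Gamma):
    """Solve min ||A x - b||^2 + ||Gamma x||^2 via the stacked least-squares form."""
    A_aug = np.vstack([A, Gamma])
    b_aug = np.concatenate([b, np.zeros(Gamma.shape[0])])
    x_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x_hat

n = 20
# First-difference operator: each row is e_{i+1} - e_i, penalizing jumps
# between neighboring components (a smoothness prior).
Gamma = np.diff(np.eye(n), axis=0)
# For plain ridge/L2 regularization use Gamma = alpha * np.eye(n) instead.
```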


Application to existing fit results

Since Tikhonov regularization simply adds a quadratic term to the objective function in optimization problems, it is possible to do so after the unregularised optimisation has taken place. For example, if the above problem with \Gamma = 0 yields the solution \hat{\mathbf{x}}_0, the solution in the presence of \Gamma \ne 0 can be expressed as \hat{\mathbf{x}} = B\hat{\mathbf{x}}_0, with the "regularisation matrix" B = \left(A^\mathsf{T} A + \Gamma^\mathsf{T}\Gamma\right)^{-1} A^\mathsf{T} A. If the parameter fit comes with a covariance matrix of the estimated parameter uncertainties V_0, then the regularisation matrix will be B = \left(V_0^{-1} + \Gamma^\mathsf{T}\Gamma\right)^{-1} V_0^{-1}, and the regularised result will have a new covariance V = B V_0 B^\mathsf{T}. In the context of arbitrary likelihood fits, this is valid as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularised result is small, one can regularise any result that is presented as a best-fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.
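
A sketch of this post-fit recipe, assuming only the best-fit point and its covariance are available; the function name regularise_fit and the variable names are illustrative, not from the original text:

```python
import numpy as np

def regularise_fit(x0, V0, Gamma):
    """Regularise an existing fit (x0, V0): B = (V0^{-1} + Gamma^T Gamma)^{-1} V0^{-1}."""
    V0_inv = np.linalg.inv(V0)
    B = np.linalg.solve(V0_inv + Gamma.T @ Gamma, V0_inv)
    x = B @ x0          # regularised best-fit point
    V = B @ V0 @ B.T    # regularised covariance
    return x, V
```

As noted above, this shortcut is only trustworthy while the quadratic approximation of the likelihood holds, i.e. while the regularised point stays close to the unregularised one.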


Generalized Tikhonov regularization

For general multivariate normal distributions for \mathbf{x} and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an \mathbf{x} to minimize

\left\| A\mathbf{x} - \mathbf{b} \right\|_P^2 + \left\| \mathbf{x} - \mathbf{x}_0 \right\|_Q^2,

where we have used \left\| \mathbf{x} \right\|_Q^2 to stand for the weighted norm squared \mathbf{x}^\mathsf{T} Q\mathbf{x} (compare with the Mahalanobis distance). In the Bayesian interpretation P is the inverse covariance matrix of \mathbf{b}, \mathbf{x}_0 is the expected value of \mathbf{x}, and Q is the inverse covariance matrix of \mathbf{x}. The Tikhonov matrix is then given as a factorization of the matrix Q = \Gamma^\mathsf{T}\Gamma (e.g. the Cholesky factorization) and is considered a whitening filter. This generalized problem has an optimal solution \mathbf{x}^* which can be written explicitly using the formula

\mathbf{x}^* = \left(A^\mathsf{T} P A + Q\right)^{-1} \left(A^\mathsf{T} P\mathbf{b} + Q\mathbf{x}_0\right),

or equivalently, when Q is not a null matrix:

\mathbf{x}^* = \mathbf{x}_0 + \left(A^\mathsf{T} P A + Q\right)^{-1} A^\mathsf{T} P \left(\mathbf{b} - A\mathbf{x}_0\right).
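
The second formula is convenient numerically because it computes a correction to \mathbf{x}_0 rather than rebuilding the solution from scratch. A minimal sketch, assuming NumPy (names illustrative):

```python
import numpy as np

def generalized_tikhonov(A, b, P, Q, x0):
    """x* = x0 + (A^T P A + Q)^{-1} A^T P (b - A x0)."""
    lhs = A.T @ P @ A + Q
    rhs = A.T @ P @ (b - A @ x0)
    return x0 + np.linalg.solve(lhs, rhs)
```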


Lavrentyev regularization

In some situations, one can avoid using the transpose A^\mathsf{T}, as proposed by Mikhail Lavrentyev. For example, if A is symmetric positive definite, i.e. A = A^\mathsf{T} > 0, so is its inverse A^{-1}, which can thus be used to set up the weighted norm squared \left\| \mathbf{x} \right\|_P^2 = \mathbf{x}^\mathsf{T} A^{-1}\mathbf{x} in the generalized Tikhonov regularization, leading to minimizing

\left\| A\mathbf{x} - \mathbf{b} \right\|_{A^{-1}}^2 + \left\| \mathbf{x} - \mathbf{x}_0 \right\|_Q^2

or, equivalently up to a constant term,

\mathbf{x}^\mathsf{T} \left(A + Q\right) \mathbf{x} - 2\,\mathbf{x}^\mathsf{T} \left(\mathbf{b} + Q\mathbf{x}_0\right).

This minimization problem has an optimal solution \mathbf{x}^* which can be written explicitly using the formula

\mathbf{x}^* = \left(A + Q\right)^{-1} \left(\mathbf{b} + Q\mathbf{x}_0\right),

which is nothing but the solution of the generalized Tikhonov problem where A = A^\mathsf{T} = P^{-1}. Lavrentyev regularization, if applicable, is advantageous over the original Tikhonov regularization, since the Lavrentyev matrix A + Q can be better conditioned, i.e., have a smaller condition number, compared to the Tikhonov matrix A^\mathsf{T} A + \Gamma^\mathsf{T}\Gamma.
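
A minimal sketch of the Lavrentyev solution, assuming a symmetric positive-definite A (function and variable names illustrative):

```python
import numpy as np

def lavrentyev(A, b, Q, x0):
    """Lavrentyev solution x* = (A + Q)^{-1} (b + Q x0), valid for SPD A."""
    return np.linalg.solve(A + Q, b + Q @ x0)

# Conditioning comparison for Q = Gamma^T Gamma: np.linalg.cond(A + Q) is
# typically smaller than np.linalg.cond(A.T @ A + Q), since squaring A
# squares its singular values and hence its condition number.
```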


Regularization in Hilbert space

Typically discrete linear ill-conditioned problems result from the discretization of integral equations, and one can formulate a Tikhonov regularization in the original infinite-dimensional context. In the above we can interpret A as a compact operator on Hilbert spaces, and x and b as elements in the domain and range of A. The operator A^* A + \Gamma^\mathsf{T}\Gamma is then a self-adjoint bounded invertible operator.


Relation to singular-value decomposition and Wiener filter

With \Gamma = \alpha I, this least-squares solution can be analyzed in a special way using the singular-value decomposition. Given the singular value decomposition A = U\Sigma V^\mathsf{T} with singular values \sigma_i, the Tikhonov regularized solution can be expressed as

\hat{x} = V D U^\mathsf{T} b,

where D has diagonal values

D_{ii} = \frac{\sigma_i}{\sigma_i^2 + \alpha^2}

and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular-value decomposition. Finally, it is related to the Wiener filter:

\hat{x} = \sum_{i=1}^q f_i \frac{u_i^\mathsf{T} b}{\sigma_i} v_i,

where the Wiener weights are f_i = \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} and q is the rank of A.
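
The filter-factor form is also a practical algorithm: a single SVD yields the regularized solution for any \alpha. A minimal NumPy sketch (names illustrative):

```python
import numpy as np

def tikhonov_svd(A, b, alpha):
    """Tikhonov solution via the SVD: x = V diag(sigma_i/(sigma_i^2 + alpha^2)) U^T b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d = s / (s**2 + alpha**2)       # diagonal filter factors D_ii
    return Vt.T @ (d * (U.T @ b))
```

For alpha = 0 this reduces to the pseudoinverse solution, while a large alpha damps the small-\sigma_i directions that would otherwise amplify noise.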


Determination of the Tikhonov factor

The optimal regularization parameter \alpha is usually unknown and often in practical problems is determined by an ''ad hoc'' method. A possible approach relies on the Bayesian interpretation described below. Other approaches include the discrepancy principle, cross-validation, the L-curve method, restricted maximum likelihood and the unbiased predictive risk estimator. Grace Wahba proved that the optimal parameter, in the sense of leave-one-out cross-validation, minimizes

G = \frac{\operatorname{RSS}}{\tau^2} = \frac{\left\| X\hat{\beta} - y \right\|^2}{\left[ m - \operatorname{tr}\left( X \left( X^\mathsf{T} X + \alpha^2 I \right)^{-1} X^\mathsf{T} \right) \right]^2},

where \operatorname{RSS} is the residual sum of squares, and \tau is the effective number of degrees of freedom. Using the previous SVD decomposition, we can simplify the above expression:

\operatorname{RSS} = \left\| y - \sum_{i=1}^q (u_i' b) u_i \right\|^2 + \left\| \sum_{i=1}^q \frac{\alpha^2}{\sigma_i^2 + \alpha^2} (u_i' b) u_i \right\|^2,

\operatorname{RSS} = \operatorname{RSS}_0 + \left\| \sum_{i=1}^q \frac{\alpha^2}{\sigma_i^2 + \alpha^2} (u_i' b) u_i \right\|^2,

and

\tau = m - \sum_{i=1}^q \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} = m - q + \sum_{i=1}^q \frac{\alpha^2}{\sigma_i^2 + \alpha^2}.
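
A sketch of selecting \alpha by minimizing G via the SVD expressions above, assuming NumPy and SciPy; the synthetic data, the search interval, and the name gcv_score are illustrative only (here b plays the role of y, as in the formulas above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def gcv_score(alpha, A, b):
    """Wahba's G(alpha) = RSS / tau^2, evaluated via the SVD."""
    m = A.shape[0]
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    ub = U.T @ b
    filt = alpha**2 / (s**2 + alpha**2)     # residual filter factors
    rss = (b @ b - ub @ ub) + np.sum((filt * ub) ** 2)
    tau = m - np.sum(s**2 / (s**2 + alpha**2))
    return rss / tau**2

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 5))
b = A @ np.ones(5) + 0.1 * rng.normal(size=50)
res = minimize_scalar(lambda a: gcv_score(a, A, b), bounds=(1e-6, 1e2), method="bounded")
print(res.x)  # GCV-optimal alpha for this synthetic problem
```

(In production code one would compute the SVD once and reuse it across evaluations of G.)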


Relation to probabilistic formulation

The probabilistic formulation of an inverse problem introduces (when all uncertainties are Gaussian) a covariance matrix C_M representing the ''a priori'' uncertainties on the model parameters, and a covariance matrix C_D representing the uncertainties on the observed parameters. In the special case when these two matrices are diagonal and isotropic, C_M = \sigma_M^2 I and C_D = \sigma_D^2 I, and, in this case, the equations of inverse theory reduce to the equations above, with \alpha = \sigma_D / \sigma_M.


Bayesian interpretation

Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix \Gamma seems rather arbitrary, the process can be justified from a Bayesian point of view. Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of x is sometimes taken to be a multivariate normal distribution. For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the same standard deviation \sigma_x. The data are also subject to errors, and the errors in b are also assumed to be independent with zero mean and standard deviation \sigma_b. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the ''a priori'' distribution of x, according to Bayes' theorem. If the assumption of normality is replaced by assumptions of homoscedasticity and uncorrelatedness of errors, and if one still assumes zero mean, then the Gauss–Markov theorem entails that the solution is the minimum-variance linear unbiased estimator.
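
To make the correspondence explicit (a standard derivation, not spelled out in the original text): under the zero-mean Gaussian assumptions above, the posterior is p(x \mid b) \propto \exp\left(-\frac{\left\| Ax - b \right\|_2^2}{2\sigma_b^2}\right) \exp\left(-\frac{\left\| x \right\|_2^2}{2\sigma_x^2}\right), so taking negative logarithms shows that maximizing it is equivalent to minimizing \left\| Ax - b \right\|_2^2 + \alpha^2 \left\| x \right\|_2^2 with \alpha = \sigma_b / \sigma_x, i.e. exactly the Tikhonov problem with \Gamma = \alpha I.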


See also

* LASSO estimator, another regularization method in statistics
* Elastic net regularization
* Matrix regularization
* L-curve

