Constrained Least Squares
In constrained least squares one solves a linear least squares problem with an additional constraint on the solution. This means that the unconstrained equation \mathbf{X}\boldsymbol{\beta} = \mathbf{y} must be fit as closely as possible (in the least squares sense) while ensuring that some other property of \boldsymbol{\beta} is maintained.

There are often special-purpose algorithms for solving such problems efficiently. Some examples of constraints are given below:

* Equality constrained least squares: the elements of \boldsymbol{\beta} must exactly satisfy \mathbf{L}\boldsymbol{\beta} = \mathbf{c} (see Ordinary least squares).
* Stochastic (linearly) constrained least squares: the elements of \boldsymbol{\beta} must satisfy \mathbf{L}\boldsymbol{\beta} = \mathbf{c} + \boldsymbol{\nu}, where \boldsymbol{\nu} is a vector of random variables such that \operatorname{E}(\boldsymbol{\nu}) = \mathbf{0} and \operatorname{E}(\boldsymbol{\nu}\boldsymbol{\nu}^{\rm T}) = \tau^2 \mathbf{M}. This effectively imposes a prior distribution for \boldsymbol{\beta} and is therefore equivalent to Bayesian linear regression.
* Regularized least squares: the elements of \boldsymbol{\beta} must satisfy \|\mathbf{L}\boldsymbol{\beta} - \mathbf{y}\| \le \alpha (choosing \alpha in proportion to the noise standard deviation of \mathbf{y} prevents over-fitting).
* Non-negative least squares (NNLS): the vector \boldsymbol{\beta} must satisfy the vector inequality \boldsymbol{\beta} \geq \mathbf{0}, defined componentwise; that is, each component must be either positive or zero.
* Box-constrained least squares: the vector \boldsymbol{\beta} must satisfy the vector inequalities \mathbf{b}_\ell \leq \boldsymbol{\beta} \leq \mathbf{b}_u, each of which is defined componentwise.
* Integer-constrained least squares: all elements of \boldsymbol{\beta} must be integers (instead of real numbers).
* Phase-constrained least squares: all elements of \boldsymbol{\beta} must be real numbers, or multiplied by the same complex number of unit modulus.

Short numerical sketches of several of these cases, and of the separable approach described next, are given at the end of this section.

If the constraint applies only to some of the variables, the mixed problem may be solved using separable least squares by letting \mathbf{X} = [\mathbf{X}_1 \; \mathbf{X}_2] and \boldsymbol{\beta}^{\rm T} = [\boldsymbol{\beta}_1^{\rm T} \; \boldsymbol{\beta}_2^{\rm T}] represent the unconstrained (1) and constrained (2) components. Then substituting the least-squares solution for \boldsymbol{\beta}_1, i.e.

: \hat{\boldsymbol{\beta}}_1 = \mathbf{X}_1^+ (\mathbf{y} - \mathbf{X}_2 \boldsymbol{\beta}_2)

(where ^+ indicates the Moore–Penrose pseudoinverse), back into the original expression gives (following some rearrangement) an equation that can be solved as a purely constrained problem in \boldsymbol{\beta}_2:

: \mathbf{P} \mathbf{X}_2 \boldsymbol{\beta}_2 = \mathbf{P} \mathbf{y},

where \mathbf{P} := \mathbf{I} - \mathbf{X}_1 \mathbf{X}_1^+ is a projection matrix. Following the constrained estimation of \hat{\boldsymbol{\beta}}_2, the vector \hat{\boldsymbol{\beta}}_1 is obtained from the expression above.
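As a concrete illustration of the equality-constrained case, the following is a minimal sketch (not from the original article) that solves \min \|\mathbf{X}\boldsymbol{\beta} - \mathbf{y}\|^2 subject to \mathbf{L}\boldsymbol{\beta} = \mathbf{c} by forming the Lagrangian (KKT) system; the data and the constraint are made up for the example, and NumPy is assumed.

import numpy as np

# Made-up example: fit four coefficients whose sum is constrained to equal 1.
rng = np.random.default_rng(0)
X = rng.standard_normal((15, 4))        # design matrix
y = rng.standard_normal(15)             # observations
L = np.ones((1, 4))                     # constraint matrix (row of ones)
c = np.array([1.0])                     # constraint value: L @ beta = 1

p = X.shape[1]                          # number of coefficients
m = L.shape[0]                          # number of equality constraints

# KKT system for  min ||X b - y||^2  subject to  L b = c:
#   [ 2 X^T X   L^T ] [ beta   ]   [ 2 X^T y ]
#   [ L         0   ] [ lambda ] = [ c       ]
K = np.block([[2 * X.T @ X, L.T],
              [L, np.zeros((m, m))]])
rhs = np.concatenate([2 * X.T @ y, c])
sol = np.linalg.solve(K, rhs)
beta = sol[:p]

print(beta, L @ beta)                   # the second value is 1 up to rounding

Solving the bordered KKT system directly is the simplest dense approach; for larger or ill-conditioned problems one would typically eliminate the constraint or use a QR-based method instead.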
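The non-negative and box-constrained variants have widely available special-purpose solvers. Below is a minimal sketch using SciPy's nnls and lsq_linear; the data and the bounds are made up for illustration.

import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
y = X @ np.array([0.5, 0.0, 2.0]) + 0.05 * rng.standard_normal(20)

# Non-negative least squares: minimize ||X b - y|| subject to b >= 0.
beta_nnls, rnorm = nnls(X, y)

# Box-constrained least squares: 0 <= b <= 1.5, componentwise.
result = lsq_linear(X, y, bounds=(np.zeros(3), np.full(3, 1.5)))
beta_box = result.x

print(beta_nnls)
print(beta_box)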
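The separable elimination described above can also be sketched directly. Here the split \mathbf{X} = [\mathbf{X}_1 \; \mathbf{X}_2], the data, and a non-negativity constraint on \boldsymbol{\beta}_2 are assumptions chosen for illustration; the reduced problem is solved with NNLS and \boldsymbol{\beta}_1 is recovered by back-substitution.

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
X1 = rng.standard_normal((30, 2))       # unconstrained block
X2 = rng.standard_normal((30, 3))       # constrained block (here: beta_2 >= 0)
y = (X1 @ np.array([1.0, -2.0]) + X2 @ np.array([0.0, 0.7, 1.3])
     + 0.05 * rng.standard_normal(30))

X1_pinv = np.linalg.pinv(X1)            # Moore-Penrose pseudoinverse X1^+
P = np.eye(len(y)) - X1 @ X1_pinv       # P := I - X1 X1^+

# Purely constrained reduced problem  P X2 b2 = P y, solved here with NNLS.
beta2, _ = nnls(P @ X2, P @ y)

# Recover the unconstrained block from  b1 = X1^+ (y - X2 b2).
beta1 = X1_pinv @ (y - X2 @ beta2)

print(beta1, beta2)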


See also

* Bayesian linear regression
* Constrained optimization
* Integer programming

