Condition number

In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given f(x) = y, one is solving for ''x'', and thus the condition number of the (local) inverse must be used. In linear regression the condition number of the moment matrix can be used as a diagnostic for multicollinearity.

The condition number is an application of the derivative, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables.

A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. In non-mathematical terms, an ill-conditioned problem is one where, for a small change in the inputs (the independent variables), there is a large change in the answer or dependent variable. This means that the correct solution/answer to the equation becomes hard to find.

The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called ''backward stability''; in general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms.

As a rule of thumb, if the condition number \kappa(A) = 10^k, then up to k digits of accuracy may be lost on top of what would be lost to the numerical method due to loss of precision from arithmetic methods. However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm used to measure the inaccuracy).
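For example (a rough illustration of this rule of thumb, not a guarantee): if a linear system is solved in double-precision arithmetic, which carries roughly 16 significant decimal digits, and the matrix has condition number \kappa(A) = 10^{10}, then only about 16 - 10 = 6 digits of the computed solution can be trusted, even when a backward stable algorithm is used.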


General definition in the context of error analysis

Given a problem f and an algorithm \tilde{f} with an input x and output \tilde{f}(x), the ''error'' is \delta f(x) := f(x) - \tilde{f}(x), the ''absolute'' error is \|\delta f(x)\| = \left\|f(x) - \tilde{f}(x)\right\|, and the ''relative'' error is \|\delta f(x)\| / \|f(x)\| = \left\|f(x) - \tilde{f}(x)\right\| / \|f(x)\|.

In this context, the ''absolute'' condition number of a problem f is

: \lim_{\varepsilon \to 0^{+}} \sup_{\|\delta x\| \le \varepsilon} \frac{\|\delta f(x)\|}{\|\delta x\|}

and the ''relative'' condition number is

: \lim_{\varepsilon \to 0^{+}} \sup_{\|\delta x\| \le \varepsilon} \frac{\|\delta f(x)\| / \|f(x)\|}{\|\delta x\| / \|x\|}.
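As a concrete illustration of these definitions (interpreting \delta f(x) as f(x + \delta x) - f(x)), the following minimal Python sketch estimates the absolute and relative condition numbers of a scalar problem by taking the worst case over sampled perturbations of size at most \varepsilon. The helper name estimate_condition_numbers and the use of NumPy are illustrative assumptions, not part of any standard library:

import numpy as np

def estimate_condition_numbers(f, x, eps=1e-8, samples=1000):
    """Estimate the absolute and relative condition numbers of f at x
    by taking the worst case over random perturbations |dx| <= eps."""
    rng = np.random.default_rng(0)
    dxs = rng.uniform(-eps, eps, size=samples)
    dxs = dxs[dxs != 0.0]                      # avoid division by zero
    fx = f(x)
    dfs = np.abs(f(x + dxs) - fx)              # |delta f(x)| for each perturbation
    abs_cond = np.max(dfs / np.abs(dxs))       # sup |delta f| / |delta x|
    rel_cond = np.max((dfs / np.abs(fx)) / (np.abs(dxs) / np.abs(x)))
    return abs_cond, rel_cond

# Example: f(x) = sqrt(x) at x = 4; the relative condition number should be about 1/2.
print(estimate_condition_numbers(np.sqrt, 4.0))

For \sqrt{x} at x = 4 the absolute condition number is |f'(4)| = 1/4 and the relative condition number is |x f'(x)/f(x)| = 1/2, which the sampled estimates approach as \varepsilon shrinks.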


Matrices

For example, the condition number associated with the linear equation ''Ax'' = ''b'' gives a bound on how inaccurate the solution ''x'' will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating-point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution ''x'' will change with respect to a change in ''b''. Thus, if the condition number is large, even a small error in ''b'' may cause a large error in ''x''. On the other hand, if the condition number is small, then the error in ''x'' will not be much bigger than the error in ''b''.

The condition number is defined more precisely to be the maximum ratio of the relative error in ''x'' to the relative error in ''b''. Let ''e'' be the error in ''b''. Assuming that ''A'' is a nonsingular matrix, the error in the solution ''A''−1''b'' is ''A''−1''e''. The ratio of the relative error in the solution to the relative error in ''b'' is

: \frac{\left\|A^{-1} e\right\| / \left\|A^{-1} b\right\|}{\|e\| / \|b\|} = \frac{\left\|A^{-1} e\right\|}{\|e\|} \frac{\|b\|}{\left\|A^{-1} b\right\|}.

The maximum value (for nonzero ''b'' and ''e'') is then seen to be the product of the two operator norms as follows:

: \begin{align} \max_{e, b \ne 0} \left\{ \frac{\left\|A^{-1} e\right\|}{\|e\|} \frac{\|b\|}{\left\|A^{-1} b\right\|} \right\} &= \max_{e \ne 0} \left\{ \frac{\left\|A^{-1} e\right\|}{\|e\|} \right\} \, \max_{b \ne 0} \left\{ \frac{\|b\|}{\left\|A^{-1} b\right\|} \right\} \\ &= \max_{e \ne 0} \left\{ \frac{\left\|A^{-1} e\right\|}{\|e\|} \right\} \, \max_{x \ne 0} \left\{ \frac{\|A x\|}{\|x\|} \right\} \\ &= \left\|A^{-1}\right\| \, \|A\|. \end{align}

The same definition is used for any consistent norm, i.e. one that satisfies

: \kappa(A) = \left\|A^{-1}\right\| \, \left\|A\right\| \ge \left\|A^{-1} A\right\| = 1.

When the condition number is exactly one (which can only happen if ''A'' is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data. However, it does not mean that the algorithm will converge rapidly to this solution, just that it will not diverge arbitrarily because of inaccuracy in the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors. The condition number may also be infinite, but this implies that the problem is ill-posed (does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution.

The definition of the condition number depends on the choice of norm, as can be illustrated by two examples. If \|\cdot\| is the matrix norm induced by the (vector) Euclidean norm (sometimes known as the ''L''2 norm and typically denoted as \|\cdot\|_2), then

: \kappa(A) = \frac{\sigma_\text{max}(A)}{\sigma_\text{min}(A)},

where \sigma_\text{max}(A) and \sigma_\text{min}(A) are the maximal and minimal singular values of A respectively. Hence:

* If A is normal, then \kappa(A) = \frac{\left|\lambda_\text{max}(A)\right|}{\left|\lambda_\text{min}(A)\right|}, where \lambda_\text{max}(A) and \lambda_\text{min}(A) are the maximal and minimal (by moduli) eigenvalues of A respectively.
* If A is unitary, then \kappa(A) = 1.

The condition number with respect to ''L''2 arises so often in numerical linear algebra that it is given a name, the condition number of a matrix.

If \|\cdot\| is the matrix norm induced by the L^\infty (vector) norm and A is lower triangular and non-singular (i.e. a_{ii} \ne 0 for all i), then

: \kappa(A) \geq \frac{\max_i \left(|a_{ii}|\right)}{\min_i \left(|a_{ii}|\right)},

recalling that the eigenvalues of any triangular matrix are simply the diagonal entries. The condition number computed with this norm is generally larger than the condition number computed relative to the Euclidean norm, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves a ''non-linear algebra'', for example when approximating irrational and transcendental functions or numbers with numerical methods).

If the condition number is not significantly larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or the solution of a linear system of equations, is prone to large numerical errors. A matrix that is not invertible has condition number equal to infinity.
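As a minimal illustration in Python (assuming NumPy; the Hilbert matrix is chosen only because it is a standard example of an ill-conditioned matrix), the following sketch shows how a large 2-norm condition number translates into loss of accuracy when solving Ax = b with a backward stable solver:

import numpy as np

n = 10
# Hilbert matrix: H[i, j] = 1 / (i + j + 1), a classic ill-conditioned example.
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])

x_true = np.ones(n)
b = H @ x_true                                 # construct b so the exact solution is known

x_computed = np.linalg.solve(H, b)

kappa = np.linalg.cond(H)                      # 2-norm condition number, sigma_max / sigma_min
rel_err = np.linalg.norm(x_computed - x_true) / np.linalg.norm(x_true)

print(f"condition number ~ {kappa:.2e}")       # about 1.6e13 for n = 10
print(f"relative error   ~ {rel_err:.2e}")     # far larger than machine epsilon (~2.2e-16)

With \kappa(H) on the order of 10^{13}, the rule of thumb above suggests that roughly 13 of the approximately 16 decimal digits carried by double precision can be lost, which is consistent with the relative error observed here.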


Nonlinear

Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest.


One variable

The condition number of a differentiable function f in one variable as a function is \left|xf'/f\right|. Evaluated at a point x, this is

: \left|\frac{x f'(x)}{f(x)}\right|.

Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of f, which is (\log f)' = f'/f, and the logarithmic derivative of x, which is (\log x)' = x'/x = 1/x, yielding a ratio of xf'/f. This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative f' scaled by the value of f. Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change.

More directly, given a small change \Delta x in x, the relative change in x is [(x + \Delta x) - x] / x = (\Delta x) / x, while the relative change in f(x) is [f(x + \Delta x) - f(x)] / f(x). Taking the ratio yields

: \frac{[f(x + \Delta x) - f(x)] / f(x)}{(\Delta x) / x} = \frac{x}{f(x)} \frac{f(x + \Delta x) - f(x)}{\Delta x}.

The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative.

Condition numbers of common elementary functions are particularly important in computing significant figures and can be computed immediately from the derivative; see significance arithmetic of transcendental functions. A few important ones are given below:

* \exp(x) has condition number \left|x\right|.
* \log(x) has condition number \left|1 / \log x\right|, which becomes arbitrarily large as x \to 1.
* \sin(x) has condition number \left|x \cot x\right|, which is infinite at the zeros x = k\pi, k \ne 0.
* x^n has condition number \left|n\right|.
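As a minimal Python sketch (assuming NumPy; the function names are illustrative), the formula \left|x f'(x)/f(x)\right| can be compared against a finite-difference estimate of the ratio of relative changes:

import numpy as np

def relative_condition(f, fprime, x):
    """Condition number |x f'(x) / f(x)| of a scalar function at x."""
    return abs(x * fprime(x) / f(x))

def finite_difference_estimate(f, x, rel_step=1e-6):
    """Ratio of the relative change in f(x) to the relative change in x."""
    dx = rel_step * x
    return abs((f(x + dx) - f(x)) / f(x)) / abs(dx / x)

x = 1.1
# log(x) near x = 1 is ill-conditioned: its condition number 1/|log x| blows up as x -> 1.
print(relative_condition(np.log, lambda t: 1.0 / t, x))   # about 10.5
print(finite_difference_estimate(np.log, x))              # close to the same value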


Several variables

Condition numbers can be defined for any function f mapping its data from some domain (e.g. an m-tuple of real numbers x) into some codomain (e.g. an n-tuple of real numbers f(x)), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example, polynomial root finding or computing eigenvalues.

The condition number of f at a point x (specifically, its relative condition number) is then defined to be the maximum ratio of the fractional change in f(x) to any fractional change in x, in the limit where the change \delta x in x becomes infinitesimally small:

: \lim_{\varepsilon \to 0^{+}} \sup_{\|\delta x\| \le \varepsilon} \left[ \left. \frac{\left\|f(x + \delta x) - f(x)\right\|}{\|f(x)\|} \right/ \frac{\|\delta x\|}{\|x\|} \right],

where \|\cdot\| is a norm on the domain/codomain of f.

If f is differentiable, this is equivalent to

: \frac{\|J(x)\|}{\|f(x)\| / \|x\|},

where J(x) denotes the Jacobian matrix of partial derivatives of f at x, and \|J(x)\| is the induced norm on the matrix.
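A minimal Python sketch (assuming NumPy; the example function is an arbitrary illustrative choice) of the differentiable case \|J(x)\| / \left(\|f(x)\| / \|x\|\right), using the induced 2-norm of the Jacobian:

import numpy as np

def relative_condition_jacobian(f, jacobian, x):
    """Relative condition number ||J(x)|| / (||f(x)|| / ||x||) in the 2-norm."""
    J = jacobian(x)
    return np.linalg.norm(J, 2) * np.linalg.norm(x) / np.linalg.norm(f(x))

# Example: f(x, y) = (x + y, x - y), whose Jacobian is constant.
f = lambda v: np.array([v[0] + v[1], v[0] - v[1]])
jac = lambda v: np.array([[1.0, 1.0], [1.0, -1.0]])

x = np.array([1.0, 1.0])
# Here J has 2-norm sqrt(2), ||x|| = sqrt(2) and ||f(x)|| = ||(2, 0)|| = 2, so the value is 1.
print(relative_condition_jacobian(f, jac, x))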


See also

* Numerical methods for linear least squares
* Hilbert matrix
* Ill-posed problem
* Singular value
* Wilson matrix


Further reading

* Demmel, James (1990). "Nearest Defective Matrices and the Geometry of Ill-conditioning". In Cox, M. G.; Hammarling, S. (eds.). ''Reliable Numerical Computation''. Oxford: Clarendon Press. pp. 35–55. ISBN 0-19-853564-3.


External links


* Condition Number of a Matrix at ''Holistic Numerical Methods Institute''
* Condition number – Encyclopedia of Mathematics
* Who Invented the Matrix Condition Number? by Nick Higham