This is a list of numerical analysis topics.


General

* Validated numerics
* Iterative method
* Rate of convergence — the speed at which a convergent sequence approaches its limit
** Order of accuracy — rate at which numerical solution of differential equation converges to exact solution
* Series acceleration — methods to accelerate the speed of convergence of a series
** Aitken's delta-squared process — most useful for linearly converging sequences (see the sketch after this list)
** Minimum polynomial extrapolation — for vector sequences
** Richardson extrapolation
** Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
** Van Wijngaarden transformation — for accelerating the convergence of an alternating series
* Abramowitz and Stegun — book containing formulas and tables of many special functions
** Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
* Curse of dimensionality
* Local convergence and global convergence — whether you need a good initial guess to get convergence
* Superconvergence
* Discretization
* Difference quotient
* Complexity:
** Computational complexity of mathematical operations
** Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
* Symbolic-numeric computation — combination of symbolic and numeric methods
* Cultural and historical aspects:
** History of numerical solution of differential equations using computers
** Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002
** International Workshops on Lattice QCD and Numerical Analysis
** Timeline of numerical analysis after 1945
* General classes of methods:
** Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
** Level-set method
*** Level set (data structures) — data structures for representing level sets
** Sinc numerical methods — methods based on the sinc function, sinc(''x'') = sin(''x'') / ''x''
** ABS methods
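As an illustration of the Aitken entry above, a minimal Python sketch of the delta-squared process (the helper name `aitken` and the Leibniz-series test are illustrative choices, not part of the list):

```python
# Aitken's delta-squared process: accelerate a linearly converging
# sequence (x_n) by forming x_{n+2} - (x_{n+2} - x_{n+1})^2 / (x_{n+2} - 2 x_{n+1} + x_n).
def aitken(seq):
    """Return the Aitken-accelerated version of a sequence (a list of floats)."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2 * x1 + x0
        if denom == 0:                 # sequence already (numerically) converged
            out.append(x2)
        else:
            out.append(x2 - (x2 - x1) ** 2 / denom)
    return out

# Example: partial sums of the Leibniz series for pi/4 converge very
# slowly; a single Aitken pass already improves them markedly.
partial_sums, s = [], 0.0
for k in range(10):
    s += (-1) ** k / (2 * k + 1)
    partial_sums.append(s)
print(4 * partial_sums[-1])          # crude approximation of pi
print(4 * aitken(partial_sums)[-1])  # noticeably closer to pi
```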


Error

Error analysis (mathematics) — study of the kind and quantity of error present in the solution to a problem
* Approximation
* Approximation error
* Catastrophic cancellation
* Condition number
* Discretization error
* Floating point number
** Guard digit — extra precision introduced during a computation to reduce round-off error
** Truncation — rounding a floating-point number by discarding all digits after a certain digit
** Round-off error
*** Numeric precision in Microsoft Excel
** Arbitrary-precision arithmetic
* Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them
** Interval contractor — maps interval to subinterval which still contains the unknown exact answer
** Interval propagation — contracting interval domains without removing any value consistent with the constraints
*** See also: Interval boundary element method, Interval finite element
* Loss of significance — subtracting good approximations to two nearby numbers may yield a very bad approximation to their difference (see the sketch after this list)
* Numerical error
* Numerical stability
* Error propagation:
** Propagation of uncertainty
*** List of uncertainty propagation software
** Significance arithmetic
** Residual (numerical analysis)
* Relative change and difference — the relative difference between ''x'' and ''y'' is |''x'' − ''y''| / max(|''x''|, |''y''|)
* Significant figures
** False precision — giving more significant figures than appropriate
* Sterbenz lemma
* Truncation error — error committed by doing only a finite number of steps
* Well-posed problem
* Affine arithmetic
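The loss-of-significance entry above can be demonstrated in a few lines of Python; this sketch (the 1 − cos ''x'' example is a standard illustration, not taken from the list) shows how a mathematically equivalent rewrite avoids the cancellation:

```python
# Loss of significance: subtracting two nearly equal numbers cancels
# their leading digits, leaving mostly rounding noise.
import math

x = 1e-8
naive = 1.0 - math.cos(x)            # cos(x) rounds to exactly 1.0 in double precision
stable = 2.0 * math.sin(x / 2) ** 2  # same quantity mathematically, no cancellation
print(naive)   # 0.0: every significant digit has been lost
print(stable)  # ~5e-17, correct to full precision
```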


Elementary and special functions

* Unrestricted algorithm
* Summation:
** Kahan summation algorithm (see the sketch after this list)
** Pairwise summation — slightly worse than Kahan summation but cheaper
** Binary splitting
** 2Sum
* Multiplication:
** Multiplication algorithm — general discussion, simple methods
** Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication
** Toom–Cook multiplication — generalization of Karatsuba multiplication
** Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast
** Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
* Division algorithm — for computing quotient and/or remainder of two numbers
** Long division
** Restoring division
** Non-restoring division
** SRT division
** Newton–Raphson division — uses Newton's method to find the reciprocal of ''D'', and multiplies that reciprocal by ''N'' to find the final quotient ''Q''
** Goldschmidt division
* Exponentiation:
** Exponentiation by squaring
** Addition-chain exponentiation
* Multiplicative inverse algorithms — for computing a number's multiplicative inverse (reciprocal)
** Newton's method
* Polynomials:
** Horner's method
** Estrin's scheme — modification of the Horner scheme with more possibilities for parallelization
** Clenshaw algorithm
** De Casteljau's algorithm
* Square roots and other roots:
** Integer square root
** Methods of computing square roots
** ''n''th root algorithm
** Shifting ''n''th root algorithm — similar to long division
** hypot — the function √(''x''² + ''y''²)
** Alpha max plus beta min algorithm — approximates hypot(''x'', ''y'')
** Fast inverse square root — calculates 1/√''x'' using details of the IEEE floating-point system
* Elementary functions (exponential, logarithm, trigonometric functions):
** Trigonometric tables — different methods for generating them
** CORDIC — shift-and-add algorithm using a table of arc tangents
** BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers
* Gamma function:
** Lanczos approximation
** Spouge's approximation — modification of Stirling's approximation; easier to apply than Lanczos
* AGM method — computes arithmetic–geometric mean; related methods compute special functions
* FEE method (Fast E-function Evaluation) — fast summation of series like the power series for e^''x''
* Gal's accurate tables — table of function values with unequal spacing to reduce round-off error
* Spigot algorithm — algorithms that can compute individual digits of a real number
* Approximations of π:
** Liu Hui's π algorithm — first algorithm that can compute π to arbitrary precision
** Leibniz formula for π — alternating series with very slow convergence
** Wallis product — infinite product converging slowly to π/2
** Viète's formula — more complicated infinite product which converges faster
** Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean
** Borwein's algorithm — iteration which converges quartically to 1/π, and other algorithms
** Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series
** Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π
** Bellard's formula — faster version of Bailey–Borwein–Plouffe formula
** List of formulae involving π
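A minimal Python sketch of the Kahan summation algorithm referenced above (the test data and the `naive_sum` helper are illustrative choices):

```python
# Kahan (compensated) summation: carry a correction term c that captures
# the low-order bits lost when each addend is rounded into the total.
def kahan_sum(values):
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # apply the correction from the previous step
        t = total + y        # low-order digits of y may be lost here...
        c = (t - total) - y  # ...and are recovered algebraically into c
        total = t
    return total

def naive_sum(values):       # plain accumulation, for comparison
    total = 0.0
    for v in values:
        total += v
    return total

data = [0.1] * 10**7
print(naive_sum(data))  # drifts visibly away from 1000000.0
print(kahan_sum(data))  # error stays at the level of a single rounding
```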


Numerical linear algebra

Numerical linear algebra — study of numerical algorithms for linear algebra problems


Basic concepts

* Types of matrices appearing in numerical analysis:
** Sparse matrix
*** Band matrix
*** Bidiagonal matrix
*** Tridiagonal matrix
*** Pentadiagonal matrix
*** Skyline matrix
** Circulant matrix
** Triangular matrix
** Diagonally dominant matrix
** Block matrix — matrix composed of smaller matrices
** Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
** Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)
** Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues
** Convergent matrix — square matrix whose successive powers approach the zero matrix
* Algorithms for matrix multiplication:
** Strassen algorithm
** Coppersmith–Winograd algorithm
** Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid
** Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication
* Matrix decompositions (see the sketch after this list):
** LU decomposition — lower triangular times upper triangular
** QR decomposition — orthogonal matrix times triangular matrix
*** RRQR factorization — rank-revealing QR factorization, can be used to compute rank of a matrix
** Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
** Decompositions by similarity:
*** Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
*** Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition
**** Weyr canonical form — permutation of Jordan normal form
*** Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix
*** Schur decomposition — similarity transform bringing the matrix to a triangular matrix
** Singular value decomposition — unitary matrix times diagonal matrix times unitary matrix
* Matrix splitting — expressing a given matrix as a sum or difference of matrices
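Several of the decompositions above are available off the shelf; a short sketch, assuming NumPy and SciPy are installed, that computes and verifies a few of them:

```python
# Compute LU, QR, Schur, and singular value decompositions of a random
# matrix and confirm each factorization reproduces the original matrix.
import numpy as np
from scipy.linalg import lu, qr, schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

P, L, U = lu(A)               # LU with partial pivoting: A = P @ L @ U
Q, R = qr(A)                  # QR: Q orthogonal, R upper triangular
T, Z = schur(A)               # real Schur form: A = Z @ T @ Z.T
U_, s, Vt = np.linalg.svd(A)  # SVD: A = U_ @ diag(s) @ Vt

print(np.allclose(A, P @ L @ U))
print(np.allclose(A, Q @ R))
print(np.allclose(A, Z @ T @ Z.T))
print(np.allclose(A, U_ @ np.diag(s) @ Vt))
```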


Solving systems of linear equations

* Gaussian elimination
** Row echelon form — matrix in which all entries below a nonzero entry are zero
** Bareiss algorithm — variant which ensures that all entries remain integers if the initial matrix has integer entries
** Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices
* LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix
** Crout matrix decomposition
** LU reduction — a special parallelized version of a LU decomposition algorithm
* Block LU decomposition
* Cholesky decomposition — for solving a system with a positive definite matrix
** Minimum degree algorithm
** Symbolic Cholesky decomposition
* Iterative refinement — procedure to turn an inaccurate solution into a more accurate one
* Direct methods for sparse matrices:
** Frontal solver — used in finite element methods
** Nested dissection — for symmetric matrices, based on graph partitioning
* Levinson recursion — for Toeplitz matrices
* SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
* Cyclic reduction — eliminate even or odd rows or columns, repeat
* Iterative methods:
** Jacobi method (see the sketch after this list)
** Gauss–Seidel method
*** Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
**** Symmetric successive over-relaxation (SSOR) — variant of SOR for symmetric matrices
*** Backfitting algorithm — iterative procedure used to fit a generalized additive model, often equivalent to Gauss–Seidel
** Modified Richardson iteration
** Conjugate gradient method (CG) — assumes that the matrix is positive definite
*** Derivation of the conjugate gradient method
*** Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
** Biconjugate gradient method (BiCG)
*** Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with better convergence
** Conjugate residual method — similar to CG but only assumes that the matrix is symmetric
** Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
** Chebyshev iteration — avoids inner products but needs bounds on the spectrum
** Stone's method (SIP, Strongly Implicit Procedure) — uses an incomplete LU decomposition
** Kaczmarz method
** Preconditioner
*** Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
*** Incomplete LU factorization — sparse approximation to the LU factorization
** Uzawa iteration — for saddle point problems
* Underdetermined and overdetermined systems (systems that have no or more than one solution):
** Numerical computation of null space — find all solutions of an underdetermined system
** Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined systems) or smallest residual
** Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
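A minimal Python sketch of the Jacobi iteration listed above (the tolerance, iteration cap, and test system are illustrative choices):

```python
# Jacobi iteration for Ax = b: solve each equation for its diagonal
# unknown using the previous iterate; converges for strictly
# diagonally dominant A.
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    D = np.diag(A)               # diagonal entries of A
    R = A - np.diagflat(D)       # off-diagonal part of A
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(max_iter):
        x_new = (b - R @ x) / D  # one Jacobi sweep
        if np.linalg.norm(x_new - x, ord=np.inf) < tol:
            return x_new
        x = x_new
    return x

A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])  # strictly diagonally dominant
b = np.array([15.0, 10.0, 10.0])
print(jacobi(A, b))               # agrees with np.linalg.solve(A, b)
```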


Eigenvalue algorithms

Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
* Power iteration (see the sketch after this list)
* Inverse iteration
* Rayleigh quotient iteration
* Arnoldi iteration — based on Krylov subspaces
* Lanczos algorithm — Arnoldi, specialized for positive-definite matrices
** Block Lanczos algorithm — for when the matrix is over a finite field
* QR algorithm
* Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
** Jacobi rotation — the building block, almost a Givens rotation
** Jacobi method for complex Hermitian matrices
* Divide-and-conquer eigenvalue algorithm
* Folded spectrum method
* LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method
* Eigenvalue perturbation — stability of eigenvalues under perturbations of the matrix
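A minimal Python/NumPy sketch of power iteration (the 2×2 test matrix and the iteration count are illustrative):

```python
# Power iteration: repeated multiplication by A drives a random start
# vector toward the eigenvector of the dominant (largest-magnitude)
# eigenvalue.
import numpy as np

def power_iteration(A, num_iters=1000):
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x /= np.linalg.norm(x)  # renormalize to avoid overflow
    lam = x @ A @ x             # Rayleigh quotient estimate of the eigenvalue
    return lam, x

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(A)
print(lam)                      # approx. (5 + sqrt(5))/2, the largest eigenvalue
```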


Other concepts and algorithms

* Orthogonalization algorithms:
** Gram–Schmidt process (see the sketch after this list)
** Householder transformation
*** Householder operator — analogue of Householder transformation for general inner product spaces
** Givens rotation
* Krylov subspace
* Block matrix pseudoinverse
* Bidiagonalization
* Cuthill–McKee algorithm — permutes rows/columns in sparse matrix to yield a narrow band matrix
* In-place matrix transposition — computing the transpose of a matrix without using much additional storage
* Pivot element — entry in a matrix on which the algorithm concentrates
* Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
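A Python/NumPy sketch of the modified variant of the Gram–Schmidt process above (the function name `modified_gram_schmidt` is an illustrative choice):

```python
# Modified Gram-Schmidt: orthonormalize the columns of A one at a time,
# subtracting each projection immediately, which is numerically more
# stable than the classical variant.
import numpy as np

def modified_gram_schmidt(A):
    A = A.astype(float)
    n = A.shape[1]
    Q = np.empty_like(A)
    for j in range(n):
        v = A[:, j]
        for i in range(j):
            v = v - (Q[:, i] @ v) * Q[:, i]  # remove the component along q_i
        Q[:, j] = v / np.linalg.norm(v)
    return Q

A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
Q = modified_gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))       # columns are orthonormal
```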


Interpolation and approximation

Interpolation — construct a function going through some given data points
* Nearest-neighbor interpolation — takes the value of the nearest neighbor


Polynomial interpolation

Polynomial interpolation — interpolation by polynomials
* Linear interpolation
* Runge's phenomenon
* Vandermonde matrix
* Chebyshev polynomials
* Chebyshev nodes
* Lebesgue constant (interpolation)
* Different forms for the interpolant:
** Newton polynomial
*** Divided differences
*** Neville's algorithm — for evaluating the interpolant; based on the Newton form (see the sketch after this list)
** Lagrange polynomial
** Bernstein polynomial — especially useful for approximation
** Brahmagupta's interpolation formula — seventh-century formula for quadratic interpolation
* Extensions to multiple dimensions:
** Bilinear interpolation
** Trilinear interpolation
** Bicubic interpolation
** Tricubic interpolation
** Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant
* Hermite interpolation
* Birkhoff interpolation
* Abel–Goncharov interpolation
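A minimal Python sketch of Neville's algorithm from the list above (the names and the quadratic test case are illustrative):

```python
# Neville's algorithm: evaluate the interpolating polynomial through
# (x_i, y_i) at a point x by repeated pairwise linear interpolation.
def neville(xs, ys, x):
    p = list(ys)                 # p[i] starts as the degree-0 interpolant y_i
    n = len(xs)
    for k in range(1, n):        # k = current interpolation order
        for i in range(n - k):
            p[i] = ((x - xs[i + k]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + k])
    return p[0]

# Interpolating f(x) = x^2 through three points; the quadratic is exact.
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```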


Spline interpolation

Spline interpolation — interpolation by piecewise polynomials
* Spline (mathematics) — the piecewise polynomials used as interpolants
* Perfect spline — polynomial spline of degree ''m'' whose ''m''th derivative is ±1
* Cubic Hermite spline
** Centripetal Catmull–Rom spline — special case of cubic Hermite splines without self-intersections or cusps
* Monotone cubic interpolation
* Hermite spline
* Bézier curve
** De Casteljau's algorithm (see the sketch after this list)
** Composite Bézier curve
** Generalizations to more dimensions:
*** Bézier triangle — maps a triangle to R³
*** Bézier surface — maps a square to R³
* B-spline
** Box spline — multivariate generalization of B-splines
** Truncated power function
** De Boor's algorithm — generalizes De Casteljau's algorithm
* Non-uniform rational B-spline (NURBS)
** T-spline — can be thought of as a NURBS surface for which a row of control points is allowed to terminate
* Kochanek–Bartels spline
* Coons patch — type of manifold parametrization used to smoothly join other surfaces together
* M-spline — a non-negative spline
* I-spline — a monotone spline, defined in terms of M-splines
* Smoothing spline — a spline fitted smoothly to noisy data
* Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline
* See also: List of numerical computational geometry topics
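A minimal Python sketch of De Casteljau's algorithm referenced above (the quadratic Bézier example is illustrative):

```python
# De Casteljau's algorithm: evaluate a Bezier curve at parameter t by
# repeated linear interpolation between consecutive control points.
def de_casteljau(points, t):
    pts = list(points)                        # working copy of control points
    while len(pts) > 1:                       # one round of lerps per level
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier curve with control points (0,0), (1,2), (2,0):
print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # (1.0, 1.0)
```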


Trigonometric interpolation

Trigonometric interpolation — interpolation by trigonometric polynomials
* Discrete Fourier transform — can be viewed as trigonometric interpolation at equidistant points (see the sketch after this list)
** Relations between Fourier transforms and Fourier series
* Fast Fourier transform (FFT) — a fast method for computing the discrete Fourier transform
** Bluestein's FFT algorithm
** Bruun's FFT algorithm
** Cooley–Tukey FFT algorithm
** Split-radix FFT algorithm — variant of Cooley–Tukey that uses a blend of radices 2 and 4
** Goertzel algorithm
** Prime-factor FFT algorithm
** Rader's FFT algorithm
** Bit-reversal permutation — particular permutation of vectors with 2^''m'' entries used in many FFTs
** Butterfly diagram
** Twiddle factor — the trigonometric constant coefficients that are multiplied by the data
** Cyclotomic fast Fourier transform — for FFT over finite fields
** Methods for computing discrete convolutions with finite impulse response filters using the FFT:
*** Overlap–add method
*** Overlap–save method
* Sigma approximation
* Dirichlet kernel — convolving any function with the Dirichlet kernel yields its trigonometric interpolant
* Gibbs phenomenon
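A short Python/NumPy sketch of the "DFT as trigonometric interpolation" view above (the test function and N = 8 are illustrative; this simple one-sided-frequency form is only claimed to match the samples exactly at the nodes):

```python
# Sample a periodic function at N equidistant points, take the DFT, and
# evaluate the resulting trigonometric polynomial: at the sample nodes
# it reproduces the data exactly.
import numpy as np

N = 8
x = 2 * np.pi * np.arange(N) / N
samples = np.exp(np.sin(x))             # any smooth periodic test function
c = np.fft.fft(samples) / N             # DFT coefficients c_k

def trig_interp(t):
    k = np.arange(N)
    return np.sum(c * np.exp(1j * k * t)).real

print(trig_interp(x[3]), samples[3])    # the interpolant matches at the nodes
```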


Other interpolants

* Simple rational approximation
** Polynomial and rational function modeling — comparison of polynomial and rational interpolation
* Wavelet
** Continuous wavelet
** Transfer matrix
** See also: List of functional analysis topics, List of wavelet-related transforms
* Inverse distance weighting — weighted average of the values at known scattered points, with weights decaying with distance (see the sketch after this list)
* Radial basis function (RBF) — a function of the form ''f''(''x'') = ''φ''(|''x'' − ''x''0|)
** Polyharmonic spline — a commonly used radial basis function
** Thin plate spline — a specific polyharmonic spline: ''r''² log ''r''
** Hierarchical RBF
* Subdivision surface — constructed by recursively subdividing a piecewise linear interpolant
** Catmull–Clark subdivision surface
** Doo–Sabin subdivision surface
** Loop subdivision surface
* Slerp (spherical linear interpolation) — interpolation between two points on a sphere
** Generalized quaternion interpolation — generalizes slerp for interpolation between more than two quaternions
* Irrational base discrete weighted transform
* Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound
** Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite
* Multivariate interpolation — the function being interpolated depends on more than one variable
** Barnes interpolation — method for two-dimensional functions using Gaussians common in meteorology
** Coons surface — combination of linear interpolation and bilinear interpolation
** Lanczos resampling — based on convolution with a sinc function
** Natural neighbor interpolation — weights based on the areas the query point would take from neighbouring Voronoi cells
** Nearest neighbor value interpolation
** PDE surface
** Transfinite interpolation — constructs function on planar domain given its values on the boundary
** Trend surface analysis — based on low-order polynomials of spatial coordinates; uses scattered observations
** Methods based on polynomials are listed under ''Polynomial interpolation''
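Inverse distance weighting is simple enough to show in full. A minimal Python sketch (the function name and arguments are illustrative): it returns Shepard's weighted average with weights 1/''d''^''p'':

def idw(points, values, q, power=2.0):
    """Shepard's inverse distance weighting: interpolate at query point q
    from scattered samples. points are coordinate tuples."""
    num = den = 0.0
    for p, v in zip(points, values):
        d2 = sum((pi - qi) ** 2 for pi, qi in zip(p, q))
        if d2 == 0.0:
            return v  # query coincides with a sample: return it exactly
        w = d2 ** (-power / 2.0)  # weight 1 / distance**power
        num += w * v
        den += w
    return num / den

For example, idw([(0.0, 0.0), (1.0, 0.0)], [1.0, 3.0], (0.25, 0.0)) returns a value between the two samples, pulled toward the nearer one.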


Approximation theory

Approximation theory
* Orders of approximation
* Lebesgue's lemma
* Curve fitting
** Vector field reconstruction
* Modulus of continuity — measures smoothness of a function
* Least squares (function approximation) — minimizes the error in the L2-norm
* Minimax approximation algorithm — minimizes the maximum error over an interval (the L∞-norm)
** Equioscillation theorem — characterizes the best approximation in the L∞-norm
* Unisolvent point set — function from given function space is determined uniquely by values on such a set of points
* Stone–Weierstrass theorem — continuous functions can be approximated uniformly by polynomials, or certain other function spaces
* Approximation by polynomials:
** Linear approximation
** Bernstein polynomial — basis of polynomials useful for approximating a function (see the sketch after this list)
** Bernstein's constant — error when approximating |''x''| by a polynomial
** Remez algorithm — for constructing the best polynomial approximation in the L∞-norm
** Bernstein's inequality (mathematical analysis) — bound on maximum of derivative of polynomial in unit disk
** Mergelyan's theorem — generalization of Stone–Weierstrass theorem for polynomials
** Müntz–Szász theorem — variant of Stone–Weierstrass theorem for polynomials if some coefficients have to be zero
** Bramble–Hilbert lemma — upper bound on Lp error of polynomial approximation in multiple dimensions
** Discrete Chebyshev polynomials — polynomials orthogonal with respect to a discrete measure
** Favard's theorem — polynomials satisfying suitable 3-term recurrence relations are orthogonal polynomials
* Approximation by Fourier series / trigonometric polynomials:
** Jackson's inequality — upper bound for best approximation by a trigonometric polynomial
*** Bernstein's theorem (approximation theory) — a converse to Jackson's inequality
** Fejér's theorem — Cesàro means of partial sums of Fourier series converge uniformly for continuous periodic functions
** Erdős–Turán inequality — bounds distance between probability and Lebesgue measure in terms of Fourier coefficients
* Different approximations:
** Moving least squares
** Padé approximant — "best" approximation of a function near a specific point by a rational function of given order
*** Padé table — table of Padé approximants
** Hartogs–Rosenthal theorem — continuous functions can be approximated uniformly by rational functions on a set of Lebesgue measure zero
** Szász–Mirakyan operator — approximation by ''e''^(−''nx'') ''x''^''k'' on a semi-infinite interval
** Szász–Mirakjan–Kantorovich operator
** Baskakov operator — generalizes Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators
** Favard operator — approximation by sums of Gaussians
* Surrogate model — application: replacing a function that is hard to evaluate by a simpler function
* Constructive function theory — field that studies connection between degree of approximation and smoothness
* Universal differential equation — differential–algebraic equation whose solutions can approximate any continuous function
* Fekete problem — find ''N'' points on a sphere that minimize some kind of energy
* Carleman's condition — condition guaranteeing that a measure is uniquely determined by its moments
* Krein's condition — condition that exponential sums are dense in weighted L2 space
* Lethargy theorem — about distance of points in a metric space from members of a sequence of subspaces
* Wirtinger's representation and projection theorem
* Journals:
** Constructive Approximation
** Journal of Approximation Theory
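The Bernstein construction mentioned above is easy to make concrete. A minimal Python sketch (illustrative, not a library routine) evaluating the degree-''n'' Bernstein polynomial of a function ''f'' on [0,1]:

from math import comb  # binomial coefficient, Python 3.8+

def bernstein_approx(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

By the Weierstrass approximation theorem, bernstein_approx(f, n, x) converges uniformly to f(x) on [0,1] as n grows, though slowly compared with, say, Chebyshev interpolation.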


Miscellaneous

* Extrapolation
** Linear predictive analysis — linear extrapolation (see the sketch after this list)
* Unisolvent functions — functions for which the interpolation problem has a unique solution
* Regression analysis
** Isotonic regression
* Curve-fitting compaction
* Interpolation (computer graphics)
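First-order extrapolation amounts to extending the line through the last two observations. A minimal Python sketch (names are illustrative):

def linear_extrapolate(x0, y0, x1, y1, x):
    """First-order extrapolation: extend the line through (x0, y0), (x1, y1)."""
    slope = (y1 - y0) / (x1 - x0)
    return y1 + slope * (x - x1)

This is the "if it has been changing at this rate, it will keep changing at this rate" assumption, reliable only in the short term.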


Finding roots of nonlinear equations

:''See #Numerical linear algebra for linear equations''
Root-finding algorithm — algorithms for solving the equation ''f''(''x'') = 0
* General methods:
** Bisection method — simple and robust; linear convergence (see the sketch after this list)
*** Lehmer–Schur algorithm — variant for complex functions
** Fixed-point iteration
** Newton's method — based on linear approximation around the current iterate; quadratic convergence
*** Kantorovich theorem — gives a region around solution such that Newton's method converges
*** Newton fractal — indicates which initial condition converges to which root under Newton iteration
*** Quasi-Newton method — uses an approximation of the Jacobian:
**** Broyden's method — uses a rank-one update for the Jacobian
**** Symmetric rank-one — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian
**** Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix remains positive definite
**** Broyden–Fletcher–Goldfarb–Shanno algorithm — rank-two update of the Jacobian in which the matrix remains positive definite
**** Limited-memory BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems
*** Steffensen's method — uses divided differences instead of the derivative
** Secant method — based on linear interpolation at last two iterates
** False position method — secant method with ideas from the bisection method
** Muller's method — based on quadratic interpolation at last three iterates
** Sidi's generalized secant method — higher-order variants of secant method
** Inverse quadratic interpolation — similar to Muller's method, but interpolates the inverse
** Brent's method — combines bisection method, secant method and inverse quadratic interpolation
** Ridders' method — fits a linear function times an exponential to last two iterates and their midpoint
** Halley's method — uses ''f'', ''f''′ and ''f''″; achieves cubic convergence
** Householder's method — uses first ''d'' derivatives to achieve order ''d'' + 1; generalizes Newton's and Halley's method
* Methods for polynomials:
** Aberth method
** Bairstow's method
** Durand–Kerner method
** Graeffe's method
** Jenkins–Traub algorithm — fast, reliable, and widely used
** Laguerre's method
** Splitting circle method
* Analysis:
** Wilkinson's polynomial — illustrates that the location of polynomial roots can be very sensitive to perturbations of the coefficients
* Numerical continuation — tracking a root as one parameter in the equation changes
** Piecewise linear continuation
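The bisection method at the top of this list is the standard example of a guaranteed, if slow, root-finder. A minimal Python sketch (names and tolerances are illustrative); it assumes ''f'' changes sign on [''a'', ''b'']:

def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Bisection: f must have opposite signs at a and b; linear convergence."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        fm = f(m)
        if fm == 0.0 or (b - a) < tol:
            return m
        if fa * fm < 0:
            b = m            # root lies in [a, m]
        else:
            a, fa = m, fm    # root lies in [m, b]
    return 0.5 * (a + b)

Each step halves the bracket, so the error shrinks by a factor of two per iteration, which is exactly the linear convergence noted above.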


Optimization

Mathematical optimization — algorithms for finding maxima or minima of a given function


Basic concepts

* Active set
* Candidate solution
* Constraint (mathematics)
** Constrained optimization — studies optimization problems with constraints
** Binary constraint — a constraint that involves exactly two variables
* Corner solution
* Feasible region — contains all solutions that satisfy the constraints but may not be optimal
* Global optimum and Local optimum
* Maxima and minima
* Slack variable — turns an inequality constraint into an equality constraint plus a non-negativity condition (see the sketch after this list)
* Continuous optimization
* Discrete optimization
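To make the slack-variable idea concrete, here is a toy illustration in Python (the constraint and function name are invented for the example):

# The inequality 2x + 3y <= 12 becomes the equality 2x + 3y + s = 12
# with a new variable s >= 0. The slack measures how far the constraint
# is from being active.
def slack(x, y):
    return 12.0 - (2.0 * x + 3.0 * y)  # feasible iff slack(x, y) >= 0

A point where the slack is zero lies on the constraint boundary; the constraint is then said to be active.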


Linear programming

Linear programming (also treats ''integer programming'') — objective function and constraints are linear
* Algorithms for linear programming:
** Simplex algorithm (a usage sketch follows this list)
*** Bland's rule — rule to avoid cycling in the simplex method
*** Klee–Minty cube — perturbed (hyper)cube; simplex method has exponential complexity on such a domain
*** Criss-cross algorithm — similar to the simplex algorithm
*** Big M method — variation of simplex algorithm for problems with both "less than" and "greater than" constraints
** Interior point method
*** Ellipsoid method
*** Karmarkar's algorithm
*** Mehrotra predictor–corrector method
** Column generation
** k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)
* Linear complementarity problem
* Decompositions:
** Benders' decomposition
** Dantzig–Wolfe decomposition
** Theory of two-level planning
** Variable splitting
* Basic solution (linear programming) — solution at vertex of feasible region
* Fourier–Motzkin elimination — eliminates variables from a system of linear inequalities
* Hilbert basis (linear programming) — set of integer vectors in a convex cone which generate all integer vectors in the cone
* LP-type problem
* Linear inequality
* Vertex enumeration problem — list all vertices of the feasible set
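As a usage sketch for the solvers above: assuming SciPy is available, its linprog routine solves small LPs directly (the particular problem here is invented for illustration):

from scipy.optimize import linprog

# maximize x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0;
# linprog minimizes, so the objective is negated
res = linprog(c=[-1.0, -2.0],
              A_ub=[[1.0, 1.0], [1.0, 0.0]],
              b_ub=[4.0, 3.0],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal point (0, 4) and maximal value 8

Writing the problem in the standard form min cᵀx subject to A_ub x ≤ b_ub is the only real work; the solver handles the rest.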


Convex optimization

Convex optimization
* Quadratic programming
** Linear least squares (mathematics)
** Total least squares
** Frank–Wolfe algorithm
** Sequential minimal optimization — breaks up large QP problems into a series of smallest possible QP problems
** Bilinear program
* Basis pursuit — minimize L1-norm of vector subject to linear constraints
** Basis pursuit denoising (BPDN) — regularized version of basis pursuit
*** In-crowd algorithm — algorithm for solving basis pursuit denoising
* Linear matrix inequality
* Conic optimization
** Semidefinite programming
** Second-order cone programming
** Sum-of-squares optimization
** Quadratic programming (see above)
* Bregman method — row-action method for strictly convex optimization problems
* Proximal gradient method — uses a splitting of the objective function into a sum of possibly non-differentiable pieces (an ISTA sketch follows this list)
* Subgradient method — extension of steepest descent for problems with a non-differentiable objective function
* Biconvex optimization — generalization where objective function and constraint set can be biconvex
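Basis pursuit denoising is a standard target for the proximal gradient method: the smooth piece is the least-squares term and the proximal operator of the L1 term is soft thresholding. A minimal ISTA sketch in plain Python (function names and the fixed step size are illustrative; practical code would use NumPy and a line search):

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (componentwise soft thresholding)."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def ista(A, y, lam, step, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    A is a list of rows; step should be <= 1/L, L the largest
    eigenvalue of A^T A."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]  # Ax - y
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]         # A^T (Ax - y)
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x

The gradient step handles the smooth term; the thresholding step handles the non-differentiable L1 term exactly, which is the splitting the entry above refers to.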


Nonlinear programming

Nonlinear programming — the most general optimization problem in the usual framework
* Special cases of nonlinear programming:
** See ''Linear programming'' and ''Convex optimization'' above
** Geometric programming — problems involving signomials or posynomials
*** Signomial — similar to polynomials, but exponents need not be integers
*** Posynomial — a signomial with positive coefficients
** Quadratically constrained quadratic program
** Linear-fractional programming — objective is ratio of linear functions, constraints are linear
*** Fractional programming — objective is ratio of nonlinear functions, constraints are linear
** Nonlinear complementarity problem (NCP) — find ''x'' such that ''x'' ≥ 0, ''f''(''x'') ≥ 0 and ''x''^T ''f''(''x'') = 0
** Least squares — the objective function is a sum of squares
*** Non-linear least squares
*** Gauss–Newton algorithm
**** BHHH algorithm — variant of Gauss–Newton in econometrics
**** Generalized Gauss–Newton method — for constrained nonlinear least-squares problems
*** Levenberg–Marquardt algorithm
*** Iteratively reweighted least squares (IRLS) — solves a weighted least-squares problem at every iteration
*** Partial least squares — statistical techniques similar to principal components analysis
**** Non-linear iterative partial least squares (NIPALS)
** Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities
** Univariate optimization:
*** Golden section search
*** Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
* General algorithms:
** Concepts:
*** Descent direction
*** Guess value — the initial guess for a solution with which an algorithm starts
*** Line search
**** Backtracking line search (see the sketch after this list)
**** Wolfe conditions
** Gradient method — method that uses the gradient as the search direction
*** Gradient descent
**** Stochastic gradient descent
*** Landweber iteration — mainly used for ill-posed problems
** Successive linear programming (SLP) — replace problem by a linear programming problem, solve that, and repeat
** Sequential quadratic programming (SQP) — replace problem by a quadratic programming problem, solve that, and repeat
** Newton's method in optimization
*** See also under ''Newton algorithm'' in the section ''Finding roots of nonlinear equations''
** Nonlinear conjugate gradient method
** Derivative-free methods:
*** Coordinate descent — move in one of the coordinate directions
**** Adaptive coordinate descent — adapt coordinate directions to objective function
**** Random coordinate descent — randomized version
*** Nelder–Mead method
*** Pattern search (optimization)
*** Powell's method — based on conjugate gradient descent
*** Rosenbrock methods — derivative-free method, similar to Nelder–Mead but with guaranteed convergence
** Augmented Lagrangian method — replaces constrained problems by unconstrained problems with a term added to the objective function
** Ternary search
** Tabu search
** Guided Local Search — modification of search algorithms which builds up penalties during a search
** Reactive search optimization (RSO) — the algorithm adapts its parameters automatically
** MM algorithm — majorize-minimization, a wide framework of methods
** Least absolute deviations
*** Expectation–maximization algorithm
**** Ordered subset expectation maximization
** Nearest neighbor search
** Space mapping — uses "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models
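Gradient descent with a backtracking (Armijo) line search combines two entries from this list. A minimal Python sketch for vectors stored as lists (names and constants are illustrative):

def backtracking_gradient_descent(f, grad, x, iters=100,
                                  alpha0=1.0, beta=0.5, c=1e-4):
    """Steepest descent; the step is shrunk by beta until the Armijo
    sufficient-decrease condition f(x - a*g) <= f(x) - c*a*||g||^2 holds."""
    for _ in range(iters):
        g = grad(x)
        gg = sum(gi * gi for gi in g)
        if gg < 1e-20:
            return x  # gradient is (numerically) zero: stationary point
        fx = f(x)
        alpha = alpha0
        while f([xi - alpha * gi for xi, gi in zip(x, g)]) > fx - c * alpha * gg:
            alpha *= beta
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

The backtracking loop is what keeps plain gradient descent from overshooting when the initial step alpha0 is too optimistic.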


Optimal control and infinite-dimensional optimization

Optimal control
* Pontryagin's minimum principle — infinite-dimensional version of Lagrange multipliers
** Costate equations — equation for the "Lagrange multipliers" in Pontryagin's minimum principle
** Hamiltonian (control theory) — minimum principle says that this function should be minimized
* Types of problems:
** Linear-quadratic regulator — system dynamics is a linear differential equation, objective is quadratic
** Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE with additive noise, objective is quadratic
*** Optimal projection equations — method for reducing dimension of LQG control problem
* Algebraic Riccati equation — matrix equation occurring in many optimal control problems (a scalar sketch follows this list)
* Bang–bang control — control that switches abruptly between two states
* Covector mapping principle
* Differential dynamic programming — uses locally-quadratic models of the dynamics and cost functions
* DNSS point — initial state for certain optimal control problems with multiple optimal solutions
* Legendre–Clebsch condition — second-order condition for solution of optimal control problem
* Pseudospectral optimal control
** Bellman pseudospectral method — based on Bellman's principle of optimality
** Chebyshev pseudospectral method — uses Chebyshev polynomials (of the first kind)
** Flat pseudospectral method — combines Ross–Fahroo pseudospectral method with differential flatness
** Gauss pseudospectral method — uses collocation at the Legendre–Gauss points
** Legendre pseudospectral method — uses Legendre polynomials
** Pseudospectral knotting method — generalization of pseudospectral methods in optimal control
** Ross–Fahroo pseudospectral method — class of pseudospectral methods including Chebyshev, Legendre and knotting
* Ross–Fahroo lemma — condition to make discretization and duality operations commute
* Ross' π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability
* Sethi model — optimal control problem modelling advertising

Infinite-dimensional optimization
* Semi-infinite programming — infinite number of variables and finite number of constraints, or the other way around
* Shape optimization, Topology optimization — optimization over a set of regions
** Topological derivative — derivative with respect to a change in the shape of the domain
* Generalized semi-infinite programming — finite number of variables, infinite number of constraints
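For discrete-time LQR the algebraic Riccati equation can be solved by simple fixed-point iteration. A minimal sketch for the scalar case (all names invented for the example; real code would call a proper DARE solver):

def scalar_dlqr_gain(a, b, q, r, iters=1000):
    """Iterate the scalar discrete-time Riccati equation
    p = q + a*p*a - (a*p*b)^2 / (r + b*p*b) to a fixed point,
    then return the LQR feedback gain k (control law u = -k*x)."""
    p = q
    for _ in range(iters):
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    return (b * p * a) / (r + b * p * b)

# Example: dynamics x' = x + u with unit state and control costs
# k = scalar_dlqr_gain(1.0, 1.0, 1.0, 1.0)  # about 0.618

With dynamics x⁺ = a·x + b·u and cost Σ(q·x² + r·u²), the optimal feedback u = −k·x minimizes the infinite-horizon cost.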


Uncertainty and randomness

* Approaches to deal with uncertainty:
** Markov decision process
** Partially observable Markov decision process
** Robust optimization
*** Wald's maximin model — rank decisions by their worst-case outcomes
** Scenario optimization — constraints are uncertain
** Stochastic approximation
** Stochastic optimization
** Stochastic programming
** Stochastic gradient descent
* Random optimization algorithms:
** Random search — choose a point randomly in a ball around the current iterate
** Simulated annealing (see the sketch after this list)
*** Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation
*** Great Deluge algorithm
*** Mean field annealing — deterministic variant of simulated annealing
** Bayesian optimization — treats objective function as a random function and places a prior over it
** Evolutionary algorithm
*** Differential evolution
*** Evolutionary programming
*** Genetic algorithm, Genetic programming
**** Genetic algorithms in economics
*** MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) — uses an evolutionary algorithm for every agent
*** Simultaneous perturbation stochastic approximation (SPSA)
** Luus–Jaakola
** Particle swarm optimization
** Stochastic tunneling
** Harmony search — mimics the improvisation process of musicians
** See also the section ''Monte Carlo method''
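Simulated annealing is short enough to sketch in full. A minimal Python version (the neighbor function and cooling schedule are left to the caller; names and defaults are illustrative):

import math, random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.995, iters=10000):
    """Minimize f: always accept improvements, accept worse moves with
    probability exp(-delta/T), and cool T geometrically."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x)          # random move proposed by the caller
        fy = f(y)
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

Early on, the high temperature lets the search escape local minima; as T decreases, the method degenerates into a greedy local search.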


Theoretical aspects

*
Convex analysis Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex minimization, a subdomain of optimization theory. Convex sets A subset C \subseteq X of som ...
— function ''f'' such that ''f''(''tx'' + (1 − ''t'')''y'') ≥ ''tf''(''x'') + (1 − ''t'')''f''(''y'') for ''t'' ∈ ,1**
Pseudoconvex function In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function that behaves like a convex function with respect to finding its local minima, but need not actually be convex. Informally, a di ...
— function ''f'' such that ∇''f'' · (''y'' − ''x'') ≥ 0 implies ''f''(''y'') ≥ ''f''(''x'') **
Quasiconvex function In mathematics, a quasiconvex function is a real-valued function defined on an interval or on a convex subset of a real vector space such that the inverse image of any set of the form (-\infty,a) is a convex set. For a function of a sing ...
— function ''f'' such that ''f''(''tx'' + (1 − ''t'')''y'') ≤ max(''f''(''x''), ''f''(''y'')) for ''t'' ∈ ,1**
Subderivative In mathematics, the subderivative, subgradient, and subdifferential generalize the derivative to convex functions which are not necessarily differentiable. Subderivatives arise in convex analysis, the study of convex functions, often in connectio ...
**
Geodesic convexity In mathematics — specifically, in Riemannian geometry — geodesic convexity is a natural generalization of convexity for sets and functions to Riemannian manifolds. It is common to drop the prefix "geodesic" and refer simply to "conve ...
— convexity for functions defined on a Riemannian manifold *
Duality (optimization) In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then th ...
**
Weak duality In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0. That means the solution to the dual (minimization) problem is ''always'' greater than or equal to the solution ...
— dual solution gives a bound on the primal solution ** Strong duality — primal and dual solutions are equivalent **
Shadow price A shadow price is the monetary value assigned to an abstract or intangible commodity which is not traded in the marketplace. This often takes the form of an externality. Shadow prices are also known as the recalculation of known market prices in o ...
** Dual cone and polar cone **
Duality gap In optimization problems in applied mathematics, the duality gap is the difference between the primal and dual solutions. If d^* is the optimal dual value and p^* is the optimal primal value then the duality gap is equal to p^* - d^*. This value ...
— difference between primal and dual solution **
Fenchel's duality theorem In mathematics, Fenchel's duality theorem is a result in the theory of convex functions named after Werner Fenchel. Let ''ƒ'' be a proper convex function on R''n'' and let ''g'' be a proper concave function on R''n''. Then, if regularity cond ...
— relates minimization problems with maximization problems of convex conjugates ** Perturbation function — any function which relates to primal and dual problems **
Slater's condition In mathematics, Slater's condition (or Slater condition) is a sufficient condition for strong duality to hold for a convex optimization problem, named after Morton L. Slater. Informally, Slater's condition states that the feasible region must ...
— sufficient condition for strong duality to hold in a convex optimization problem **
Total dual integrality In mathematical optimization, total dual integrality is a sufficient condition for the integrality of a polyhedron. Thus, the optimization of a linear objective over the integral points of such a polyhedron can be done using techniques from linear ...
— concept of duality for integer linear programming **
Wolfe duality In mathematical optimization, Wolfe duality, named after Philip Wolfe, is type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept a lower bound for a minimization problem can be f ...
— for when objective function and constraints are differentiable * Farkas' lemma * Karush–Kuhn–Tucker conditions (KKT) — first-order necessary conditions for a solution to be optimal, sufficient under convexity ** Fritz John conditions — variant of KKT conditions *
Lagrange multiplier In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function subject to equality constraints (i.e., subject to the condition that one or more equations have to be satisfied e ...
** Lagrange multipliers on Banach spaces *
Semi-continuity In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function f is upper (respectively, lower) semicontinuous at a point x_0 if, r ...
*
Complementarity theory A complementarity problem is a type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a function of two vector variables subject to certain requirements (constraints) which include: that the inner pr ...
— study of problems with constraints of the form ⟨''u'', ''v''⟩ = 0 ** Mixed complementarity problem *** Mixed linear complementarity problem *** Lemke's algorithm — method for solving (mixed) linear complementarity problems * Danskin's theorem — used in the analysis of minimax problems * Maximum theorem — the maximum and maximizer are continuous as function of parameters, under some conditions *
No free lunch in search and optimization In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same ...
*
Relaxation (approximation) In mathematical optimization and related fields, relaxation is a modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides information about ...
— approximating a given problem by an easier problem by relaxing some constraints ** Lagrangian relaxation **
Linear programming relaxation In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable. For example, in a 0–1 integer program, all constraints are of the form :x_i\in\. The relax ...
— ignoring the integrality constraints in a linear programming problem *
Self-concordant function In optimization, a self-concordant function is a function f:\mathbb{R} \rightarrow \mathbb{R} for which |f'''(x)| \leq 2 f''(x)^{3/2} ...
*
Reduced cost In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve (so increase for maximization problem, decrease for minimization problem) before it would be possible for a c ...
— cost for increasing a variable by a small amount * Hardness of approximation — computational complexity of getting an approximate solution
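To make weak and strong duality concrete: the sketch below solves a small linear program and its dual with SciPy's linprog (assuming SciPy is installed; the cost vector and constraints are invented for illustration). Weak duality guarantees the dual optimum never exceeds the primal optimum; for a feasible bounded LP strong duality holds, so the printed gap is numerically zero.

```python
import numpy as np
from scipy.optimize import linprog

# Primal:  min c^T x   subject to   A x >= b,  x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 5.0])

# linprog expects "<=" constraints, so A x >= b is negated.
primal = linprog(c, A_ub=-A, b_ub=-b)

# Dual:  max b^T y   subject to   A^T y <= c,  y >= 0,
# written as minimizing -b^T y.
dual = linprog(-b, A_ub=A.T, b_ub=c)

print("primal optimum p* =", primal.fun)
print("dual optimum   d* =", -dual.fun)
print("duality gap p* - d* =", primal.fun - (-dual.fun))   # ~0 for this LP
```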


Applications

*In geometry: **
Geometric median In geometry, the geometric median of a discrete set of sample points in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distances ...
— the point minimizing the sum of distances to a given set of points (a Weiszfeld-iteration sketch appears at the end of this section) ** Chebyshev center — the centre of the smallest ball containing a given set of points *In statistics: ** Iterated conditional modes — maximizing joint probability of Markov random field ** Response surface methodology — used in the design of experiments *
Automatic label placement Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels. The typical features depicted ...
*
Compressed sensing Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This ...
— reconstruct a signal from knowledge that it is sparse or compressible *
Cutting stock problem In operations research, the cutting-stock problem is the problem of cutting standard-sized pieces of stock material, such as paper rolls or sheet metal, into pieces of specified sizes while minimizing material wasted. It is an optimization problem ...
* Demand optimization *
Destination dispatch Destination dispatch is an optimization technique used for multi-elevator installations, in which passengers heading to the same destinations are grouped into the same elevators, thereby reducing waiting and travel times. Comparatively, the traditional ...
— an optimization technique for dispatching elevators *
Energy minimization In the field of computational chemistry, energy minimization (also called energy optimization, geometry minimization, or geometry optimization) is the process of finding an arrangement in space of a collection of atoms where, according to some com ...
*
Entropy maximization The principle of maximum entropy states that the probability distribution which best represents the current state of knowledge about a system is the one with largest entropy, in the context of precisely stated prior data (such as a proposition ...
*
Highly optimized tolerance In applied mathematics Applied mathematics is the application of mathematical methods by different fields such as physics, engineering, medicine, biology, finance, business, computer science, and industry. Thus, applied mathematics is a com ...
*
Hyperparameter optimization In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process. By contrast, the va ...
* Inventory control problem **
Newsvendor model The newsvendor (or newsboy or single-periodWilliam J. Stevenson, Operations Management. 10th edition, 2009; page 581 or salvageable) model is a mathematical model in operations management and applied economics used to determine optimal inventory l ...
** Extended newsvendor model ** Assemble-to-order system * Linear programming decoding *
Linear search problem In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman and independently considered by Anatole Beck. The problem "An immobile hider is located on the real line according to a ...
— find a point on a line by moving along the line *
Low-rank approximation In mathematics, low-rank approximation is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subjec ...
— find the best approximation subject to the constraint that the rank of some matrix is smaller than a given number * Meta-optimization — optimization of the parameters in an optimization method * Multidisciplinary design optimization * Optimal computing budget allocation — maximize the overall simulation efficiency for finding an optimal decision *
Paper bag problem In geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made out of two pieces of material which can bend ...
*
Process optimization Process optimization is the discipline of adjusting a process so as to optimize (make the best or most effective use of) some specified set of parameters without violating some constraint. The most common goals are minimizing cost and maximizing ...
*
Recursive economics Recursive economics is a branch of modern economics based on a paradigm of individuals making a series of two-period optimization decisions over time. Differences between recursive and neoclassical paradigms The neoclassical model assumes a one-per ...
— individuals make a series of two-period optimization decisions over time. *
Stigler diet The Stigler diet is an optimization problem named for George Stigler, a 1982 Nobel Laureate in economics, who posed the following problem: The nutrient RDAs required to be met in Stigler’s experiment were calories, protein, calcium, iron, as well ...
* Space allocation problem * Stress majorization * Trajectory optimization * Transportation theory * Wing-shape optimization
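As a sketch for the geometric median entry above: Weiszfeld's fixed-point iteration re-weights the sample points by the inverse of their distance to the current estimate. The sample points, tolerance and division-by-zero safeguard below are illustrative choices.

```python
import numpy as np

def geometric_median(points, tol=1e-9, max_iter=1000):
    """Weiszfeld's iteration for the point minimizing summed distances."""
    x = points.mean(axis=0)                  # start at the centroid
    for _ in range(max_iter):
        d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)
        w = 1.0 / d                          # inverse-distance weights
        x_new = (points * w[:, None]).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median(pts))
```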


Miscellaneous

*
Combinatorial optimization Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combi ...
*
Dynamic programming Dynamic programming is both a mathematical optimization method and a computer programming method. The method was developed by Richard Bellman in the 1950s and has found applications in numerous fields, from aerospace engineering to economics. ...
**
Bellman equation A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem at a certain point in time ...
** Hamilton–Jacobi–Bellman equation — continuous-time analogue of Bellman equation **
Backward induction Backward induction is the process of reasoning backwards in time, from the end of a problem or situation, to determine a sequence of optimal actions. It proceeds by examining the last point at which a decision is to be made and then identifying wha ...
— solving dynamic programming problems by reasoning backwards in time **
Optimal stopping In mathematics, the theory of optimal stopping or early stopping : (For French translation, secover storyin the July issue of ''Pour la Science'' (2009).) is concerned with the problem of choosing a time to take a particular action, in order to ...
— choosing the optimal time to take a particular action ***
Odds algorithm The odds algorithm (or Bruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the ''odds strategy'', and the importance ...
*** Robbins' problem *
Global optimization Global optimization is a branch of applied mathematics and numerical analysis that attempts to find the global minima or maxima of a function or a set of functions on a given set. It is usually described as a minimization problem because the max ...
: **
BRST algorithm Boender-Rinnooy-Stougie-Timmer algorithm (BRST) is an optimization algorithm suitable for finding global optimum of black box functions. In their paper Boender ''et al.'' describe their method as a stochastic method involving a combination of sam ...
** MCS algorithm *
Multi-objective optimization Multi-objective optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, multiattribute optimization or Pareto optimization) is an area of multiple criteria decision making that is concerned with ...
— there are multiple conflicting objectives **
Benson's algorithm Benson's algorithm, named after Harold Benson, is a method for solving multi-objective linear programming problems and vector linear programs. This works by finding the "efficient extreme points in the outcome set". The primary concept in Bens ...
— for linear
vector optimization Vector optimization is a subarea of mathematical optimization where optimization problems with a vector-valued objective functions are optimized with respect to a given partial ordering and subject to certain constraints. A multi-objective optimiz ...
problems *
Bilevel optimization Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly ref ...
— studies problems in which one problem is embedded in another *
Optimal substructure In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems. This property is used to determine the usefulness of greedy algorithms for a problem.{{cite boo ...
* Dykstra's projection algorithm — finds a point in the intersection of two convex sets *Algorithmic concepts: **
Barrier function In constrained Mathematical optimization, optimization, a field of mathematics, a barrier function is a continuous function whose value on a point increases to infinity as the point approaches the boundary of the Candidate solution, feasible region ...
**
Penalty method Penalty methods are a certain class of algorithms for solving constrained optimization problems. A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions ideally converge to the solution o ...
**
Trust region In mathematical optimization, a trust region is the subset of the region of the objective function that is approximated using a model function (often a quadratic). If an adequate model of the objective function is found within the trust region, the ...
*
Test functions for optimization In applied mathematics, test functions, known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as: * Convergence rate. * Precision. * Robustness. * General performance. Here some test functions are ...
: ** Rosenbrock function — two-dimensional function with a banana-shaped valley **
Himmelblau's function In mathematical optimization, Himmelblau's function is a multi-modal function, used to test the performance of optimization algorithms. The function is defined by: : f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2.\quad It has one local maximum at x = - ...
— two-dimensional with four local minima, defined by f(x, y) = (x^2+y-11)^2 + (x+y^2-7)^2 ** Rastrigin function — two-dimensional function with many local minima ** Shekel function — multimodal and multidimensional *
Mathematical Optimization Society The Mathematical Optimization Society (MOS), known as the Mathematical Programming Society until 2010 ...


Numerical integration

Numerical integration In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral, and by extension, the term is also sometimes used to describe the numerical solution of differential equations ...
— the numerical evaluation of an integral *
Rectangle method In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after nineteenth century German mathematician Bernhard Riemann. One very common application is approximating the area of functions or lin ...
— first-order method, based on (piecewise) constant approximation (compared with the trapezoidal and Simpson rules in the sketch at the end of this section) *
Trapezoidal rule In calculus, the trapezoidal rule (also known as the trapezoid rule or trapezium rule; see Trapezoid for more information on terminology) is a technique for approximating the definite integral. \int_a^b f(x) \, dx. The trapezoidal rule works by ...
— second-order method, based on (piecewise) linear approximation *
Simpson's rule In numerical integration, Simpson's rules are several approximations for definite integrals, named after Thomas Simpson (1710–1761). The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads \int_a^b f(x) \, ...
— fourth-order method, based on (piecewise) quadratic approximation **
Adaptive Simpson's method Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962. It is probably the first recursive adaptive algorithm for numerical integration to appear in print,For an earlier, ...
*
Boole's rule In mathematics, Boole's rule, named after George Boole, is a method of numerical integration. Formula Simple Boole's Rule It approximates an integral: : \int_^ f(x)\,dx by using the values of at five equally spaced points: : \begin & x_0 ...
— sixth-order method, based on the values at five equidistant points *
Newton–Cotes formulas In numerical analysis, the Newton–Cotes formulas, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulas for numerical integration (also called ''quadrature'') based on evaluating the integrand at ...
— generalizes the above methods *
Romberg's method In numerical analysis, Romberg's method is used to estimate the definite integral \int_a^b f(x) \, dx by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate a trian ...
— Richardson extrapolation applied to trapezium rule *
Gaussian quadrature In numerical analysis, a quadrature rule is an approximation of the definite integral of a function, usually stated as a weighted sum of function values at specified points within the domain of integration. (See numerical integration for mor ...
— highest possible degree with given number of points ** Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − ''x''²)^(±1/2) on [−1, 1] **
Gauss–Hermite quadrature In numerical analysis, Gauss–Hermite quadrature is a form of Gaussian quadrature for approximating the value of integrals of the following kind: :\int_^ e^ f(x)\,dx. In this case :\int_^ e^ f(x)\,dx \approx \sum_^n w_i f(x_i) where ''n'' is ...
— extension of Gaussian quadrature for integrals with weight exp(−''x''²) on (−∞, ∞) **
Gauss–Jacobi quadrature In numerical analysis, Gauss–Jacobi quadrature (named after Carl Friedrich Gauss and Carl Gustav Jacob Jacobi) is a method of numerical quadrature based on Gaussian quadrature. Gauss–Jacobi quadrature can be used to approximate integrals of t ...
— extension of Gaussian quadrature for integrals with weight (1 − ''x'')^''α'' (1 + ''x'')^''β'' on [−1, 1] **
Gauss–Laguerre quadrature In numerical analysis Gauss–Laguerre quadrature (named after Carl Friedrich Gauss and Edmond Laguerre) is an extension of the Gaussian quadrature method for approximating the value of integrals of the following kind: :\int_^ e^ f(x)\,dx. In th ...
— extension of Gaussian quadrature for integrals with weight exp(−''x'') on [0, ∞) ** Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature ** Gauss–Kronrod rules * Tanh-sinh quadrature — variant of Gaussian quadrature which works well with singularities at the end points *
Clenshaw–Curtis quadrature Clenshaw–Curtis quadrature and Fejér quadrature are methods for numerical integration, or "quadrature", that are based on an expansion of the integrand in terms of Chebyshev polynomials. Equivalently, they employ a change of variables x = \cos ...
— based on expanding the integrand in terms of Chebyshev polynomials *
Adaptive quadrature Adaptive quadrature is a numerical integration method in which the integral of a function f(x) is approximated using static quadrature rules on adaptively refined subintervals of the region of integration. Generally, adaptive algorithms are just ...
— adapting the subintervals in which the integration interval is divided depending on the integrand *
Monte Carlo integration In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand a ...
— takes random samples of the integrand **''See also #Monte Carlo method'' * Quantized state systems method (QSS) — based on the idea of state quantization *
Lebedev quadrature In numerical analysis, Lebedev quadrature, named after Vyacheslav Ivanovich Lebedev, is an approximation to the surface integral of a function over a three-dimensional sphere. The grid is constructed so to have octahedral rotation and inversion sy ...
— uses a grid on a sphere with octahedral symmetry * Sparse grid * Coopmans approximation *
Numerical differentiation In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or function subroutine using values of the function and perhaps other knowledge about the function. Finite differences The simp ...
— estimating derivatives from function values **
Numerical smoothing and differentiation
** Adjoint state method — approximates gradient of a function in an optimization problem *
Euler–Maclaurin formula In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using ...
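The rectangle, trapezoidal and Simpson rules above are easy to compare numerically. The sketch below uses the midpoint variant of the rectangle rule and integrates sin on [0, π], whose exact value is 2; the test integrand and the step counts are illustrative. Halving the step size should cut the midpoint and trapezoid errors by roughly 4 and the Simpson error by roughly 16.

```python
import numpy as np

def midpoint(f, a, b, n):
    """Composite rectangle (midpoint) rule: piecewise constant."""
    h = (b - a) / n
    return h * f(a + h * (np.arange(n) + 0.5)).sum()

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule: piecewise linear."""
    y = f(np.linspace(a, b, n + 1))
    return (b - a) / n * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(f, a, b, n):
    """Composite Simpson's rule (n even): piecewise quadratic."""
    y = f(np.linspace(a, b, n + 1))
    h = (b - a) / n
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

exact = 2.0   # integral of sin over [0, pi]
for n in (4, 8, 16):
    print(n, abs(midpoint(np.sin, 0, np.pi, n) - exact),
             abs(trapezoid(np.sin, 0, np.pi, n) - exact),
             abs(simpson(np.sin, 0, np.pi, n) - exact))
```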


Numerical methods for ordinary differential equations

Numerical methods for ordinary differential equations Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also ...
— the numerical solution of ordinary differential equations (ODEs) *
Euler method In mathematics and computational science, the Euler method (also called forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value. It is the most basic explicit m ...
— the most basic method for solving an ODE (contrasted with a fourth-order Runge–Kutta method in the sketch at the end of this section) *
Explicit and implicit methods Explicit and implicit methods are approaches used in numerical analysis for obtaining numerical approximations to the solutions of time-dependent ordinary and partial differential equations, as is required in computer simulations of physical pro ...
— implicit methods need to solve an equation at every step *
Backward Euler method In numerical analysis and scientific computing, the backward Euler method (or implicit Euler method) is one of the most basic numerical methods for the solution of ordinary differential equations. It is similar to the (standard) Euler method, but d ...
— implicit variant of the Euler method *
Trapezoidal rule In calculus, the trapezoidal rule (also known as the trapezoid rule or trapezium rule; see Trapezoid for more information on terminology) is a technique for approximating the definite integral. \int_a^b f(x) \, dx. The trapezoidal rule works by ...
— second-order implicit method *
Runge–Kutta methods In numerical analysis, the Runge–Kutta methods ( ) are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of simultaneous nonlinear equations. The ...
— one of the two main classes of methods for initial-value problems **
Midpoint method In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation, : y'(t) = f(t, y(t)), \quad y(t_0) = y_0 . The explicit midpoint method is given by the formula ...
— a second-order method with two stages **
Heun's method In mathematics and computational science, Heun's method may refer to the improved or modified Euler's method (that is, the explicit trapezoidal rule), or a similar two-stage Runge–Kutta method. It is named after Karl Heun and is a numerical p ...
— either a second-order method with two stages, or a third-order method with three stages **
Bogacki–Shampine method The Bogacki–Shampine method is a method for the numerical solution of ordinary differential equations, that was proposed by Przemysław Bogacki and Lawrence F. Shampine in 1989 . The Bogacki–Shampine method is a Runge–Kutta method of order t ...
— a third-order method with four stages (FSAL) and an embedded fourth-order method **
Cash–Karp method In numerical analysis, the Cash–Karp method is a method for solving ordinary differential equations (ODEs). It was proposed by Professor Jeff R. Cash from Imperial College London and Alan H. Karp from IBM Scientific Center. The method is a memb ...
— a fifth-order method with six stages and an embedded fourth-order method **
Dormand–Prince method In numerical analysis, the Dormand–Prince (RKDP) method or DOPRI method, is an embedded method for solving ordinary differential equations . The method is a member of the Runge–Kutta family of ODE solvers. More specifically, it uses six func ...
— a fifth-order method with seven stages (FSAL) and an embedded fourth-order method **
Runge–Kutta–Fehlberg method In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is base ...
— a fifth-order method with six stages and an embedded fourth-order method **
Gauss–Legendre method In numerical analysis and scientific computing, the Gauss–Legendre methods are a family of numerical methods for ordinary differential equations. Gauss–Legendre methods are implicit Runge–Kutta methods. More specifically, they are collocation ...
— family of A-stable methods with optimal order based on Gaussian quadrature ** Butcher group — algebraic formalism involving rooted trees for analysing Runge–Kutta methods **
List of Runge–Kutta methods Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation :\frac = f(t, y). Explicit Runge–Kutta methods take the form :\begin y_ &= y_n + h \sum_^s b_i k_i \\ k_1 &= f(t_n, y_n), \\ k_2 &= f(t_n+c_2h ...
*
Linear multistep method Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The proce ...
— the other main class of methods for initial-value problems **
Backward differentiation formula The backward differentiation formula (BDF) is a family of implicit methods for the numerical integration of ordinary differential equations. They are linear multistep methods that, for a given function and time, approximate the derivative of that ...
— implicit methods of order 2 to 6; especially suitable for stiff equations ** Numerov's method — fourth-order method for equations of the form y'' = f(t,y) **
Predictor–corrector method In numerical analysis, predictor–corrector methods belong to a class of algorithms designed to integrate ordinary differential equationsto find an unknown function that satisfies a given differential equation. All such algorithms proceed in two s ...
— uses one method to approximate the solution and another to increase its accuracy *
General linear methods General linear methods (GLMs) are a large class of numerical methods used to obtain numerical solutions to ordinary differential equations. They include multistage Runge–Kutta methods that use intermediate collocation points, as well as linea ...
— a class of methods encapsulating linear multistep and Runge–Kutta methods * Bulirsch–Stoer algorithm — combines the midpoint method with Richardson extrapolation to attain arbitrary order * Exponential integrator — based on splitting the ODE into a linear part, which is solved exactly, and a nonlinear part *Methods designed for the solution of ODEs from classical physics: ** Newmark-beta method — based on the extended mean-value theorem **
Verlet integration Verlet integration () is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 1791 ...
— a popular second-order method **
Leapfrog integration In numerical analysis, leapfrog integration is a method for numerically integrating differential equations of the form \ddot x = \frac = A(x), or equivalently of the form \dot v = \frac = A(x), \;\dot x = \frac = v, particularly in the case of a d ...
— another name for Verlet integration **
Beeman's algorithm Beeman's algorithm is a method for numerically integrating ordinary differential equations of order 2, more specifically Newton's equations of motion \ddot x=A(x). It was designed to allow high numbers of particles in simulations of molecular dyna ...
— a two-step method extending the Verlet method **
Dynamic relaxation Dynamic relaxation is a numerical method, which, among other things, can be used to do " form-finding" for cable and fabric structures. The aim is to find a geometry where all forces are in equilibrium. In the past this was done by direct modellin ...
* Geometric integrator — a method that preserves some geometric structure of the equation **
Symplectic integrator In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical transformations. They are widely used in ...
— a method for the solution of Hamilton's equations that preserves the symplectic structure *** Variational integrator — symplectic integrators derived using the underlying variational principle ***
Semi-implicit Euler method In mathematics, the semi-implicit Euler method, also called symplectic Euler, semi-explicit Euler, Euler–Cromer, and Newton–Størmer–Verlet (NSV), is a modification of the Euler method for solving Hamilton's equations, a system of ordinary d ...
— variant of Euler method which is symplectic when applied to separable Hamiltonians ** Energy drift — phenomenon that energy, which should be conserved, drifts away due to numerical errors *Other methods for initial value problems (IVPs): **
Bi-directional delay line In mathematics, a bi-directional delay line is a numerical analysis technique used in computer simulation for solving ordinary differential equations by converting them to hyperbolic equations. In this way an explicit solution scheme is obtained wit ...
** Partial element equivalent circuit *Methods for solving two-point boundary value problems (BVPs): ** Shooting method ** Direct multiple shooting method — divides the interval into several subintervals and applies the shooting method on each subinterval *Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints: **
Constraint algorithm In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The gen ...
— for solving Newton's equations with constraints ** Pantelides algorithm — for reducing the index of a DAE *Methods for solving stochastic differential equations (SDEs): **
Euler–Maruyama method In Itô calculus, the Euler–Maruyama method (also called the Euler method) is a method for the approximate numerical solution of a stochastic differential equation (SDE). It is an extension of the Euler method for ordinary differential equations ...
— generalization of the Euler method for SDEs **
Milstein method In mathematics, the Milstein method is a technique for the approximate numerical solution of a stochastic differential equation. It is named after Grigori N. Milstein who first published it in 1974. Description Consider the autonomous Itō stoch ...
— a method with strong order one ** Runge–Kutta method (SDE) — generalization of the family of Runge–Kutta methods for SDEs *Methods for solving integral equations: **
Nyström method In numerical analysis, the Nyström method or quadrature method seeks the numerical solution of an integral equation by replacing the integral with a representative weighted sum. The continuous problem is broken into n discrete inter ...
— replaces the integral with a quadrature rule *Analysis: **
Truncation error (numerical integration) Truncation errors in numerical integration are of two kinds: * ''local truncation errors'' – the error caused by one iteration, and * ''global truncation errors'' – the cumulative error caused by many iterations. Definitions Suppose we have ...
— local and global truncation errors, and their relationships *** Lady Windermere's Fan (mathematics) — telescopic identity relating local and global truncation errors *
Stiff equation In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise ...
— roughly, an ODE for which unstable methods need a very short step size, but stable methods do not **
L-stability Within mathematics regarding differential equations, L-stability is a special case of A-stability, a property of Runge–Kutta methods for solving ordinary differential equations. A method is L-stable if it is A-stable and \phi(z) \to 0 as ...
— method is A-stable and stability function vanishes at infinity *
Adaptive stepsize In mathematics and numerical analysis, an adaptive step size is used in some methods for the numerical solution of ordinary differential equations (including the special case of numerical integration) in order to control the errors of the method ...
— automatically changing the step size when that seems advantageous * Parareal — a parallel-in-time integration algorithm
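A minimal sketch contrasting the forward Euler method with the classical fourth-order Runge–Kutta method on the test problem y′ = −y, y(0) = 1, whose exact solution is exp(−t); the problem and step counts are illustrative. The Euler error shrinks linearly with the step size, the RK4 error like its fourth power.

```python
import numpy as np

def euler(f, t, y, h, steps):
    """Explicit (forward) Euler: first order, one evaluation per step."""
    for _ in range(steps):
        y = y + h * f(t, y)
        t += h
    return y

def rk4(f, t, y, h, steps):
    """Classical fourth-order Runge-Kutta method."""
    for _ in range(steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y
for steps in (10, 20, 40):
    h = 1.0 / steps
    print(steps, abs(euler(f, 0.0, 1.0, h, steps) - np.exp(-1)),
                 abs(rk4(f, 0.0, 1.0, h, steps) - np.exp(-1)))
```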


Numerical methods for partial differential equations

Numerical partial differential equations Numerical methods for partial differential equations is the branch of numerical analysis that studies the numerical solution of partial differential equations (PDEs). In principle, specialized methods for hyperbolic, parabolic or elliptic parti ...
— the numerical solution of partial differential equations (PDEs)


Finite difference methods

Finite difference method In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial domain and time interval (if applicable) are ...
— based on approximating differential operators with difference operators *
Finite difference A finite difference is a mathematical expression of the form . If a finite difference is divided by , one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for t ...
— the discrete analogue of a differential operator **
Finite difference coefficient In mathematics, to approximate a derivative to an arbitrary order of accuracy, it is possible to use the finite difference. A finite difference can be central, forward or backward. Central finite difference This table contains the coefficients o ...
— table of coefficients of finite-difference approximations to derivatives **
Discrete Laplace operator In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph (having a finite number of edges and vertice ...
— finite-difference approximation of the Laplace operator *** Eigenvalues and eigenvectors of the second derivative — includes eigenvalues of discrete Laplace operator ***
Kronecker sum of discrete Laplacians In mathematics, the Kronecker sum of discrete Laplacians, named after Leopold Kronecker, is a discrete version of the separation of variables for the continuous Laplacian in a Cuboid#Rectangular_cuboid, rectangular cuboid domain. General form of t ...
— used for Laplace operator in multiple dimensions **
Discrete Poisson equation In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator. The discrete Poisson equation is frequently used in numerical an ...
— discrete analogue of the Poisson equation using the discrete Laplace operator * Stencil (numerical analysis) — the geometric arrangements of grid points affected by a basic step of the algorithm **
Compact stencil In mathematics, especially in the areas of numerical analysis called numerical partial differential equations, a compact stencil is a type of stencil that uses only nine nodes for its discretization method in two dimensions. It uses only the cent ...
— stencil which only uses a few grid points, usually only the immediate and diagonal neighbours *** Higher-order compact finite difference scheme **
Non-compact stencil In numerical mathematics, a non-compact stencil is a type of discretization method, where any node surrounding the node of interest may be used in the calculation. Its computational time grows with an increase of layers of nodes used. Non-compact ...
— any stencil that is not compact **
Five-point stencil In numerical analysis, given a square grid in one or two dimensions, the five-point stencil of a point in the grid is a stencil made up of the point itself together with its four "neighbors". It is used to write finite difference approximations to ...
— two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid *Finite difference methods for heat equation and related PDEs: ** FTCS scheme (forward-time central-space) — first-order explicit (sketched at the end of this section) ** Crank–Nicolson method — second-order implicit *Finite difference methods for hyperbolic PDEs like the wave equation: **
Lax–Friedrichs method The Lax–Friedrichs method, named after Peter Lax and Kurt O. Friedrichs, is a numerical method for the solution of hyperbolic partial differential equations based on finite differences. The method can be described as the FTCS (forward in time ...
— first-order explicit **
Lax–Wendroff method The Lax–Wendroff method, named after Peter Lax and Burton Wendroff, is a numerical method for the solution of hyperbolic partial differential equations, based on finite difference A finite difference is a mathematical expression of the form . ...
— second-order explicit ** MacCormack method — second-order explicit ** Upwind scheme *** Upwind differencing scheme for convection — first-order scheme for convection–diffusion problems **Lax–Wendroff theorem — if a conservative scheme for a hyperbolic system of conservation laws converges, it converges to a weak solution *Alternating direction implicit method (ADI) — update using the flow in ''x''-direction and then using flow in ''y''-direction *Nonstandard finite difference scheme *Specific applications: **Finite difference methods for option pricing **Finite-difference time-domain method — a finite-difference method for electrodynamics
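A minimal FTCS sketch for the heat equation u_t = α u_xx, assuming homogeneous Dirichlet boundary conditions and a sine initial profile (both illustrative). The scheme is stable only for r = αΔt/Δx² ≤ 1/2, the restriction derived by the Von Neumann analysis listed under Analysis below.

```python
import numpy as np

alpha, nx, nt = 1.0, 50, 2000
dx = 1.0 / nx
r = 0.4                               # stability parameter, must be <= 0.5
dt = r * dx**2 / alpha

x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)                 # initial condition, u = 0 at both ends

for _ in range(nt):
    # forward difference in time, central difference in space
    u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])

exact = np.exp(-np.pi**2 * alpha * nt * dt) * np.sin(np.pi * x)
print("max error:", np.abs(u - exact).max())
```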


Finite element methods, gradient discretisation methods

Finite element method — based on a discretization of the space of solutions (a one-dimensional sketch follows this list) *Gradient discretisation method — based on both the discretization of the solution and of its gradient *Finite element method in structural mechanics — a physical approach to finite element methods *Galerkin method — a finite element method in which the residual is orthogonal to the finite element space **Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not continuous *Rayleigh–Ritz method — a finite element method based on variational principles *Spectral element method — high-order finite element methods *hp-FEM — variant in which both the size and the order of the elements are automatically adapted *Examples of finite elements: **Bilinear quadrilateral element — also known as the Q4 element **Constant strain triangle element (CST) — also known as the T3 element **Quadratic quadrilateral element — also known as the Q8 element **Barsoum elements *Direct stiffness method — a particular implementation of the finite element method, often used in structural analysis *Trefftz method *Finite element updating *Extended finite element method — puts functions tailored to the problem in the approximation space *Functionally graded elements — elements for describing functionally graded materials *Superelement — particular grouping of finite elements, employed as a single element *Interval finite element method — combination of finite elements with interval arithmetic *Discrete exterior calculus — discrete form of the exterior calculus of differential geometry *Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations *Céa's lemma — solution in the finite-element space is an almost best approximation in that space of the true solution *Patch test (finite elements) — simple test for the quality of a finite element *MAFELAP (MAthematics of Finite ELements and APplications) — international conference held at Brunel University *NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis *Multiphase topology optimisation — technique based on finite elements for determining optimal composition of a mixture *Applied element method — for simulation of cracks and structural collapse *Wood–Armer method — structural analysis method based on finite elements used to design reinforcement for concrete slabs *Isogeometric analysis — integrates finite elements into conventional NURBS-based CAD design tools *Loubignac iteration *Stiffness matrix — finite-dimensional analogue of differential operator *Combination with meshfree methods: **Weakened weak form — form of a PDE that is weaker than the standard weak form **G space — functional space used in formulating the weakened weak form **Smoothed finite element method *Variational multiscale method *List of finite element software packages
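A one-dimensional Galerkin finite element sketch for −u″ = f on (0, 1) with u(0) = u(1) = 0, using piecewise-linear hat functions on a uniform mesh; the right-hand side and the lumped (trapezoidal) quadrature of the load vector are illustrative simplifications.

```python
import numpy as np

n = 50                                       # number of elements
h = 1.0 / n
nodes = np.linspace(0.0, 1.0, n + 1)
f = lambda x: np.pi**2 * np.sin(np.pi * x)   # exact solution: sin(pi x)

# Stiffness matrix for the interior nodes; each linear element
# contributes the 2x2 element matrix (1/h) * [[1, -1], [-1, 1]].
K = (2.0 * np.eye(n - 1)
     - np.eye(n - 1, k=1)
     - np.eye(n - 1, k=-1)) / h

# Load vector by lumped (trapezoidal) quadrature: b_i ~ h * f(x_i).
b = h * f(nodes[1:-1])

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K, b)              # Dirichlet values stay zero

print("max error:", np.abs(u - np.sin(np.pi * nodes)).max())
```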


Other methods

*Spectral method — based on the Fourier transformation (see the FFT differentiation sketch after this list) **Pseudo-spectral method *Method of lines — reduces the PDE to a large system of ordinary differential equations *Boundary element method (BEM) — based on transforming the PDE to an integral equation on the boundary of the domain ** Interval boundary element method — a version using interval arithmetic *Analytic element method — similar to the boundary element method, but the integral equation is evaluated analytically *Finite volume method — based on dividing the domain into many small domains; popular in computational fluid dynamics **Godunov's scheme — first-order conservative scheme for fluid flow, based on piecewise constant approximation **MUSCL scheme — second-order variant of Godunov's scheme **AUSM — advection upstream splitting method **Flux limiter — limits spatial derivatives (fluxes) in order to avoid spurious oscillations **Riemann solver — a solver for Riemann problems (a conservation law with piecewise constant data) **Properties of discretization schemes — finite volume methods can be conservative, bounded, etc. *Discrete element method — a method in which the elements can move freely relative to each other **Extended discrete element method — adds properties such as strain to each particle **Movable cellular automaton — combination of cellular automata with discrete elements *Meshfree methods — do not use a mesh, but use a particle view of the field **Discrete least squares meshless method — based on minimization of weighted summation of the squared residual **Diffuse element method **Finite pointset method — represents the continuum by a point cloud **Moving Particle Semi-implicit Method **Method of fundamental solutions (MFS) — represents solution as linear combination of fundamental solutions **Variants of MFS with source points on the physical boundary: ***Boundary knot method (BKM) ***Boundary particle method (BPM) ***Regularized meshless method (RMM) ***Singular boundary method (SBM) *Methods designed for problems from electromagnetics: **Finite-difference time-domain method — a finite-difference method **Rigorous coupled-wave analysis — semi-analytical Fourier-space method based on Floquet's theorem **Transmission-line matrix method (TLM) — based on analogy between electromagnetic field and mesh of transmission lines **Uniform theory of diffraction — specifically designed for scattering problems *Particle-in-cell — used especially in fluid dynamics **Multiphase particle-in-cell method — considers solid particles as both numerical particles and fluid *High-resolution scheme *Shock capturing method *Vorticity confinement — for vortex-dominated flows in fluid dynamics, similar to shock capturing *Split-step method *Fast marching method *Orthogonal collocation *Lattice Boltzmann methods — for the solution of the Navier–Stokes equations *Roe solver — for the solution of the Euler equations *Relaxation (iterative method) — a method for solving elliptic PDEs by converting them to evolution equations *Broad classes of methods: **Mimetic methods — methods that respect in some sense the structure of the original problem **Multiphysics — models consisting of various submodels with different physics **Immersed boundary method — for simulating elastic structures immersed within fluids *Multisymplectic integrator — extension of symplectic integrators, which are for ODEs *Stretched grid method — for problems whose solution can be related to the behaviour of an elastic grid
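As a small illustration of the spectral approach listed above, the sketch below differentiates a smooth periodic function with the FFT; the test function exp(sin x) and the grid size are illustrative. For analytic periodic data the error sits near machine precision, far beyond what a fixed-order finite difference achieves.

```python
import numpy as np

n = 64
x = 2 * np.pi * np.arange(n) / n
u = np.exp(np.sin(x))                 # smooth and 2*pi-periodic

k = np.fft.fftfreq(n, d=1.0 / n)      # integer wavenumbers 0..n/2-1, -n/2..-1
du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))

print("max error:", np.abs(du - np.cos(x) * u).max())
```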


Techniques for improving these methods

*Multigrid method — uses a hierarchy of nested meshes to speed up the methods *Domain decomposition methods — divides the domain into a few subdomains and solves the PDE on these subdomains **Additive Schwarz method **Abstract additive Schwarz method — abstract version of additive Schwarz without reference to geometric information **Balancing domain decomposition method (BDD) — preconditioner for symmetric positive definite matrices **Balancing domain decomposition by constraints (BDDC) — further development of BDD **Finite element tearing and interconnect (FETI) **FETI-DP — further development of FETI **Fictitious domain method — preconditioner constructed with a structured mesh on a fictitious domain of simple shape **Mortar methods — meshes on neighbouring subdomains need not match **Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on other subdomain **Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains **Poincaré–Steklov operator — maps tangential electric field onto the equivalent electric current **Schur complement method — early and basic method on subdomains that do not overlap **Schwarz alternating method — early and basic method on subdomains that overlap (sketched after this list) *Coarse space — variant of the problem which uses a discretization with fewer degrees of freedom *Adaptive mesh refinement — uses the computed solution to refine the mesh only where necessary *Fast multipole method — hierarchical method for evaluating particle-particle interactions *Perfectly matched layer — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions
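The sketch below runs the Schwarz alternating method on −u″ = 1, u(0) = u(1) = 0, with two overlapping subdomains; the grid size, overlap and iteration count are illustrative, and the subdomain solves use dense linear algebra purely for brevity. Each sweep solves one subdomain with boundary data taken from the current iterate on the other.

```python
import numpy as np

def dirichlet_solve(f, left, right, h):
    """Direct finite difference solve of -u'' = f with Dirichlet data."""
    m = len(f)                                    # interior points
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = f.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

n = 100
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.ones(n + 1)
u = np.zeros(n + 1)
i1, i2 = 40, 60                                   # overlap: indices 40..60

for _ in range(30):
    # subdomain 1 (indices 1..i2-1), right boundary value from u
    u[1:i2] = dirichlet_solve(f[1:i2], 0.0, u[i2], h)
    # subdomain 2 (indices i1+1..n-1), left boundary value from u
    u[i1 + 1:n] = dirichlet_solve(f[i1 + 1:n], u[i1], 0.0, h)

print("max error:", np.abs(u - x * (1.0 - x) / 2.0).max())
```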


Grids and meshes

*Grid classification / Types of mesh: **Polygon mesh — consists of polygons in 2D or 3D **Triangle mesh — consists of triangles in 2D or 3D ***Triangulation (geometry) — subdivision of a given region into triangles, or higher-dimensional analogue ***Nonobtuse mesh — mesh in which all angles are less than or equal to 90° ***Point-set triangulation — triangle mesh such that every point of a given set is a vertex of a triangle ***Polygon triangulation — triangle mesh inside a polygon ***Delaunay triangulation — triangulation such that no vertex is inside the circumcircle of any triangle (computed with SciPy in the sketch after this list) ***Constrained Delaunay triangulation — generalization of the Delaunay triangulation that forces certain required segments into the triangulation ***Pitteway triangulation — for any point, triangle containing it has nearest neighbour of the point as a vertex ***Minimum-weight triangulation — triangulation of minimum total edge length ***Kinetic triangulation — a triangulation that moves over time ***Triangulated irregular network ***Quasi-triangulation — subdivision into simplices, where vertices are not points but arbitrary sloped line segments **Volume mesh — consists of three-dimensional shapes **Regular grid — consists of congruent parallelograms, or higher-dimensional analogue **Unstructured grid **Geodesic grid — isotropic grid on a sphere *Mesh generation **Image-based meshing — automatic procedure of generating meshes from 3D image data **Marching cubes — extracts a polygon mesh from a scalar field **Parallel mesh generation **Ruppert's algorithm — creates quality Delaunay triangulations from piecewise linear data *Subdivisions: **Apollonian network — undirected graph formed by recursively subdividing a triangle **Barycentric subdivision — standard way of dividing arbitrary convex polygons into triangles, or the higher-dimensional analogue *Improving an existing mesh: **Chew's second algorithm — improves Delaunay triangulations by refining poor-quality triangles **Laplacian smoothing — improves polygon meshes by moving the vertices *Jump-and-Walk algorithm — for finding the triangle in a mesh containing a given point *Spatial twist continuum — dual representation of a mesh consisting of hexahedra *Pseudotriangle — simply connected region between any three mutually tangent convex sets *Simplicial complex — all vertices, line segments, triangles, tetrahedra, ..., making up a mesh
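Delaunay triangulations such as those listed above can be computed directly with SciPy (assuming it is installed); the random point set below is illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.random((20, 2))          # 20 random points in the unit square
tri = Delaunay(points)

# No input point lies strictly inside the circumcircle of any triangle.
print("triangles:", len(tri.simplices))
print("first triangle (vertex indices):", tri.simplices[0])
```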


Analysis

*Lax equivalence theorem — a consistent method is convergent if and only if it is stable *Courant–Friedrichs–Lewy condition — stability condition for hyperbolic PDEs *Von Neumann stability analysis — all Fourier components of the error should be stable (a worked check follows this list) *Numerical diffusion — diffusion introduced by the numerical method, over and above that which is naturally present **False diffusion *Numerical dispersion *Numerical resistivity — the same, with resistivity instead of diffusion *Weak formulation — a functional-analytic reformulation of the PDE necessary for some methods *Total variation diminishing — property of schemes that do not introduce spurious oscillations *Godunov's theorem — linear monotone schemes can only be of first order *Motz's problem — benchmark problem for singularity problems
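A worked Von Neumann check for the FTCS heat-equation scheme: substituting the Fourier mode u_j^n = g^n e^(i j θ) gives the amplification factor g(θ) = 1 − 4r sin²(θ/2) with r = Δt/Δx², so the scheme is stable iff max|g| ≤ 1, i.e. iff r ≤ 1/2. The three r values below are illustrative.

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 1000)
for r in (0.25, 0.5, 0.75):
    g = 1.0 - 4.0 * r * np.sin(theta / 2) ** 2   # amplification factor
    worst = np.abs(g).max()
    print(f"r = {r}: max |g| = {worst:.3f}",
          "(stable)" if worst <= 1.0 else "(unstable)")
```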


Monte Carlo method

*Variants of the Monte Carlo method: **Direct simulation Monte Carlo **Quasi-Monte Carlo method **Markov chain Monte Carlo ***Metropolis–Hastings algorithm ****Multiple-try Metropolis — modification which allows larger step sizes ****Wang and Landau algorithm — extension of Metropolis Monte Carlo ****Equation of State Calculations by Fast Computing Machines — 1953 article proposing the Metropolis Monte Carlo algorithm ****Multicanonical ensemble — sampling technique that uses Metropolis–Hastings to compute integrals ***Gibbs sampling ***Coupling from the past ***Reversible-jump Markov chain Monte Carlo **Dynamic Monte Carlo method ***Kinetic Monte Carlo ***Gillespie algorithm **Particle filter ***Auxiliary particle filter **Reverse Monte Carlo **Demon algorithm *Pseudo-random number sampling **Inverse transform sampling — general and straightforward method but computationally expensive **Rejection sampling — sample from a simpler distribution but reject some of the samples ***Ziggurat algorithm — uses a pre-computed table covering the probability distribution with rectangular segments **For sampling from a normal distribution: ***Box–Muller transform ***Marsaglia polar method **Convolution random number generator — generates a random variable as a sum of other random variables **Indexed search *Variance reduction techniques: **Antithetic variates (sketched after this list) **Control variates **Importance sampling **Stratified sampling **VEGAS algorithm *Low-discrepancy sequence **Constructions of low-discrepancy sequences *Event generator *Parallel tempering *Umbrella sampling — improves sampling in physical systems with significant energy barriers *Hybrid Monte Carlo *Ensemble Kalman filter — recursive filter suitable for problems with a large number of variables *Transition path sampling *Walk-on-spheres method — to generate exit-points of Brownian motion from bounded domains *Applications: **Ensemble forecasting — produce multiple numerical predictions from slightly different initial conditions or parameters **Bond fluctuation model — for simulating the conformation and dynamics of polymer systems **Iterated filtering **Metropolis light transport **Monte Carlo localization — estimates the position and orientation of a robot **Monte Carlo methods for electron transport **Monte Carlo method for photon transport **Monte Carlo methods in finance ***Monte Carlo methods for option pricing ***Quasi-Monte Carlo methods in finance **Monte Carlo molecular modeling ***Path integral molecular dynamics — incorporates Feynman path integrals **Quantum Monte Carlo ***Diffusion Monte Carlo — uses a Green function to solve the Schrödinger equation ***Gaussian quantum Monte Carlo ***Path integral Monte Carlo ***Reptation Monte Carlo ***Variational Monte Carlo **Methods for simulating the Ising model: ***Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters ***Wolff algorithm — improvement of the Swendsen–Wang algorithm ***Metropolis–Hastings algorithm **Auxiliary field Monte Carlo — computes averages of operators in many-body quantum mechanical problems **Cross-entropy method — for multi-extremal optimization and importance sampling *Also see the list of statistics topics
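As a small illustration of variance reduction, the sketch below estimates E[exp(U)] for U uniform on (0, 1), whose exact value is e − 1, with plain Monte Carlo and with antithetic variates; the integrand and sample size are illustrative. Pairing each sample u with 1 − u induces negative correlation within the pair and shrinks the estimator's variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

u = rng.random(n)
plain = np.exp(u)                              # ordinary Monte Carlo
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))     # antithetic pairs

print("plain:      mean", plain.mean(), " variance", plain.var())
print("antithetic: mean", anti.mean(), " variance", anti.var())
print("exact:     ", np.e - 1.0)
```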


Applications

*Computational physics **Computational electromagnetics **Computational fluid dynamics (CFD) ***Numerical methods in fluid mechanics ***Large eddy simulation ***Smoothed-particle hydrodynamics ***Aeroacoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple emitter types ***Stochastic Eulerian Lagrangian method — uses Eulerian description for fluids and Lagrangian for structures ***Explicit algebraic stress model **Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids **Climate model **Numerical weather prediction ***Geodesic grid **Celestial mechanics ***Numerical model of the Solar System **Quantum jump method — used for simulating open quantum systems, operates on wave function **Dynamic design analysis method (DDAM) — for evaluating effect of underwater explosions on equipment *Computational chemistry **Cell lists **Coupled cluster **Density functional theory **DIIS — direct inversion in (or of) the iterative subspace *Computational sociology *Computational statistics


Software

For a large list of software, see the list of numerical-analysis software.


Journals

*Acta Numerica *Mathematics of Computation (published by the American Mathematical Society) *Journal of Computational and Applied Mathematics *BIT Numerical Mathematics *Numerische Mathematik *Journals from the Society for Industrial and Applied Mathematics **SIAM Journal on Numerical Analysis **SIAM Journal on Scientific Computing


Researchers

*Cleve Moler *Gene H. Golub *James H. Wilkinson *Margaret H. Wright *Nicholas J. Higham *
Nick Trefethen Lloyd Nicholas Trefethen (born 30 August 1955) is an American mathematician, professor of numerical analysis and head of the Numerical Analysis Group at the Mathematical Institute, University of Oxford. Education Trefethen was born 30 August 19 ...
*Peter Lax *Richard S. Varga *Ulrich W. Kulisch *Vladik Kreinovich