This is a list of numerical analysis topics. Numerical analysis is the study of algorithms that use numerical approximation (as opposed to symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics).
General
*Validated numerics
*Iterative method
*Rate of convergence — the speed at which a convergent sequence approaches its limit
**Order of accuracy — rate at which numerical solution of differential equation converges to exact solution
*Series acceleration — methods to accelerate the speed of convergence of a series
**Aitken's delta-squared process — most useful for linearly converging sequences (see the sketch at the end of this section)
**Minimum polynomial extrapolation — for vector sequences
**Richardson extrapolation
**Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
**Van Wijngaarden transformation — for accelerating the convergence of an alternating series
*Abramowitz and Stegun — book containing formulas and tables of many special functions
**Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
*Curse of dimensionality
*Local convergence and global convergence — whether you need a good initial guess to get convergence
*Superconvergence
*Discretization
*Difference quotient
*Complexity:
**Computational complexity of mathematical operations
**Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
*Symbolic-numeric computation — combination of symbolic and numeric methods
*Cultural and historical aspects:
**History of numerical solution of differential equations using computers
**Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002
**Timeline of numerical analysis after 1945
*General classes of methods:
**Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
**Level-set method
***Level set (data structures) — data structures for representing level sets
**Sinc numerical methods — methods based on the sinc function, sinc(''x'') = sin(''x'') / ''x''
**ABS methods
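To make the series-acceleration entries above concrete, here is a minimal Python sketch of Aitken's delta-squared process applied to the partial sums of the Leibniz series for π/4. The helper name `aitken` and the test data are ours; the sketch assumes a real-valued, roughly linearly converging sequence and is illustrative rather than production code.

```python
def aitken(seq):
    """Aitken's delta-squared process: a_n = x_n - (dx_n)**2 / (d2x_n)."""
    out = []
    for x0, x1, x2 in zip(seq, seq[1:], seq[2:]):
        denom = x2 - 2.0 * x1 + x0
        # If the second difference vanishes, the sequence has (numerically)
        # converged already; pass the newest term through unchanged.
        out.append(x2 if denom == 0.0 else x0 - (x1 - x0) ** 2 / denom)
    return out

# Partial sums of the slowly converging Leibniz series pi/4 = 1 - 1/3 + 1/5 - ...
partial, s = [], 0.0
for k in range(12):
    s += (-1) ** k / (2 * k + 1)
    partial.append(s)

print(4 * partial[-1])          # ~3.0584: twelve raw terms are still far off
print(4 * aitken(partial)[-1])  # ~3.1414: one acceleration pass, same data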
Error
Error analysis (mathematics) — the study of the kind and quantity of error, or uncertainty, that may be present in the solution to a problem
*Approximation
*Approximation error
*Catastrophic cancellation — subtracting good approximations to two nearby numbers may yield a very bad approximation to their difference (see the demonstration at the end of this section)
*Condition number
*Discretization error
*Floating point number
**Guard digit — extra precision introduced during a computation to reduce round-off error
**Truncation — rounding a floating-point number by discarding all digits after a certain digit
**Round-off error
***Numeric precision in Microsoft Excel
**Arbitrary-precision arithmetic
*Interval arithmetic — represent every number by two floating-point numbers guaranteed to have the unknown number between them
**Interval contractor — maps interval to subinterval which still contains the unknown exact answer
**Interval propagation — contracting interval domains without removing any value consistent with the constraints
***See also: Interval boundary element method, Interval finite element
*Loss of significance
*Numerical error
*Numerical stability
*Error propagation:
**Propagation of uncertainty
**Residual (numerical analysis)
*Relative change and difference — the relative difference between ''x'' and ''y'' is |''x'' − ''y''| / max(|''x''|, |''y''|)
*Significant figures
**Artificial precision — when a numerical value or semantic is expressed with more precision than was initially provided from measurement or user input
**False precision — giving more significant figures than appropriate
*Sterbenz lemma
*Truncation error — error committed by doing only a finite number of steps
*Well-posed problem
*Affine arithmetic
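As a small demonstration of two entries above, the Python sketch below exhibits catastrophic cancellation in the expression (1 + ε) − 1 and measures the damage with the relative difference |''x'' − ''y''| / max(|''x''|, |''y''|). The helper name `relative_difference` is ours, and the sketch assumes IEEE double precision.

```python
def relative_difference(x, y):
    """Relative difference |x - y| / max(|x|, |y|); assumes x, y not both zero."""
    return abs(x - y) / max(abs(x), abs(y))

eps = 1e-12
computed = (1.0 + eps) - 1.0   # mathematically this equals eps exactly
print(computed)                # e.g. 1.000088900582341e-12 in IEEE doubles
# Both operands carried ~16 significant digits, but the leading digits cancel
# and only rounding noise is left: the result has only ~4 correct digits.
print(relative_difference(computed, eps))  # ~1e-4, enormous for doubles
```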
Elementary and special functions
*Unrestricted algorithm
*Summation:
**Kahan summation algorithm (see the first sketch at the end of this section)
**Pairwise summation — slightly worse than Kahan summation but cheaper
**Binary splitting
**2Sum — computes the exact round-off error of a floating-point addition
*Multiplication:
**Multiplication algorithm — general discussion, simple methods
**Karatsuba algorithm — the first algorithm which is faster than straightforward multiplication
**Toom–Cook multiplication — generalization of Karatsuba multiplication
**Schönhage–Strassen algorithm — based on Fourier transform, asymptotically very fast
**Fürer's algorithm — asymptotically slightly faster than Schönhage–Strassen
*Division algorithm — for computing quotient and/or remainder of two numbers
**Long division
**Restoring division
**Non-restoring division
**SRT division
**Newton–Raphson division: uses Newton's method to find the reciprocal of ''D'', and multiplies that reciprocal by ''N'' to find the final quotient ''Q'' (see the second sketch at the end of this section)
**Goldschmidt division
*Exponentiation:
**Exponentiation by squaring
**Addition-chain exponentiation
*Multiplicative inverse algorithms: for computing a number's multiplicative inverse (reciprocal)
**Newton's method
*Polynomials:
**Horner's method
**Estrin's scheme — modification of the Horner scheme with more possibilities for parallelization
**Clenshaw algorithm
**De Casteljau's algorithm
*Square roots and other roots:
**Integer square root
**Methods of computing square roots
**''n''th root algorithm
**hypot — the function √(''x''² + ''y''²)
**Alpha max plus beta min algorithm — approximates hypot(x,y)
**Fast inverse square root — calculates 1/√''x'' using details of the IEEE floating-point system
*Elementary functions (exponential, logarithm, trigonometric functions):
**Trigonometric tables — different methods for generating them
**CORDIC — shift-and-add algorithm using a table of arc tangents
**BKM algorithm — shift-and-add algorithm using a table of logarithms and complex numbers
*Gamma function:
**Lanczos approximation
**Spouge's approximation — modification of Stirling's approximation; easier to apply than Lanczos
*AGM method — computes arithmetic–geometric mean; related methods compute special functions
*FEE method (Fast E-function Evaluation) — fast summation of series like the power series for e^''x''
*Gal's accurate tables — table of function values with unequal spacing to reduce round-off error
*Spigot algorithm — algorithms that can compute individual digits of a real number
*Approximations of π:
**Liu Hui's π algorithm — first algorithm that can compute π to arbitrary precision
**Leibniz formula for π — alternating series with very slow convergence
**Wallis product — infinite product converging slowly to π/2
**Viète's formula — more complicated infinite product which converges faster
**Gauss–Legendre algorithm — iteration which converges quadratically to π, based on arithmetic–geometric mean
**Borwein's algorithm — iteration which converges quartically to 1/π, and other algorithms
**Chudnovsky algorithm — fast algorithm that calculates a hypergeometric series
**Bailey–Borwein–Plouffe formula — can be used to compute individual hexadecimal digits of π
**Bellard's formula — faster version of Bailey–Borwein–Plouffe formula
**List of formulae involving π
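Two sketches follow for entries in this section. First, a minimal Python version of the Kahan (compensated) summation algorithm, which carries a running correction for the low-order bits that plain left-to-right summation discards. The function name and the test data are ours.

```python
def kahan_sum(values):
    """Compensated summation of an iterable of floats."""
    total = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for v in values:
        y = v - c            # fold the previous compensation into the next term
        t = total + y        # low-order digits of y may be lost here ...
        c = (t - total) - y  # ... but this recovers (the negative of) the loss
        total = t
    return total

data = [1.0, 1e-16, 1e-16, 1e-16, 1e-16] * 100000
print(sum(data))        # 100000.0: the tiny terms vanish in naive summation
print(kahan_sum(data))  # ~100000.00000000004: their 4e-11 total is retained
```

Pairwise summation, listed above, gives much of the same benefit at lower cost by summing in a balanced tree rather than compensating every addition.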
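Second, a sketch of Newton–Raphson division: the reciprocal of the divisor is found as the root of f(x) = 1/x − ''D'' via the iteration x ← x(2 − ''D''x), and the quotient is ''N'' times that reciprocal. Scaling the divisor into [0.5, 1) with `math.frexp` so that a fixed linear initial guess works is an assumption of this sketch; hardware implementations typically seed the iteration from a lookup table instead.

```python
import math

def newton_divide(N, D, iterations=6):
    """Approximate N / D (assumes D > 0) via Newton's iteration for 1/D."""
    m, e = math.frexp(D)                 # D = m * 2**e with m in [0.5, 1)
    x = 48.0 / 17.0 - (32.0 / 17.0) * m  # classic linear first guess for 1/m
    for _ in range(iterations):
        x = x * (2.0 - m * x)            # quadratic convergence: error squares
    return N * math.ldexp(x, -e)         # undo the scaling: 1/D = (1/m) * 2**-e

print(newton_divide(355.0, 113.0))  # ~3.1415929..., i.e. 355/113
```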
Numerical linear algebra
Numerical linear algebra — study of numerical algorithms for linear algebra problems
Basic concepts
*Types of matrices appearing in numerical analysis:
**Sparse matrix
***Band matrix
***Bidiagonal matrix
***Tridiagonal matrix
***Pentadiagonal matrix
***Skyline matrix
**Circulant matrix
**Triangular matrix
**Diagonally dominant matrix
**Block matrix — matrix composed of smaller matrices
**Stieltjes matrix — symmetric positive definite with non-positive off-diagonal entries
**Hilbert matrix — example of a matrix which is extremely ill-conditioned (and thus difficult to handle)
**Wilkinson matrix — example of a symmetric tridiagonal matrix with pairs of nearly, but not exactly, equal eigenvalues
**Convergent matrix — square matrix whose successive powers approach the zero matrix
*Algorithms for matrix multiplication:
**Strassen algorithm
**Coppersmith–Winograd algorithm
**Cannon's algorithm — a distributed algorithm, especially suitable for processors laid out in a 2d grid
**Freivalds' algorithm — a randomized algorithm for checking the result of a multiplication
*Matrix decompositions (see the sketch at the end of this subsection):
**LU decomposition — lower triangular times upper triangular
**QR decomposition — orthogonal matrix times triangular matrix
***RRQR factorization — rank-revealing QR factorization, can be used to compute rank of a matrix
**Polar decomposition — unitary matrix times positive-semidefinite Hermitian matrix
**Decompositions by similarity:
***Eigendecomposition — decomposition in terms of eigenvectors and eigenvalues
***Jordan normal form — bidiagonal matrix of a certain form; generalizes the eigendecomposition
****Weyr canonical form — permutation of Jordan normal form
***Jordan–Chevalley decomposition — sum of commuting nilpotent matrix and diagonalizable matrix
***Schur decomposition — similarity transform bringing the matrix to a triangular matrix
**Singular value decomposition — unitary matrix times diagonal matrix times unitary matrix
*Matrix splitting — expressing a given matrix as a sum or difference of matrices
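As an illustration of the decomposition entries above, here is a minimal Doolittle-style LU factorization in Python. It omits pivoting, so it assumes all leading principal minors are nonsingular (practical codes use partial pivoting); the function name is ours.

```python
def lu_decompose(A):
    """Factor a square matrix (list of lists) as A = L @ U, L unit lower triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0
        for j in range(i, n):      # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):  # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

L, U = lu_decompose([[4.0, 3.0], [6.0, 3.0]])
print(L)  # [[1.0, 0.0], [1.5, 1.0]]
print(U)  # [[4.0, 3.0], [0.0, -1.5]]
```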
Solving systems of linear equations
*Gaussian elimination
**Row echelon form — matrix in which all entries below a nonzero entry are zero
**Bareiss algorithm — variant which ensures that all entries remain integers if the initial matrix has integer entries
**Tridiagonal matrix algorithm — simplified form of Gaussian elimination for tridiagonal matrices
*LU decomposition — write a matrix as a product of an upper- and a lower-triangular matrix
**Crout matrix decomposition
**LU reduction — a special parallelized version of a LU decomposition algorithm
*Block LU decomposition
*Cholesky decomposition — for solving a system with a positive definite matrix
**Minimum degree algorithm
**Symbolic Cholesky decomposition
*Iterative refinement — procedure to turn an inaccurate solution into a more accurate one
*Direct methods for sparse matrices:
**Frontal solver — used in finite element methods
**Nested dissection — for symmetric matrices, based on graph partitioning
*Levinson recursion — for Toeplitz matrices
*SPIKE algorithm — hybrid parallel solver for narrow-banded matrices
*Cyclic reduction — eliminate even or odd rows or columns, repeat
*Iterative methods:
**Jacobi method
**Gauss–Seidel method (see the sketch at the end of this section)
***Successive over-relaxation (SOR) — a technique to accelerate the Gauss–Seidel method
****Symmetric successive over-relaxation (SSOR) — variant of SOR for symmetric matrices
***Backfitting algorithm — iterative procedure used to fit a generalized additive model, often equivalent to Gauss–Seidel
**Modified Richardson iteration
**Conjugate gradient method (CG) — assumes that the matrix is positive definite
***Derivation of the conjugate gradient method
***Nonlinear conjugate gradient method — generalization for nonlinear optimization problems
**Biconjugate gradient method (BiCG)
***Biconjugate gradient stabilized method (BiCGSTAB) — variant of BiCG with better convergence
**Conjugate residual method — similar to CG but only assumes that the matrix is symmetric
**Generalized minimal residual method (GMRES) — based on the Arnoldi iteration
**Chebyshev iteration — avoids inner products but needs bounds on the spectrum
**Stone's method (SIP — Strongly Implicit Procedure) — uses an incomplete LU decomposition
**Kaczmarz method
**Preconditioner
***Incomplete Cholesky factorization — sparse approximation to the Cholesky factorization
***Incomplete LU factorization — sparse approximation to the LU factorization
**Uzawa iteration — for saddle node problems
*Underdetermined and overdetermined systems (systems that have no or more than one solution):
**Numerical computation of null space — find all solutions of an underdetermined system
**Moore–Penrose pseudoinverse — for finding solution with smallest 2-norm (for underdetermined systems) or smallest residual
**Sparse approximation — for finding the sparsest solution (i.e., the solution with as many zeros as possible)
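A minimal Python sketch of the Gauss–Seidel iteration listed above: it sweeps through the unknowns and reuses freshly updated values within the same sweep. Convergence is guaranteed here because the example matrix is strictly diagonally dominant; the function name and data are ours.

```python
def gauss_seidel(A, b, iterations=25):
    """Iteratively solve A x = b; assumes convergence (e.g. diagonal dominance)."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # uses already-updated entries of x
    return x

A = [[4.0, 1.0], [2.0, 5.0]]   # strictly diagonally dominant
b = [9.0, 9.0]                 # exact solution is x = [2, 1]
print(gauss_seidel(A, b))      # ~[2.0, 1.0]
```

Successive over-relaxation (SOR), listed above, accelerates exactly this sweep by overshooting each update with a relaxation factor.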
Eigenvalue algorithms
Eigenvalue algorithm — a numerical algorithm for locating the eigenvalues of a matrix
*Power iteration (see the sketch at the end of this section)
*Inverse iteration
*Rayleigh quotient iteration
*Arnoldi iteration — based on Krylov subspaces
*Lanczos algorithm — Arnoldi, specialized for positive-definite matrices
**Block Lanczos algorithm — for when matrix is over a finite field
*QR algorithm
*Jacobi eigenvalue algorithm — select a small submatrix which can be diagonalized exactly, and repeat
**Jacobi rotation — the building block, almost a Givens rotation
**Jacobi method for complex Hermitian matrices
*Divide-and-conquer eigenvalue algorithm
*Folded spectrum method
*LOBPCG — Locally Optimal Block Preconditioned Conjugate Gradient Method
*Eigenvalue perturbation — stability of eigenvalues under perturbations of the matrix
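A minimal sketch of power iteration, the simplest algorithm in the list above: repeatedly apply the matrix and rescale, so the iterate aligns with the dominant eigenvector. It assumes a single eigenvalue of largest magnitude and a starting vector not orthogonal to its eigenvector; names and data are ours.

```python
def power_iteration(A, x, steps=50):
    """Estimate the dominant eigenpair of A from starting vector x."""
    n = len(A)
    lam = 0.0
    for _ in range(steps):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y, key=abs)     # eigenvalue estimate from the scaling factor
        x = [v / lam for v in y]  # rescale to avoid overflow/underflow
    return lam, x

A = [[2.0, 1.0], [1.0, 2.0]]      # eigenvalues 3 and 1
lam, v = power_iteration(A, [1.0, 0.0])
print(lam, v)                     # ~3.0 and direction ~[1.0, 1.0]
```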
Other concepts and algorithms
*Orthogonalization algorithms (see the sketch at the end of this section):
**Gram–Schmidt process
**Householder transformation
***Householder operator — analogue of Householder transformation for general inner product spaces
**Givens rotation
*Krylov subspace
*Block matrix pseudoinverse
*Bidiagonalization
*Cuthill–McKee algorithm — permutes rows/columns in sparse matrix to yield a narrow band matrix
*In-place matrix transposition — computing the transpose of a matrix without using much additional storage
*Pivot element — entry in a matrix on which the algorithm concentrates
*Matrix-free methods — methods that only access the matrix by evaluating matrix-vector products
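To illustrate the orthogonalization entries above, here is a sketch of the modified Gram–Schmidt process, which orthogonalizes each vector against the already-finished basis vectors one projection at a time (numerically preferable to the classical ordering). It assumes the inputs are linearly independent; names are ours.

```python
def modified_gram_schmidt(vectors):
    """Return an orthonormal basis spanning the given (independent) vectors."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:               # subtract projections one at a time
            dot = sum(wi * qi for wi, qi in zip(w, q))
            w = [wi - dot * qi for wi, qi in zip(w, q)]
        norm = sum(wi * wi for wi in w) ** 0.5
        basis.append([wi / norm for wi in w])   # assumes norm > 0
    return basis

Q = modified_gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
print(Q)
print(sum(a * b for a, b in zip(Q[0], Q[1])))   # ~0: the basis is orthogonal
```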
Interpolation and approximation
Interpolation — construct a function going through some given data points (a minimal piecewise-linear sketch follows the list below)
*Nearest-neighbor interpolation — takes the value of the nearest neighbor
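As a first concrete interpolant (linear interpolation itself appears in the next section), here is a minimal piecewise-linear table lookup in Python. It assumes the abscissae are sorted and the query point lies within their range; the function name is ours.

```python
import bisect

def lerp_table(xs, ys, t):
    """Evaluate the piecewise-linear interpolant of (xs, ys) at t."""
    i = bisect.bisect_right(xs, t) - 1      # index of t's left neighbour
    i = max(0, min(i, len(xs) - 2))         # clamp to a valid interval
    w = (t - xs[i]) / (xs[i + 1] - xs[i])   # position within the interval
    return (1 - w) * ys[i] + w * ys[i + 1]

print(lerp_table([0.0, 1.0, 2.0], [0.0, 10.0, 0.0], 1.25))  # 7.5
```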
Polynomial interpolation
Polynomial interpolation — interpolation by polynomials
*Linear interpolation
*Runge's phenomenon
*Vandermonde matrix
*Chebyshev polynomials
*Chebyshev nodes
*Lebesgue constants
*Different forms for the interpolant:
**Newton polynomial
***Divided differences
***Neville's algorithm — for evaluating the interpolant; based on the Newton form (see the sketch at the end of this section)
**Lagrange polynomial
**Bernstein polynomial — especially useful for approximation
**Brahmagupta's interpolation formula — seventh-century formula for quadratic interpolation
*Extensions to multiple dimensions:
**Bilinear interpolation
**Trilinear interpolation
**Bicubic interpolation
**Tricubic interpolation
**Padua points — set of points in R² with unique polynomial interpolant and minimal growth of Lebesgue constant
*Hermite interpolation
*Birkhoff interpolation
*Abel–Goncharov interpolation
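A sketch of Neville's algorithm from the list above: it evaluates the interpolating polynomial at a point ''t'' by repeatedly combining interpolants of increasing degree, without ever forming polynomial coefficients. The nodes are assumed distinct; names are ours.

```python
def neville(xs, ys, t):
    """Value at t of the unique polynomial through the points (xs[i], ys[i])."""
    p = list(ys)                  # p[i] starts as the degree-0 interpolant y_i
    n = len(xs)
    for level in range(1, n):     # combine interpolants of increasing degree
        for i in range(n - level):
            p[i] = ((t - xs[i + level]) * p[i] + (xs[i] - t) * p[i + 1]) \
                   / (xs[i] - xs[i + level])
    return p[0]

# Interpolating x**2 through three points recovers the quadratic exactly.
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```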
Spline interpolation
Spline interpolation — interpolation by piecewise polynomials
*Spline (mathematics) — the piecewise polynomials used as interpolants
*Perfect spline — polynomial spline of degree ''m'' whose ''m''th derivative is ±1
*Cubic Hermite spline
**Centripetal Catmull–Rom spline — special case of cubic Hermite splines without self-intersections or cusps
*Monotone cubic interpolation
*Hermite spline
*Bézier curve
**De Casteljau's algorithm (see the sketch at the end of this section)
**Composite Bézier curve
**Generalizations to more dimensions:
***Bézier triangle — maps a triangle to R³
***Bézier surface — maps a square to R³
*B-spline
**Box spline — multivariate generalization of B-splines
**Truncated power function
**De Boor's algorithm — generalizes De Casteljau's algorithm
*Non-uniform rational B-spline (NURBS)
**T-spline — can be thought of as a NURBS surface for which a row of control points is allowed to terminate
*Kochanek–Bartels spline
*Coons patch — type of manifold parametrization used to smoothly join other surfaces together
*M-spline — a non-negative spline
*I-spline — a monotone spline, defined in terms of M-splines
*Smoothing spline — a spline fitted smoothly to noisy data
*Blossom (functional) — a unique, affine, symmetric map associated to a polynomial or spline
*See also: List of numerical computational geometry topics
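A sketch of De Casteljau's algorithm for the Bézier entries above: the curve point at parameter ''t'' is obtained by repeated linear interpolation between consecutive control points, which is numerically stable. The function name and the sample control polygon are ours.

```python
def de_casteljau(points, t):
    """Point at parameter t on the Bezier curve with the given control points."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        # Replace each consecutive pair by its interpolant at t.
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier curve with control points (0,0), (1,2), (2,0).
print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))  # (1.0, 1.0)
```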
Trigonometric interpolation
Trigonometric interpolation
In mathematics, trigonometric interpolation is interpolation with trigonometric polynomials. Interpolation is the process of finding a function which goes through some given data points. For trigonometric interpolation, this function has to be a ...
— interpolation by trigonometric polynomials
*
Discrete Fourier transform
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced Sampling (signal processing), samples of a function (mathematics), function into a same-length sequence of equally-spaced samples of the discre ...
— can be viewed as trigonometric interpolation at equidistant points
**
Relations between Fourier transforms and Fourier series
*
Fast Fourier transform
A fast Fourier transform (FFT) is an algorithm that computes the discrete Fourier transform (DFT) of a sequence, or its inverse (IDFT). A Fourier transform converts a signal from its original domain (often time or space) to a representation in ...
(FFT) — a fast method for computing the discrete Fourier transform
**
Bluestein's FFT algorithm
**
Bruun's FFT algorithm
**
Cooley–Tukey FFT algorithm
The Cooley–Tukey algorithm, named after James Cooley, J. W. Cooley and John Tukey, is the most common fast Fourier transform (FFT) algorithm. It re-expresses the discrete Fourier transform (DFT) of an arbitrary composite number, composite size ...
**
Split-radix FFT algorithm
The split-radix FFT is a fast Fourier transform (FFT) algorithm for computing the discrete Fourier transform (DFT), and was first described in an initially little-appreciated paper by R. Yavne (196and subsequently rediscovered simultaneously by va ...
— variant of Cooley–Tukey that uses a blend of radices 2 and 4
**
Goertzel algorithm
The Goertzel algorithm is a technique in digital signal processing (DSP) for efficient evaluation of the individual terms of the discrete Fourier transform (DFT). It is useful in certain practical applications, such as recognition of dual-tone mult ...
**
Prime-factor FFT algorithm
The prime-factor algorithm (PFA), also called the Good–Thomas algorithm (1958/1963), is a fast Fourier transform (FFT) algorithm that re-expresses the discrete Fourier transform (DFT) of a size as a two-dimensional DFT, but ''only'' for the ca ...
**
Rader's FFT algorithm
Rader's algorithm (1968), named for Charles M. Rader of MIT Lincoln Laboratory, is a fast Fourier transform (FFT) algorithm that computes the discrete Fourier transform (DFT) of prime number, prime sizes by re-expressing the DFT as a cyclic convolu ...
**
Bit-reversal permutation — particular permutation of vectors with 2
''m'' entries used in many FFTs.
**
Butterfly diagram
**
Twiddle factor
A twiddle factor, in fast Fourier transform (FFT) algorithms, is any of the trigonometric constant coefficients that are multiplied by the data in the course of the algorithm. This term was apparently coined by Gentleman & Sande in 1966, and has ...
— the trigonometric constant coefficients that are multiplied by the data
**
Cyclotomic fast Fourier transform The cyclotomic fast Fourier transform is a type of fast Fourier transform algorithm over finite fields.S.V. Fedorenko and P.V. Trifonov, This algorithm first decomposes a DFT into several circular convolutions, and then derives the DFT results from ...
— for FFT over finite fields
**Methods for computing discrete convolutions with finite impulse response filters using the FFT:
***
Overlap–add method
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal x[n] with a finite impulse response (FIR) filter h[n]:
: y[n] = x[n] * h[n] \equiv \sum_{m=1}^{M} h[m] \, x[n-m],
where h[m] = 0 for m outside the region [1, M]. This a ...
***
Overlap–save method
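A sketch of the overlap–add scheme above using NumPy's FFT (the block length and names are illustrative assumptions):

    import numpy as np

    def overlap_add(x, h, block=256):
        """Convolve a long signal x with a short FIR filter h, block by block."""
        M = len(h)
        Nfft = 1 << (block + M - 1).bit_length()   # FFT size >= block + M - 1
        H = np.fft.rfft(h, Nfft)                   # filter spectrum, computed once
        y = np.zeros(len(x) + M - 1)
        for start in range(0, len(x), block):
            seg = x[start:start + block]
            conv = np.fft.irfft(np.fft.rfft(seg, Nfft) * H, Nfft)
            y[start:start + len(seg) + M - 1] += conv[:len(seg) + M - 1]
        return y   # matches np.convolve(x, h) up to rounding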
*
Sigma approximation
*
Dirichlet kernel
In mathematical analysis, the Dirichlet kernel, named after the German mathematician Peter Gustav Lejeune Dirichlet, is the collection of periodic functions defined as
D_n(x) = \sum_{k=-n}^{n} e^{ikx} = 1 + 2\sum_{k=1}^{n} \cos(kx) = \frac{\sin\left(\left(n + \frac{1}{2}\right)x\right)}{\sin(x/2)},
where ''n'' is any nonnegative integer ...
— convolving any function with the Dirichlet kernel yields its trigonometric interpolant
*
Gibbs phenomenon
In mathematics, the Gibbs phenomenon is the oscillatory behavior of the Fourier series of a piecewise continuously differentiable periodic function around a jump discontinuity. The Nth partial Fourier series of the function (formed by summing ...
Other interpolants
*
Simple rational approximation
**
Polynomial and rational function modeling — comparison of polynomial and rational interpolation
*
Wavelet
A wavelet is a wave-like oscillation with an amplitude that begins at zero, increases or decreases, and then returns to zero one or more times. Wavelets are termed a "brief oscillation". A taxonomy of wavelets has been established, based on the n ...
**
Continuous wavelet
**
Transfer matrix
**See also: List of functional analysis topics, List of wavelet-related transforms
*
Inverse distance weighting
Inverse distance weighting (IDW) is a type of Deterministic algorithm, deterministic method for multivariate interpolation with a known homogeneously scattered set of points. The assigned values to unknown points are calculated with a Weighted m ...
*
Radial basis function
In mathematics a radial basis function (RBF) is a real-valued function \varphi whose value depends only on the distance between the input and some fixed point, either the origin, so that \varphi(\mathbf{x}) = \hat{\varphi}(\lVert \mathbf{x} \rVert), o ...
(RBF) — a function of the form ''f''(''x'') = ''φ''(‖''x'' − ''x''₀‖)
**
Polyharmonic spline
In applied mathematics, polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolating and fitting scattered data in many dimensions. Special cases include thin plate splines and natural c ...
— a commonly used radial basis function
**
Thin plate spline
Thin plate splines (TPS) are a spline-based technique for data interpolation and smoothing. They were introduced to geometric design by Duchon. They are an important special case of a polyharmonic spline. Robust Point Matching (RPM) is a common ...
— a specific polyharmonic spline: ''r''² log ''r''
**
Hierarchical RBF
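A minimal sketch of RBF interpolation in one dimension, using a Gaussian kernel rather than the polyharmonic ones above (real implementations add polynomial terms and regularization; the names and shape parameter eps are illustrative assumptions):

    import numpy as np

    def rbf_interpolate(centers, values, eps=3.0):
        """Return a 1-D RBF interpolant through (centers, values) using the
        Gaussian kernel phi(r) = exp(-(eps*r)^2)."""
        phi = lambda r: np.exp(-(eps * r) ** 2)
        A = phi(np.abs(centers[:, None] - centers[None, :]))
        w = np.linalg.solve(A, values)    # interpolation conditions A w = f
        return lambda x: phi(np.abs(np.asarray(x)[:, None] - centers[None, :])) @ w

    xs = np.linspace(0.0, 1.0, 9)
    f = rbf_interpolate(xs, np.sin(2 * np.pi * xs))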
*
Subdivision surface
In the field of 3D computer graphics, a subdivision surface (commonly shortened to SubD surface or Subsurf) is a curved Computer representation of surfaces, surface represented by the specification of a coarser polygon mesh and produced by a re ...
— constructed by recursively subdividing a piecewise linear interpolant
**
Catmull–Clark subdivision surface
**
Doo–Sabin subdivision surface
**
Loop subdivision surface
*
Slerp
In computer graphics, slerp is shorthand for spherical linear interpolation, introduced by Ken Shoemake in the context of quaternion interpolation for the purpose of animating 3D rotation. It refers to constant-speed motion along a unit-radius gr ...
(spherical linear interpolation) — interpolation between two points on a sphere
**Generalized quaternion interpolation — generalizes slerp for interpolation between more than two quaternions
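A sketch of slerp for unit quaternions stored as length-4 NumPy arrays, following the standard formula slerp(q0, q1, t) = [sin((1−t)θ)·q0 + sin(tθ)·q1] / sin θ:

    import numpy as np

    def slerp(q0, q1, t):
        """Spherical linear interpolation between unit quaternions q0 and q1."""
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        dot = np.clip(np.dot(q0, q1), -1.0, 1.0)
        if dot < 0.0:            # take the shorter arc (quaternion double cover)
            q1, dot = -q1, -dot
        theta = np.arccos(dot)   # angle between the two points on the sphere
        if theta < 1e-9:         # nearly parallel: fall back to linear interpolation
            return (1.0 - t) * q0 + t * q1
        return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)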
*
Irrational base discrete weighted transform
*
Nevanlinna–Pick interpolation — interpolation by analytic functions in the unit disc subject to a bound
**
Pick matrix — the Nevanlinna–Pick interpolation has a solution if this matrix is positive semi-definite
*
Multivariate interpolation
In numerical analysis, multivariate interpolation or multidimensional interpolation is interpolation on ''multivariate functions'', having more than one variable or defined over a multi-dimensional domain. A common special case is bivariate inter ...
— the function being interpolated depends on more than one variable
**
Barnes interpolation
Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unevenly spread data points from a set of measurements of an unknown function in two dimensions into an analytic function of two variables. An example of a situation where ...
— method for two-dimensional functions using Gaussians common in meteorology
**
Coons surface — combination of linear interpolation and bilinear interpolation
**
Lanczos resampling
Lanczos filtering and Lanczos resampling are two applications of a certain mathematical formula. It can be used as a low-pass filter or used to smoothly interpolate the value of a digital signal between its samples. In the latter case, it maps ...
— based on convolution with a sinc function
**
Natural neighbor interpolation
**
PDE surface
**
Transfinite interpolation — constructs function on planar domain given its values on the boundary
**
Trend surface analysis — based on low-order polynomials of spatial coordinates; uses scattered observations
**Methods based on polynomials are listed under ''Polynomial interpolation''
Approximation theory
Approximation theory
In mathematics, approximation theory is concerned with how function (mathematics), functions can best be approximation, approximated with simpler functions, and with quantitative property, quantitatively characterization (mathematics), characteri ...
*
Orders of approximation
In science, engineering, and other quantitative disciplines, order of approximation refers to formal or informal expressions for how accurate an approximation is.
In formal expressions, the ordinal number used ...
*
Lebesgue's lemma
In mathematics, Lebesgue's lemma is an important statement in approximation theory. It provides a bound for the projection error, controlling the error of approximation by a linear subspace based on a linear projection relative to the optimal err ...
*
Curve fitting
Curve fitting is the process of constructing a curve, or mathematical function, that has the best fit to a series of data points, possibly subject to constraints. Curve fitting can involve either interpolation, where an exact fit to the data is ...
**
Vector field reconstruction
*
Modulus of continuity
In mathematical analysis, a modulus of continuity is a function ω : [0, ∞] → [0, ∞] used to measure quantitatively the uniform continuity of functions. So, a function ''f'' : ''I'' → R admits ω as a modulus of continuity if
:|''f''(''x'') − ''f''(''y'')| ≤ ω(|''x'' − ''y''|) ...
— measures smoothness of a function
*
Least squares (function approximation) — minimizes the error in the ''L''^2-norm
*
Minimax approximation algorithm A minimax approximation algorithm (or ''L''^∞ approximation or uniform approximation) is a method to find an approximation of a mathematical function ...
— minimizes the maximum error over an interval (the ''L''^∞-norm)
**
Equioscillation theorem — characterizes the best approximation in the ''L''^∞-norm
*
Unisolvent point set
In approximation theory, a finite collection of points X \subset R^n is often called unisolvent for a space W if any element w \in W is uniquely determined by its values on X.
X is unisolvent for \Pi^m_n (polynomials in n variables of degree at m ...
— function from given function space is determined uniquely by values on such a set of points
*
Stone–Weierstrass theorem
In mathematical analysis, the Weierstrass approximation theorem states that every continuous function defined on a closed interval (mathematics), interval can be uniform convergence, uniformly approximated as closely as desired by a polynomial fun ...
— continuous functions can be approximated uniformly by polynomials, or certain other function spaces
*Approximation by polynomials:
**
Linear approximation
In mathematics, a linear approximation is an approximation of a general function (mathematics), function using a linear function (more precisely, an affine function). They are widely used in the method of finite differences to produce first order ...
**
Bernstein polynomial
In the mathematics, mathematical field of numerical analysis, a Bernstein polynomial is a polynomial expressed as a linear combination of #Bernstein basis polynomials, Bernstein basis polynomials. The idea is named after mathematician Sergei Nata ...
— basis of polynomials useful for approximating a function
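A sketch of the Bernstein approximation B_n(f)(x) = Σ_k f(k/n)·C(n,k)·x^k·(1−x)^(n−k) on [0, 1] (names illustrative; converges uniformly for continuous f as n grows):

    import numpy as np
    from math import comb

    def bernstein_approx(f, n, x):
        """Degree-n Bernstein approximation of f on [0, 1], evaluated at x (n >= 1)."""
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for k in range(n + 1):
            total += f(k / n) * comb(n, k) * x**k * (1.0 - x)**(n - k)
        return total

    xs = np.linspace(0.0, 1.0, 5)
    vals = bernstein_approx(np.sin, 50, xs)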
**
Bernstein's constant — error when approximating , ''x'', by a polynomial
**
Remez algorithm
The Remez algorithm or Remez exchange algorithm, published by Evgeny Yakovlevich Remez in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the ...
— for constructing the best polynomial approximation in the ''L''^∞-norm
**
Bernstein's inequality (mathematical analysis) — bound on maximum of derivative of polynomial in unit disk
**
Mergelyan's theorem Mergelyan's theorem is a result from approximation by polynomials in complex analysis proved by the Armenian mathematician Sergei Mergelyan in 1951.
:Let ''K'' be a compact subset of the complex plane \mathbb{C} such that \mathbb{C} \setminus ...
— generalization of Stone–Weierstrass theorem for polynomials
**
Müntz–Szász theorem — variant of Stone–Weierstrass theorem for polynomials if some coefficients have to be zero
**
Bramble–Hilbert lemma — upper bound on ''L''^''p'' error of polynomial approximation in multiple dimensions
**
Discrete Chebyshev polynomials — polynomials orthogonal with respect to a discrete measure
**
Favard's theorem In mathematics, Favard's theorem, also called the Shohat–Favard theorem, states that a sequence of polynomials satisfying a suitable three-term recurrence relation is a sequence of orthogonal polynomials. The theorem was introduced in the theor ...
— polynomials satisfying suitable 3-term recurrence relations are orthogonal polynomials
*Approximation by Fourier series / trigonometric polynomials:
**
Jackson's inequality
In approximation theory, Jackson's inequality is an inequality bounding the value of function's best approximation by algebraic or trigonometric polynomials in terms of the modulus of continuity or modulus of smoothness of the function or of it ...
— upper bound for best approximation by a trigonometric polynomial
***
Bernstein's theorem (approximation theory) — a converse to Jackson's inequality
**
Fejér's theorem — Cesàro means of partial sums of Fourier series converge uniformly for continuous periodic functions
**
Erdős–Turán inequality — bounds distance between probability and Lebesgue measure in terms of Fourier coefficients
*Different approximations:
**
Moving least squares
**
Padé approximant
In mathematics, a Padé approximant is the "best" approximation of a function near a specific point by a rational function of given order. Under this technique, the approximant's power series agrees with the power series of the function it is ap ...
***
Padé table — table of Padé approximants
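SciPy exposes a Padé helper built from Taylor coefficients; a short usage sketch (assuming scipy.interpolate.pade is available):

    import math
    from scipy.interpolate import pade

    # Taylor coefficients of exp(x) up to x^4
    an = [1.0 / math.factorial(k) for k in range(5)]
    p, q = pade(an, 2)            # numerator and denominator of the [2/2] approximant
    approx = p(0.5) / q(0.5)      # close to exp(0.5) ~ 1.6487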
**
Hartogs–Rosenthal theorem — continuous functions can be approximated uniformly by rational functions on a set of Lebesgue measure zero
**
Szász–Mirakyan operator — approximation by e^(−''nx'') ''x''^''k'' on a semi-infinite interval
**
Szász–Mirakjan–Kantorovich operator
**
Baskakov operator — generalizes Bernstein polynomials, Szász–Mirakyan operators, and Lupas operators
**
Favard operator — approximation by sums of Gaussians
*
Surrogate model — application: replacing a function that is hard to evaluate by a simpler function
*
Constructive function theory — field that studies connection between degree of approximation and smoothness
*
Universal differential equation — differential–algebraic equation whose solutions can approximate any continuous function
*
Fekete problem — find ''N'' points on a sphere that minimize some kind of energy
*
Carleman's condition In mathematics, particularly, in analysis, Carleman's condition gives a sufficient condition for the determinacy of the moment problem. That is, if a measure \mu satisfies Carleman's condition, there is no other measure \nu having the same moment ...
— condition guaranteeing that a measure is uniquely determined by its moments
*
Krein's condition — condition that exponential sums are dense in weighted ''L''^2 space
*
Lethargy theorem — about distance of points in a metric space from members of a sequence of subspaces
*
Wirtinger's representation and projection theorem
*Journals:
**
Constructive Approximation
''Constructive Approximation'' is "an international mathematics journal dedicated to Approximations, expansions, and related research in: computation, function theory, functional analysis ...
**
Journal of Approximation Theory
The ''Journal of Approximation Theory'' is "devoted to advances in pure and applied approximation theory and related areas."
Miscellaneous
*
Extrapolation
— extending a function or data series beyond the range of the known data points
**
Linear predictive analysis — linear extrapolation
*
Unisolvent functions
In mathematics, a set of ''n'' functions ''f''_1, ''f''_2, ..., ''f''_''n'' is unisolvent (meaning "uniquely solvable") on a domain Ω if the vectors
: \begin{pmatrix} f_1(x_1) \\ f_1(x_2) \\ \vdots \\ f_1(x_n) \end{pmatrix}, \begin{pmatrix} f_2(x_1) \\ f_2(x_2) \\ \vdots \\ f_2(x_n) \end{pmatrix}, \dots ...
— functions for which the interpolation problem has a unique solution
*
Regression analysis
**
Isotonic regression
In statistics and numerical analysis, isotonic regression or monotonic regression is the technique of fitting a free-form line to a sequence of observations such that the fitted line is non-decreasing function, non-decreasing (or non-increasing) ...
*
Curve-fitting compaction
Curve-fitting compaction is data compaction accomplished by replacing data to be stored or transmitted with an analytical expression.
Examples of curve-fitting compaction consisting of discretization and then interpolation are:
* Breaking of a co ...
*
Interpolation (computer graphics)
In the context of live-action and computer animation, interpolation is inbetweening, ...
Finding roots of nonlinear equations
:''See the section #Numerical linear algebra for linear equations''
Root-finding algorithm
In numerical analysis, a root-finding algorithm is an algorithm for finding zeros, also called "roots", of continuous functions. A zero of a function is a number such that . As, generally, the zeros of a function cannot be computed exactly nor ...
— algorithms for solving the equation ''f''(''x'') = 0
*General methods:
**
Bisection method
In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and t ...
— simple and robust; linear convergence
***
Lehmer–Schur algorithm — variant for complex functions
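A minimal sketch of the bisection method just described (names illustrative):

    def bisect(f, a, b, tol=1e-12):
        """Bisection for a continuous f with f(a), f(b) of opposite signs."""
        fa = f(a)
        if fa * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        while b - a > tol:
            m = 0.5 * (a + b)
            if fa * f(m) <= 0:
                b = m                 # the root lies in [a, m]
            else:
                a, fa = m, f(m)       # the root lies in [m, b]
        return 0.5 * (a + b)

    root = bisect(lambda x: x**2 - 2.0, 0.0, 2.0)   # ~ sqrt(2)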
**
Fixed-point iteration
**
Newton's method
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a ...
— based on linear approximation around the current iterate; quadratic convergence
***
Kantorovich theorem
The Kantorovich theorem, or Newton–Kantorovich theorem, is a mathematical statement on the semi-local convergence of Newton's method. It was first stated by Leonid Kantorovich in 1948. It is similar to the form of the Banach fixed-point theorem, ...
— gives a region around solution such that Newton's method converges
***
Newton fractal
The Newton fractal is a boundary set in the complex plane which is characterized by Newton's method applied to a fixed polynomial p(z) \in \mathbb{C}[z] or transcendental function. It is the Julia set of the meromorphic function which is given by Newton's met ...
— indicates which initial condition converges to which root under Newton iteration
***
Quasi-Newton method — uses an approximation of the Jacobian:
****
Broyden's method — uses a rank-one update for the Jacobian
****
Symmetric rank-one — a symmetric (but not necessarily positive definite) rank-one update of the Jacobian
****
Davidon–Fletcher–Powell formula — update of the Jacobian in which the matrix remains positive definite
****
Broyden–Fletcher–Goldfarb–Shanno algorithm — rank-two update of the Jacobian in which the matrix remains positive definite
****
Limited-memory BFGS method — truncated, matrix-free variant of BFGS method suitable for large problems
***
Steffensen's method — uses divided differences instead of the derivative
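A minimal sketch of the basic Newton iteration listed above, x_{n+1} = x_n − f(x_n)/f′(x_n) (names and tolerances are illustrative):

    def newton(f, fprime, x0, tol=1e-12, maxiter=50):
        """Newton iteration x <- x - f(x)/f'(x); converges quadratically
        near a simple root but may diverge from a poor starting point."""
        x = x0
        for _ in range(maxiter):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("Newton iteration did not converge")

    root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)   # ~ sqrt(2)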
**
Secant method — based on linear interpolation at last two iterates
**
False position method
In mathematics, the ''regula falsi'', method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and er ...
— secant method with ideas from the bisection method
**
Muller's method — based on quadratic interpolation at last three iterates
**
Sidi's generalized secant method — higher-order variants of secant method
**
Inverse quadratic interpolation — similar to Muller's method, but interpolates the inverse
**
Brent's method
In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliab ...
— combines bisection method, secant method and inverse quadratic interpolation
**
Ridders' method In numerical analysis, Ridders' method is a root-finding algorithm ...
— fits a linear function times an exponential to last two iterates and their midpoint
**
Halley's method
In numerical analysis, Halley's method is a root-finding algorithm used for functions of one real variable with a continuous second derivative. Edmond Halley was an English mathematician and astronomer who introduced the method now called by his ...
— uses ''f'', ''f''′ and ''f''″; achieves cubic convergence
**
Householder's method — uses first ''d'' derivatives to achieve order ''d'' + 1; generalizes Newton's and Halley's method
*Methods for polynomials:
**
Aberth method
**
Bairstow's method
**
Durand–Kerner method
**
Graeffe's method
**
Jenkins–Traub algorithm — fast, reliable, and widely used
**
Laguerre's method
**
Splitting circle method
*Analysis:
**
Wilkinson's polynomial
*
Numerical continuation — tracking a root as one parameter in the equation changes
**
Piecewise linear continuation Simplicial continuation, or piecewise linear continuation (Allgower and Georg),Eugene L. Allgower, K. Georg, "Introduction to Numerical Continuation Methods", ''SIAM Classics in Applied Mathematics'' 45, 2003.E. L. Allgower, K. Georg, "Simplicial an ...
Optimization
Mathematical optimization
Mathematical optimization (alternatively spelled ''optimisation'') or mathematical programming is the selection of a best element, with regard to some criteria, from some set of available alternatives. It is generally divided into two subfiel ...
— algorithm for finding maxima or minima of a given function
Basic concepts
*
Active set
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequalit ...
*
Candidate solution
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, ...
*
Constraint (mathematics)
In mathematics, a constraint is a condition of an optimization problem that the solution must satisfy. There are several types of constraints—primarily equality constraints, inequality constraints, and integer constraints. The set of c ...
**
Constrained optimization
In mathematical optimization, constrained optimization (in some contexts called constraint optimization) is the process of optimizing an objective function with respect to some variables in the presence of constraints on those variables. The obj ...
— studies optimization problems with constraints
**
Binary constraint — a constraint that involves exactly two variables
*
Corner solution
*
Feasible region
In mathematical optimization and computer science, a feasible region, feasible set, or solution space is the set of all possible points (sets of values of the choice variables) of an optimization problem that satisfy the problem's constraints, ...
— contains all solutions that satisfy the constraints but may not be optimal
*
Global optimum
In mathematical analysis, the maximum and minimum of a function are, respectively, the greatest and least value taken by the function. Known generically as extremum, they may be defined either within a given range (the ''local'' or ''relative' ...
and
Local optimum
In mathematical analysis, the maximum and minimum of a function are, respectively, the greatest and least value taken by the function. Known generically as extremum, they may be defined either within a given range (the ''local'' or ''relative' ...
*
Maxima and minima
In mathematical analysis, the maximum and minimum of a function are, respectively, the greatest and least value taken by the function. Known generically as extremum, they may be defined either within a given range (the ''local'' or ''relative ...
*
Slack variable
In an optimization problem, a slack variable is a variable that is added to an inequality constraint to transform it into an equality constraint. A non-negativity constraint on the slack variable is also added.
Slack variables are used in particu ...
*
Continuous optimization
Continuous optimization is a branch of optimization in applied mathematics.
As opposed to discrete optimization, the variables used in the objective function are required to be continuous variables—that is, to be chosen from a set of ...
*
Discrete optimization
Discrete optimization is a branch of optimization in applied mathematics and computer science. As opposed to continuous optimization, some or all of the variables used in a discrete optimization problem are restricted to be discrete variables&mda ...
Linear programming
Linear programming
Linear programming (LP), also called linear optimization, is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements and objective are represented by linear function#As a polynomia ...
(also treats ''integer programming'') — objective function and constraints are linear
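As a usage sketch, small linear programs can be solved directly with SciPy's linprog (the toy objective and constraints below are illustrative):

    from scipy.optimize import linprog

    # maximize x + 2y subject to x + y <= 4, x <= 3, x >= 0, y >= 0;
    # linprog minimizes, so the objective is negated
    res = linprog(c=[-1.0, -2.0],
                  A_ub=[[1.0, 1.0], [1.0, 0.0]],
                  b_ub=[4.0, 3.0],
                  bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)   # optimum (0, 4) with value 8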
* Algorithms for linear programming:
**
Simplex algorithm
In mathematical optimization, Dantzig's simplex algorithm (or simplex method) is a popular algorithm for linear programming.
The name of the algorithm is derived from the concept of a simplex and was suggested by T. S. Motzkin. Simplices are ...
***
Bland's rule — rule to avoid cycling in the simplex method
***
Klee–Minty cube — perturbed (hyper)cube; simplex method has exponential complexity on such a domain
***
Criss-cross algorithm
In optimization (mathematics), mathematical optimization, the criss-cross algorithm is any of a family of algorithms for linear programming. Variants of the criss-cross algorithm also solve more general problems with linear programming, linear ...
— similar to the simplex algorithm
***
Big M method — variation of simplex algorithm for problems with both "less than" and "greater than" constraints
**
Interior point method
Interior-point methods (also referred to as barrier methods or IPMs) are algorithms for solving Linear programming, linear and nonlinear programming, non-linear convex optimization problems. IPMs combine two advantages of previously-known algorit ...
***
Ellipsoid method
In mathematical optimization, the ellipsoid method is an iterative method for convex optimization, minimizing convex functions over convex sets. The ellipsoid method generates a sequence of ellipsoids whose volume uniformly decreases at every ste ...
***
Karmarkar's algorithm
***
Mehrotra predictor–corrector method
**
Column generation
**
k-approximation of k-hitting set — algorithm for specific LP problems (to find a weighted hitting set)
*
Linear complementarity problem In mathematical optimization theory, the linear complementarity problem (LCP) arises frequently in computational mechanics and encompasses the well-known quadratic programming as a special case. It was proposed by Cottle and Dantzig in 1968 ...
*Decompositions:
**
Benders' decomposition
Benders decomposition (or Benders' decomposition) is a technique in mathematical programming that allows the solution of very large linear programming problems that have a special block structure. This block structure often occurs in application ...
**
Dantzig–Wolfe decomposition
Dantzig–Wolfe decomposition is an algorithm for solving linear programming problems with special structure. It was originally developed by George Dantzig and Philip Wolfe and initially published in 1960. Many texts on linear programming have s ...
**
Theory of two-level planning The theory of two-level planning (alternatively, Kornai–Liptak decomposition) is a method that Matrix decomposition , decomposes large problems of Linear programming, linear optimization into sub-problems. This decomposition simplifies the soluti ...
**
Variable splitting
*
Basic solution (linear programming) — solution at vertex of feasible region
*
Fourier–Motzkin elimination
Fourier–Motzkin elimination, also known as the FME method, is a mathematical algorithm for eliminating variables from a system of linear inequalities. It can output real solutions.
The algorithm is named after Joseph Fourier who proposed the ...
*
Hilbert basis (linear programming) — set of integer vectors in a convex cone which generate all integer vectors in the cone
*
LP-type problem
*
Linear inequality
In mathematics a linear inequality is an inequality (mathematics), inequality which involves a linear function. A linear inequality contains one of the symbols of inequality:
* < less than
* > greater than
* ≤ less than or equal to
* ≥ greater than or equal ...
*
Vertex enumeration problem — list all vertices of the feasible set
Convex optimization
Convex optimization
Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets (or, equivalently, maximizing concave functions over convex sets). Many classes of convex optimization problems ...
*
Quadratic programming
Quadratic programming (QP) is the process of solving certain mathematical optimization problems involving quadratic functions. Specifically, one seeks to optimize (minimize or maximize) a multivariate quadratic function subject to linear constr ...
**
Linear least squares (mathematics)
Linear least squares (LLS) is the least squares approximation of linear functions to data.
It is a set of formulations for solving statistical problems involved in linear regression, including variants for ordinary (unweighted), weighted, and ...
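A usage sketch of linear least squares with NumPy (the line-fitting data below are illustrative):

    import numpy as np

    # fit y = a + b*t in the least-squares sense
    t = np.linspace(0.0, 1.0, 20)
    y = 1.0 + 2.0 * t + 0.01 * np.random.randn(20)
    A = np.column_stack([np.ones_like(t), t])       # design matrix
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)    # coef ~ [1, 2]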
**
Total least squares
In applied statistics, total least squares is a type of errors-in-variables regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. It is a generaliz ...
**
Frank–Wolfe algorithm
The Frank–Wolfe algorithm is an iterative first-order optimization algorithm for constrained convex optimization. Also known as the conditional gradient method, reduced gradient algorithm and the convex combination algorithm, the method was or ...
**
Sequential minimal optimization — breaks up large QP problems into a series of smallest possible QP problems
**
Bilinear program
*
Basis pursuit — minimize the ''L''^1-norm of a vector subject to linear constraints
**
Basis pursuit denoising (BPDN) — regularized version of basis pursuit
***
In-crowd algorithm — algorithm for solving basis pursuit denoising
*
Linear matrix inequality
*
Conic optimization
**
Semidefinite programming
Semidefinite programming (SDP) is a subfield of mathematical programming concerned with the optimization of a linear objective function (a user-specified function that the user wants to minimize or maximize)
over the intersection of the cone of po ...
**
Second-order cone programming
A second-order cone program (SOCP) is a convex optimization problem of the form
:minimize f^T x
:subject to \lVert A_i x + b_i \rVert_2 \leq c_i^T x + d_i, \quad i = 1, \dots, m
:: Fx = g
where the problem parameters are f \in \mathbb{R}^n, ...
**
Sum-of-squares optimization
A sum-of-squares optimization program is an optimization problem with a linear cost function and a particular type of constraint on the decision variables. These constraints are of the form that when the decision variables are used as coefficien ...
**Quadratic programming (see above)
*
Bregman method
The Bregman method is an iterative algorithm to solve certain convex optimization problems involving regularization. The original version is due to Lev M. Bregman, who published it in 1967.
The algorithm is a row-action method accessing con ...
— row-action method for strictly convex optimization problems
*
Proximal gradient method — uses a splitting of the objective function into a sum of possibly non-differentiable pieces
*
Subgradient method — extension of steepest descent for problems with a non-differentiable objective function
*
Biconvex optimization — generalization where objective function and constraint set can be biconvex
Nonlinear programming
Nonlinear programming
In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem where some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is one of calculation ...
— the most general optimization problem in the usual framework
*Special cases of nonlinear programming:
**See ''Linear programming'' and ''Convex optimization'' above
**
Geometric programming — problems involving signomials or posynomials
***
Signomial — similar to polynomials, but exponents need not be integers
***
Posynomial — a signomial with positive coefficients
**
Quadratically constrained quadratic program
**
Linear-fractional programming In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a rat ...
— objective is ratio of linear functions, constraints are linear
***
Fractional programming — objective is ratio of nonlinear functions, constraints are linear
**
Nonlinear complementarity problem (NCP) — find ''x'' such that ''x'' ≥ 0, ''f''(''x'') ≥ 0 and ''x''^T ''f''(''x'') = 0
**
Least squares
The method of least squares is a mathematical optimization technique that aims to determine the best fit function by minimizing the sum of the squares of the differences between the observed values and the predicted values of the model. The me ...
— the objective function is a sum of squares
***
Non-linear least squares
Non-linear least squares is the form of least squares analysis used to fit a set of ''m'' observations with a model that is non-linear in ''n'' unknown parameters (''m'' ≥ ''n''). It is used in some forms of nonlinear regression. The ...
***
Gauss–Newton algorithm
****
BHHH algorithm — variant of Gauss–Newton in econometrics
****
Generalized Gauss–Newton method
The generalized Gauss–Newton method is a generalization of the least-squares method originally described by Carl Friedrich Gauss and of Newton's method due to Isaac Newton ...
— for constrained nonlinear least-squares problems
***
Levenberg–Marquardt algorithm
In mathematics and computing, the Levenberg–Marquardt algorithm (LMA or just LM), also known as the damped least-squares (DLS) method, is used to solve non-linear least squares problems. These minimization problems arise especially in least s ...
***
Iteratively reweighted least squares
The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a ''p''-norm:
\underset{\boldsymbol\beta}{\operatorname{arg\,min}} \sum_{i=1}^{n} \big| y_i - f_i(\boldsymbol\beta) \big|^p,
by an iterative meth ...
(IRLS) — solves a weighted least-squares problem at every iteration
***
Partial least squares — statistical techniques similar to principal components analysis
****
Non-linear iterative partial least squares (NIPALS)
**
Mathematical programming with equilibrium constraints — constraints include variational inequalities or complementarities
**Univariate optimization:
***
Golden section search
***
Successive parabolic interpolation — based on quadratic interpolation through the last three iterates
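A sketch of the golden section search listed above (assumes f is unimodal on [a, b]; names illustrative):

    import math

    def golden_section(f, a, b, tol=1e-8):
        """Minimize a unimodal f on [a, b] by golden-section search."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0   # 1/phi ~ 0.618
        c = b - invphi * (b - a)
        d = a + invphi * (b - a)
        while b - a > tol:
            if f(c) < f(d):
                b, d = d, c                     # minimum lies in [a, d]
                c = b - invphi * (b - a)
            else:
                a, c = c, d                     # minimum lies in [c, b]
                d = a + invphi * (b - a)
        return 0.5 * (a + b)

    xmin = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)   # ~ 2.0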
*General algorithms:
**Concepts:
***
Descent direction
***
Guess value — the initial guess for a solution with which an algorithm starts
***
Line search
In optimization, line search is a basic iterative approach to find a local minimum \mathbf^* of an objective function f:\mathbb R^n\to\mathbb R. It first finds a descent direction along which the objective function f will be reduced, and then co ...
****
Backtracking line search
****
Wolfe conditions
**
Gradient method In optimization, a gradient method is an algorithm ...
— method that uses the gradient as the search direction
***
Gradient descent
Gradient descent is a method for unconstrained mathematical optimization. It is a first-order iterative algorithm for minimizing a differentiable multivariate function.
The idea is to take repeated steps in the opposite direction of the gradi ...
****
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an Iterative method, iterative method for optimizing an objective function with suitable smoothness properties (e.g. Differentiable function, differentiable or Subderivative, subdifferentiable ...
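A minimal fixed-step gradient descent sketch (the learning rate and toy quadratic below are illustrative assumptions):

    import numpy as np

    def gradient_descent(grad, x0, lr=0.05, tol=1e-8, maxiter=1000):
        """Minimize by stepping against the gradient with a fixed learning rate."""
        x = np.asarray(x0, dtype=float)
        for _ in range(maxiter):
            g = grad(x)
            x = x - lr * g
            if np.linalg.norm(g) < tol:
                break
        return x

    # minimize f(x, y) = (x - 1)^2 + 10 * y^2
    xmin = gradient_descent(lambda v: np.array([2.0 * (v[0] - 1.0), 20.0 * v[1]]),
                            [0.0, 1.0])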
***
Landweber iteration — mainly used for ill-posed problems
**
Successive linear programming
Successive Linear Programming (SLP), also known as Sequential Linear Programming, is an optimization technique for approximately solving nonlinear optimization problems. It is related to, but distinct from, quasi-Newton methods.
Starting at som ...
(SLP) — replace problem by a linear programming problem, solve that, and repeat
**
Sequential quadratic programming
Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization, also known as Lagrange-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twi ...
(SQP) — replace problem by a quadratic programming problem, solve that, and repeat
**
Newton's method in optimization
In calculus, Newton's method (also called Newton–Raphson) is an iterative method for finding the roots of a differentiable function f, which are solutions to the equation f(x)=0. However, to optimize a twice-differentiable f, our goal is to f ...
***See also under ''Newton algorithm'' in the section ''Finding roots of nonlinear equations''
**
Nonlinear conjugate gradient method
**Derivative-free methods
***
Coordinate descent — move in one of the coordinate directions
****
Adaptive coordinate descent — adapt coordinate directions to objective function
****
Random coordinate descent — randomized version
***
Nelder–Mead method
The Nelder–Mead method (also downhill simplex method, amoeba method, or polytope method) is a numerical method used to find the minimum or maximum of an objective function in a multidimensional space. It is a direct search method (based on ...
***
Pattern search (optimization)
Pattern search (also known as direct search, derivative-free search, or black-box search) is a family of numerical optimization methods that does not require a gradient. As a result, it can be used on functions that are not continuous or diffe ...
***
Powell's method — based on conjugate gradient descent
***
Rosenbrock methods
Rosenbrock methods refers to either of two distinct ideas in numerical computation, both named for Howard H. Rosenbrock.
Numerical solution of differential equations
Rosenbrock methods for stiff differential equations are a family of single-ste ...
— derivative-free method, similar to Nelder–Mead but with guaranteed convergence
**
Augmented Lagrangian method
Augmented Lagrangian methods are a certain class of algorithms for solving constrained optimization problems. They have similarities to penalty methods in that they replace a constrained optimization problem by a series of unconstrained problems ...
— replaces constrained problems by unconstrained problems with a term added to the objective function
**
Ternary search
A ternary search algorithm is a technique in computer science for finding the minimum or maximum of a unimodal function.
Assume we are looking for a maximum of ''f''(''x'') and that we know the maximum lies somewhere between ''A'' and ''B''. Fo ...
**
Tabu search Tabu search (TS) is a metaheuristic search method employing local search methods used for mathematical optimization. It was created by Fred W. Glover in 1986 and formalized in 1989.
Local (neighborhood) searches take a potential solution to a ...
**
Guided Local Search — modification of search algorithms which builds up penalties during a search
**Reactive search optimization (RSO) — the algorithm adapts its parameters automatically
**
MM algorithm — majorize-minimization, a wide framework of methods
**
Least absolute deviations
Least absolute deviations (LAD), also known as least absolute errors (LAE), least absolute residuals (LAR), or least absolute values (LAV), is a statistical optimality criterion and a statistical optimization technique based on minimizing the su ...
***
Expectation–maximization algorithm
In statistics, an expectation–maximization (EM) algorithm is an iterative method to find (local) maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent varia ...
****
Ordered subset expectation maximization
**
Nearest neighbor search
Nearest neighbor search (NNS), as a form of proximity search, is the optimization problem of finding the point in a given set that is closest (or most similar) to a given point. Closeness is typically expressed in terms of a dissimilarity function: ...
**
Space mapping — uses "coarse" (ideal or low-fidelity) and "fine" (practical or high-fidelity) models
Optimal control and infinite-dimensional optimization
Optimal control
Optimal control theory is a branch of control theory that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations ...
*
Pontryagin's minimum principle
Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints for the state or input controls. It states that i ...
— infinite-dimensional version of Lagrange multipliers
**
Costate equations
The costate equation is related to the state equation used in optimal control. It is also referred to as auxiliary, adjoint, influence, or multiplier equation. It is stated as a vector of first order differential equations
: \dot{\lambda}^{\mathsf{T}}(t) = -\frac{\partial H}{\partial x}
where ...
— equation for the "Lagrange multipliers" in Pontryagin's minimum principle
**
Hamiltonian (control theory)
The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. In ...
— minimum principle says that this function should be minimized
*Types of problems:
**
Linear-quadratic regulator — system dynamics is a linear differential equation, objective is quadratic
**
Linear-quadratic-Gaussian control (LQG) — system dynamics is a linear SDE with additive noise, objective is quadratic
***
Optimal projection equations — method for reducing dimension of LQG control problem
*
Algebraic Riccati equation — matrix equation occurring in many optimal control problems
*
Bang–bang control — control that switches abruptly between two states
*
Covector mapping principle
*
Differential dynamic programming — uses locally-quadratic models of the dynamics and cost functions
*
DNSS point — initial state for certain optimal control problems with multiple optimal solutions
*
Legendre–Clebsch condition — second-order condition for solution of optimal control problem
*
Pseudospectral optimal control
**
Bellman pseudospectral method — based on Bellman's principle of optimality
**
Chebyshev pseudospectral method — uses Chebyshev polynomials (of the first kind)
**
Flat pseudospectral method — combines Ross–Fahroo pseudospectral method with differential flatness
**
Gauss pseudospectral method — uses collocation at the Legendre–Gauss points
**
Legendre pseudospectral method — uses Legendre polynomials
**
Pseudospectral knotting method — generalization of pseudospectral methods in optimal control
**
Ross–Fahroo pseudospectral method — class of pseudospectral method including Chebyshev, Legendre and knotting
*
Ross–Fahroo lemma — condition to make discretization and duality operations commute
*
Ross' π lemma — there is a fundamental time constant within which a control solution must be computed for controllability and stability
*
Sethi model — optimal control problem modelling advertising
Infinite-dimensional optimization In certain optimization (mathematics), optimization problems the unknown optimal solution might not be a number or a vector, but rather a continuous quantity, for example a function (mathematics), function or the shape of a body. Such a problem is a ...
*
Semi-infinite programming In optimization theory, semi-infinite programming (SIP) is an optimization problem with a finite number of variables and an infinite number of constraints, or an infinite number of variables and a finite number of constraints. In the former case th ...
— infinite number of variables and finite number of constraints, or the other way around
*
Shape optimization,
Topology optimization — optimization over a set of regions
**
Topological derivative — derivative with respect to changing in the shape
*
Generalized semi-infinite programming — finite number of variables, infinite number of constraints
Uncertainty and randomness
*Approaches to deal with uncertainty:
**
Markov decision process
**
Partially observable Markov decision process
A partially observable Markov decision process (POMDP) is a generalization of a Markov decision process (MDP). A POMDP models an agent decision process in which it is assumed that the system dynamics are determined by an MDP, but the agent cannot ...
**
Robust optimization
Robust optimization is a field of mathematical optimization theory that deals with optimization problems in which a certain measure of robustness is sought against uncertainty that can be represented as deterministic variability in the value of the ...
***
Wald's maximin model
**
Scenario optimization
The scenario approach or scenario optimization approach is a technique for obtaining solutions to robust optimization and chance-constrained optimization problems based on a sample of the constraint (mathematics), constraints. It also relates to in ...
— constraints are uncertain
**
Stochastic approximation
Stochastic approximation methods are a family of iterative methods typically used for root-finding problems or for optimization problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving l ...
**
Stochastic optimization
Stochastic optimization (SO) are optimization methods that generate and use random variables. For stochastic optimization problems, the objective functions or constraints are random. Stochastic optimization also include methods with random iter ...
**
Stochastic programming
**
Stochastic gradient descent
Stochastic gradient descent (often abbreviated SGD) is an Iterative method, iterative method for optimizing an objective function with suitable smoothness properties (e.g. Differentiable function, differentiable or Subderivative, subdifferentiable ...
*
Random optimization
Random optimization (RO) is a family of numerical optimization methods that do not require the gradient of the optimization problem and RO can hence be used on functions that are not continuous or differentiable. Such optimization methods are als ...
algorithms:
**
Random search
Random search (RS) is a family of numerical optimization methods that do not require the gradient of the optimization problem, and RS can hence be used on functions that are not continuous or differentiable. Such optimization methods are also kno ...
— choose a point randomly in ball around current iterate
**
Simulated annealing
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. ...
***
Adaptive simulated annealing — variant in which the algorithm parameters are adjusted during the computation.
***
Great Deluge algorithm
***
Mean field annealing — deterministic variant of simulated annealing
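A minimal one-dimensional simulated annealing sketch (the cooling schedule, step size, and objective are illustrative assumptions, not a tuned implementation):

    import math
    import random

    def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=5000):
        """Minimize f by random perturbations, accepting uphill moves
        with probability exp(-delta/T) while the temperature T decreases."""
        x, fx, temp = x0, f(x0), t0
        best, fbest = x, fx
        for _ in range(iters):
            y = x + random.uniform(-step, step)
            fy = f(y)
            if fy < fx or random.random() < math.exp(-(fy - fx) / temp):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x, fx
            temp *= cooling
        return best

    xmin = simulated_annealing(lambda x: x**4 - 3.0 * x**2 + x, 0.0)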
**
Bayesian optimization
Bayesian optimization is a sequential design strategy for global optimization of black-box functions, that does not assume any functional forms. It is usually employed to optimize expensive-to-evaluate functions. With the rise of artificial intell ...
— treats objective function as a random function and places a prior over it
**
Evolutionary algorithm
Evolutionary algorithms (EA) reproduce essential elements of the biological evolution in a computer algorithm in order to solve "difficult" problems, at least Approximation, approximately, for which no exact or satisfactory solution methods are k ...
***
Differential evolution
Differential evolution (DE) is an evolutionary algorithm to optimize a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics as they make few ...
***
Evolutionary programming
Evolutionary programming is an evolutionary algorithm, where a share of new population is created by mutation of previous population without crossover. Evolutionary programming differs from evolution strategy ES(\mu+\lambda) in one detail. All in ...
***
Genetic algorithm
In computer science and operations research, a genetic algorithm (GA) is a metaheuristic inspired by the process of natural selection that belongs to the larger class of evolutionary algorithms (EA). Genetic algorithms are commonly used to g ...
,
Genetic programming
Genetic programming (GP) is an evolutionary algorithm, an artificial intelligence technique mimicking natural evolution, which operates on a population of programs. It applies the genetic operators selection (evolutionary algorithm), selection a ...
****
Genetic algorithms in economics
Genetic algorithms have increasingly been applied to economics since the pioneering work by John H. Miller in 1986. It has been used to characterize a variety of models including the cobweb model, the overlapping generations model, game theory, Ge ...
***
MCACEA (Multiple Coordinated Agents Coevolution Evolutionary Algorithm) — uses an evolutionary algorithm for every agent
***
Simultaneous perturbation stochastic approximation (SPSA)
**
Luus–Jaakola In computational engineering, Luus–Jaakola (LJ) denotes a heuristic for global optimization of a real-valued function. In engineering use, LJ is not an algorithm that terminates with an optimal solution; nor is it an iterative method that generate ...
**
Particle swarm optimization
In computational science, particle swarm optimization (PSO) is a computational method that Mathematical optimization, optimizes a problem by iterative method, iteratively trying to improve a candidate solution with regard to a given measure of qu ...
**
Stochastic tunneling
**
Harmony search — mimics the improvisation process of musicians
**see also the section ''Monte Carlo method''
Theoretical aspects
*
Convex analysis
Convex analysis is the branch of mathematics devoted to the study of properties of convex functions and convex sets, often with applications in convex optimization, convex minimization, a subdomain of optimization (mathematics), optimization theor ...
— the study of convex functions; a convex function ''f'' satisfies ''f''(''tx'' + (1 − ''t'')''y'') ≤ ''tf''(''x'') + (1 − ''t'')''f''(''y'') for ''t'' ∈ [0, 1]
**
Pseudoconvex function
In convex analysis and the calculus of variations, both branches of mathematics, a pseudoconvex function is a function (mathematics), function that behaves like a convex function with respect to finding its local extrema, local minima, but need not ...
— function ''f'' such that ∇''f'' · (''y'' − ''x'') ≥ 0 implies ''f''(''y'') ≥ ''f''(''x'')
**
Quasiconvex function
In mathematics, a quasiconvex function is a real number, real-valued function (mathematics), function defined on an interval (mathematics), interval or on a convex set, convex subset of a real vector space such that the inverse image of any ...
— function ''f'' such that ''f''(''tx'' + (1 − ''t'')''y'') ≤ max(''f''(''x''), ''f''(''y'')) for ''t'' ∈ [0, 1]
**
Subderivative
In mathematics, the subderivative (or subgradient) generalizes the derivative to convex functions which are not necessarily differentiable. The set of subderivatives at a point is called the subdifferential at that point. Subderivatives arise in c ...
**
Geodesic convexity — convexity for functions defined on a Riemannian manifold
*
Duality (optimization)
In mathematical optimization theory, duality or the duality principle is the principle that optimization problems may be viewed from either of two perspectives, the primal problem or the dual problem. If the primal is a minimization problem then th ...
**
Weak duality — dual solution gives a bound on the primal solution
**
Strong duality — primal and dual solutions are equivalent
**
Shadow price
A shadow price is the monetary value assigned to an abstract or intangible commodity which is not traded in the marketplace. This often takes the form of an externality. Shadow prices are also known as the recalculation of known market prices in ...
**
Dual cone and polar cone
**
Duality gap — difference between primal and dual solution
**
Fenchel's duality theorem — relates minimization problems with maximization problems of convex conjugates
**
Perturbation function — any function which relates to primal and dual problems
**
Slater's condition
In mathematics, Slater's condition (or Slater condition) is a sufficient condition for strong duality to hold for a convex optimization problem, named after Morton L. Slater. Informally, Slater's condition states that the feasible region must ...
— sufficient condition for strong duality to hold in a convex optimization problem
**
Total dual integrality — concept of duality for integer linear programming
**
Wolfe duality
In mathematical optimization, Wolfe duality, named after Philip Wolfe, is type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept a lower bound for a minimization problem can be f ...
— for when objective function and constraints are differentiable
*
Farkas' lemma
In mathematics, Farkas' lemma is a solvability theorem for a finite system of linear inequalities. It was originally proven by the Hungarian mathematician Gyula Farkas.
Farkas' lemma is the key result underpinning the linear programming duali ...
*
Karush–Kuhn–Tucker conditions
In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be ...
(KKT) — first-order necessary conditions for a solution to be optimal; sufficient under suitable convexity assumptions
**
Fritz John conditions — variant of KKT conditions
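For reference, the KKT conditions for min f(x) subject to g_i(x) ≤ 0 and h_j(x) = 0, with multipliers μ and λ, can be stated in LaTeX as:

    \begin{aligned}
    &\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*)
                   + \sum_{j=1}^{p} \lambda_j \nabla h_j(x^*) = 0
      && \text{(stationarity)} \\
    &g_i(x^*) \le 0, \quad h_j(x^*) = 0
      && \text{(primal feasibility)} \\
    &\mu_i \ge 0
      && \text{(dual feasibility)} \\
    &\mu_i \, g_i(x^*) = 0
      && \text{(complementary slackness)}
    \end{aligned}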
*
Lagrange multiplier
In mathematical optimization, the method of Lagrange multipliers is a strategy for finding the local maxima and minima of a function (mathematics), function subject to constraint (mathematics), equation constraints (i.e., subject to the conditio ...
**
Lagrange multipliers on Banach spaces In the field of calculus of variations in mathematics, the method of Lagrange multipliers on Banach spaces can be used to solve certain infinite-dimensional constrained optimization problems. The method is a generalization of the classical method ...
*
Semi-continuity
In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function f is upper (respectively, lower) semicontinuous at a point x_0 if, r ...
*
Complementarity theory
A complementarity problem is a type of mathematical optimization problem. It is the problem of optimizing (minimizing or maximizing) a function of two vector variables subject to certain requirements (constraints) which include: that the inner pro ...
— study of problems with constraints of the form 〈''u'', ''v''〉 = 0
**
Mixed complementarity problem
***
Mixed linear complementarity problem
***
Lemke's algorithm — method for solving (mixed) linear complementarity problems
*
Danskin's theorem — used in the analysis of minimax problems
*
Maximum theorem — the maximum and maximizer are continuous as function of parameters, under some conditions
*
No free lunch in search and optimization
In computational complexity and optimization the no free lunch theorem is a result that states that for certain types of mathematical problems, the computational cost of finding a solution, averaged over all problems in the class, is the same ...
*
Relaxation (approximation) In mathematical optimization and related fields, relaxation is a mathematical model, modeling strategy. A relaxation is an approximation of a difficult problem by a nearby problem that is easier to solve. A solution of the relaxed problem provides ...
— approximating a given problem by an easier problem by relaxing some constraints
**
Lagrangian relaxation
In the field of mathematical optimization, Lagrangian relaxation is a relaxation method which approximates a difficult problem of constrained optimization by a simpler problem. A solution to the relaxed problem is an approximate solution to the o ...
**
Linear programming relaxation
In mathematics, the relaxation of a (mixed) integer linear program is the problem that arises by removing the integrality constraint of each variable.
For example, in a 0–1 integer program, all constraints are of the form
:x_i \in \{0, 1\}.
The rela ...
— ignoring the integrality constraints in a linear programming problem
*
Self-concordant function
*
Reduced cost
In linear programming, reduced cost, or opportunity cost, is the amount by which an objective function coefficient would have to improve (so increase for maximization problem, decrease for minimization problem) before it would be possible for a c ...
— cost for increasing a variable by a small amount
*
Hardness of approximation In computer science, hardness of approximation is a field that studies the algorithmic complexity of finding near-optimal solutions to optimization problems.
Hardness of approximation complements the study of approximation algorithms by pro ...
— computational complexity of getting an approximate solution
Applications
*In geometry:
**
Geometric median
In geometry, the geometric median of a discrete point set in a Euclidean space is the point minimizing the sum of distances to the sample points. This generalizes the median, which has the property of minimizing the sum of distances or absolute ...
— the point minimizing the sum of distances to a given set of points (see the sketch after this list)
**
Chebyshev center — the centre of the smallest ball containing a given set of points
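The geometric median has no closed form in general; a standard way to approximate it is Weiszfeld's algorithm, an iteratively reweighted averaging scheme. Below is a minimal Python/NumPy sketch; the tolerance, iteration cap and sample points are illustrative choices, not part of any canonical specification.
<syntaxhighlight lang="python">
import numpy as np

def geometric_median(points, tol=1e-9, max_iter=1000):
    """Approximate the geometric median by Weiszfeld's fixed-point iteration."""
    y = points.mean(axis=0)              # start from the centroid
    for _ in range(max_iter):
        d = np.linalg.norm(points - y, axis=1)
        if np.any(d < tol):              # iterate hit a sample point; stop (a simplification)
            return y
        w = 1.0 / d                      # inverse-distance weights
        y_next = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(y_next - y) < tol:
            return y_next
        y = y_next
    return y

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median(pts))
</syntaxhighlight>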
*In statistics:
**
Iterated conditional modes — maximizing joint probability of Markov random field
**
Response surface methodology
In statistics, response surface methodology (RSM) explores the relationships between several explanatory variables and one or more response variables. RSM is an empirical model which employs mathematical and statistical techniques to r ...
— used in the design of experiments
*
Automatic label placement Automatic label placement, sometimes called text placement or name placement, comprises the computer methods of placing labels automatically on a map or chart. This is related to the typographic design of such labels.
The typical features depicted ...
*
Compressed sensing
Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal by finding solutions to underdetermined s ...
— reconstruct a signal from knowledge that it is sparse or compressible
*
Cutting stock problem
In operations research, the cutting-stock problem is the problem of cutting standard-sized pieces of stock material, such as paper rolls or sheet metal, into pieces of specified sizes while minimizing material wasted. It is an optimizat ...
*
Demand optimization
*
Destination dispatch — an optimization technique for dispatching elevators
*
Energy minimization
*
Entropy maximization
*
Highly optimized tolerance
*
Hyperparameter optimization
In machine learning, hyperparameter optimization or tuning is the problem of choosing a set of optimal hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value is used to control the learning process, which must be con ...
*
Inventory control problem
**
Newsvendor model
The newsvendor (or newsboy, single-period (William J. Stevenson, ''Operations Management'', 10th ed., 2009, p. 581), or salvageable) model is a mathematical model in operations management and applied economics used to determine optimal inventory ...
**
Extended newsvendor model
**
Assemble-to-order system In applied probability, an assemble-to-order system is a model of a warehouse operating a build to order policy where products are assembled from components only once an order has been made.
The time to assemble a product from components is negligi ...
*
Linear programming decoding
*
Linear search problem
In computational complexity theory, the linear search problem is an optimal search problem introduced by Richard E. Bellman and independently considered by Anatole Beck.
"An immobile hider is located on the real line according to ...
— find a point on a line by moving along the line
*
Low-rank approximation — find best approximation, constraint is that rank of some matrix is smaller than a given number
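By the Eckart–Young theorem, the best rank-''k'' approximation in the Frobenius (and spectral) norm is obtained by truncating the singular value decomposition. A small NumPy sketch; the random test matrix and the target rank are arbitrary illustrations:
<syntaxhighlight lang="python">
import numpy as np

def best_rank_k(A, k):
    """Best rank-k approximation of A (Eckart-Young) via truncated SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

A = np.random.default_rng(0).standard_normal((6, 4))
A2 = best_rank_k(A, 2)
print(np.linalg.matrix_rank(A2))   # 2
print(np.linalg.norm(A - A2))      # Frobenius error = sqrt(s_3**2 + s_4**2)
</syntaxhighlight>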
*
Meta-optimization
Meta-optimization from numerical optimization is the use of one optimization method to tune another optimization method. Meta-optimization is reported to have been used as early as in the late 1970s by Mercer and Sampson for finding optimal paramet ...
— optimization of the parameters in an optimization method
*
Multidisciplinary design optimization
Multi-disciplinary design optimization (MDO) is a field of engineering that uses optimization methods to solve design problems incorporating a number of disciplines. It is also known as multidisciplinary system design optimization (MSDO), and mul ...
*
Optimal computing budget allocation — maximize the overall simulation efficiency for finding an optimal decision
*
Paper bag problem
In geometry, the paper bag problem or teabag problem is to calculate the maximum possible inflated volume of an initially flat sealed rectangular bag which has the same shape as a cushion or pillow, made out of two pieces of material which can ben ...
*
Process optimization
Process optimization is the discipline of adjusting a process so as to make the best or most effective use of some specified set of parameters without violating some constraint. Common goals are minimizing cost and maximizing throughput and/or e ...
*
Recursive economics — individuals make a series of two-period optimization decisions over time.
*
Stigler diet
*
Space allocation problem
*
Stress majorization
*
Trajectory optimization
Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop ...
*
Transportation theory
*
Wing-shape optimization
Miscellaneous
*
Combinatorial optimization
Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combina ...
*
Dynamic programming
**
Bellman equation
A Bellman equation, named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the "value" of a decision problem ...
**
Hamilton–Jacobi–Bellman equation
The Hamilton–Jacobi–Bellman (HJB) equation is a nonlinear partial differential equation that provides necessary and sufficient conditions for optimality of a control with respect to a loss function. Its solution is the value function of the opti ...
— continuous-time analogue of Bellman equation
**
Backward induction
Backward induction is the process of determining a sequence of optimal choices by reasoning from the endpoint of a problem or situation back to its beginning using individual events or actions. Backward induction involves examining the final point ...
— solving dynamic programming problems by reasoning backwards in time
**
Optimal stopping
In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to ...
— choosing the optimal time to take a particular action
***
Odds algorithm
In decision theory, the odds algorithm (or Bruss algorithm) is a mathematical method for computing optimal strategies for a class of problems that belong to the domain of optimal stopping problems. Their solution follows from the ''odds strategy'' ...
***
Robbins' problem
*
Global optimization
Global optimization is a branch of operations research, applied mathematics, and numerical analysis that attempts to find the global minimum or maximum of a function or a set of functions on a given set. It is usually described as a minimization ...
**
BRST algorithm
**
MCS algorithm
For mathematical optimization, Multilevel Coordinate Search (MCS) is an efficient algorithm for bound constrained global optimization using function values only.
To do so, the n-dimensional search space is represented by a set of non-intersec ...
*
Multi-objective optimization
Multi-objective optimization or Pareto optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, or multiattribute optimization) is an area of multiple-criteria decision making that is concerned ...
— there are multiple conflicting objectives
**
Benson's algorithm
Benson's algorithm, named after Harold Benson, is a method for solving multi-objective linear programming problems and vector linear programs. This works by finding the "efficient extreme points in the outcome set". The primary concept in Bens ...
— for linear
vector optimization
Vector optimization is a subarea of mathematical optimization where optimization problems with vector-valued objective functions are optimized with respect to a given partial ordering and subject to certain constraints. A m ...
problems
*
Bilevel optimization — studies problems in which one problem is embedded in another
*
Optimal substructure
In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed from optimal solutions of its subproblems. This property is used to determine the usefulness of greedy algorithms for a problem ...
*
Dykstra's projection algorithm — finds a point in intersection of two convex sets
*Algorithmic concepts:
**
Barrier function
**
Penalty method
In mathematical optimization, penalty methods are a certain class of algorithms for solving constrained optimization problems.
A penalty method replaces a constrained optimization problem by a series of unconstrained problems whose solutions idea ...
**
Trust region
*
Test functions for optimization
In applied mathematics, test functions, known as artificial landscapes, are useful to evaluate characteristics of optimization algorithms, such as convergence rate, precision, robustness and general performance.
Here some test ...
**
Rosenbrock function — two-dimensional function with a banana-shaped valley (see the sketch after this list)
**
Himmelblau's function — two-dimensional function with four local minima, defined by ''f''(''x'', ''y'') = (''x''² + ''y'' − 11)² + (''x'' + ''y''² − 7)²
**
Rastrigin function — two-dimensional function with many local minima
**
Shekel function — multimodal and multidimensional
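To illustrate how such landscapes are used, the sketch below (referenced from the Rosenbrock entry above) evaluates the Rosenbrock and Himmelblau functions at known minimizers; the coefficients follow the standard definitions of the two functions.
<syntaxhighlight lang="python">
def rosenbrock(x, y, a=1.0, b=100.0):
    """Rosenbrock 'banana' function; global minimum 0 at (a, a**2)."""
    return (a - x) ** 2 + b * (y - x ** 2) ** 2

def himmelblau(x, y):
    """Himmelblau's function; value 0 at each of its four local minima."""
    return (x ** 2 + y - 11) ** 2 + (x + y ** 2 - 7) ** 2

print(rosenbrock(1.0, 1.0))   # 0.0
print(himmelblau(3.0, 2.0))   # 0.0, one of the four minima
</syntaxhighlight>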
*
Mathematical Optimization Society
Numerical quadrature (integration)
Numerical integration
In analysis, numerical integration comprises a broad family of algorithms for calculating the numerical value of a definite integral.
The term numerical quadrature (often abbreviated to quadrature) is more or less a synonym for "numerical integr ...
— the numerical evaluation of an integral
*
Rectangle method
In mathematics, a Riemann sum is a certain kind of approximation of an integral by a finite sum. It is named after nineteenth century German mathematician Bernhard Riemann. One very common application is in numerical integration, i.e., approxima ...
— first-order method, based on (piecewise) constant approximation
*
Trapezoidal rule
In calculus, the trapezoidal rule (or trapezium rule in British English) is a technique for numerical integration, i.e., approximating the definite integral:
\int_a^b f(x) \, dx.
The trapezoidal rule works by approximating the region under the ...
— second-order method, based on (piecewise) linear approximation
*
Simpson's rule
In numerical integration, Simpson's rules are several approximations for definite integrals, named after Thomas Simpson (1710–1761).
The most basic of these rules, called Simpson's 1/3 rule, or just Simpson's rule, reads
\int_a^b f(x) \, ...
— fourth-order method, based on (piecewise) quadratic approximation (see the sketch below)
**
Adaptive Simpson's method Adaptive Simpson's method, also called adaptive Simpson's rule, is a method of numerical integration proposed by G.F. Kuncir in 1962. It is probably the first recursive adaptive algorithm for numerical integration to appear in print ...
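A minimal sketch of the composite rectangle (midpoint), trapezoidal and Simpson rules described above, assuming NumPy; the integrand sin(''x'') on [0, 1] is an arbitrary smooth test case.
<syntaxhighlight lang="python">
import numpy as np

def midpoint(f, a, b, n):
    h = (b - a) / n
    x = a + h * (np.arange(n) + 0.5)     # midpoints of the n subintervals
    return h * f(x).sum()

def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    return (b - a) / n * (f(x[0]) / 2 + f(x[1:-1]).sum() + f(x[-1]) / 2)

def simpson(f, a, b, n):                 # n must be even
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    return h / 3 * (f(x[0]) + 4 * f(x[1:-1:2]).sum()
                    + 2 * f(x[2:-2:2]).sum() + f(x[-1]))

exact = 1 - np.cos(1.0)                  # integral of sin over [0, 1]
for rule in (midpoint, trapezoid, simpson):
    print(rule.__name__, abs(rule(np.sin, 0.0, 1.0, 16) - exact))
</syntaxhighlight>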
*
Boole's rule
In mathematics, Boole's rule, named after George Boole, is a method of numerical integration.
It approximates the integral
:\int_{x_0}^{x_4} f(x)\,dx
by using the values of ''f'' at five equally spaced points
:x_0 = a, \quad x_1 = a + h, \quad x_2 = a + 2h, \quad x_3 = a + 3h, \quad x_4 = b ...
— sixth-order method, based on the values at five equidistant points
*
Newton–Cotes formulas
In numerical analysis, the Newton–Cotes formulas, also called the Newton–Cotes quadrature rules or simply Newton–Cotes rules, are a group of formulas for numerical integration (also called ''quadrature'') based on evaluating the integrand a ...
— generalizes the above methods
*
Romberg's method
In numerical analysis, Romberg's method is used to estimate the definite integral \int_a^b f(x) \, dx by applying Richardson extrapolation repeatedly on the trapezium rule or the rectangle rule (midpoint rule). The estimates generate ...
— Richardson extrapolation applied to trapezium rule
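A sketch of Romberg's tableau: column 0 holds trapezoidal estimates with successively halved steps, and each further column applies one more level of Richardson extrapolation. The depth and the integrand below are illustrative.
<syntaxhighlight lang="python">
import numpy as np

def romberg(f, a, b, depth=5):
    """Romberg integration: Richardson extrapolation of the trapezoidal rule."""
    R = np.zeros((depth, depth))
    R[0, 0] = (b - a) * (f(a) + f(b)) / 2
    for i in range(1, depth):
        n = 2 ** i
        h = (b - a) / n
        # refine the previous trapezoidal estimate using only the new midpoints
        R[i, 0] = R[i - 1, 0] / 2 + h * sum(f(a + (2 * k + 1) * h)
                                            for k in range(n // 2))
        for j in range(1, i + 1):
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4 ** j - 1)
    return R[depth - 1, depth - 1]

print(romberg(np.sin, 0.0, np.pi))   # close to the exact value 2
</syntaxhighlight>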
*
Gaussian quadrature
In numerical analysis, an ''n''-point Gaussian quadrature rule, named after Carl Friedrich Gauss, is a quadrature rule constructed to yield an exact result for polynomials of degree 2''n'' − 1 or less by a suitable choice of the nodes ''x''<sub>''i''</sub> and weights ''w''<sub>''i''</sub> for ''i'' = 1, ..., ''n''. ...
— highest possible degree with given number of points
**
Chebyshev–Gauss quadrature — extension of Gaussian quadrature for integrals with weight (1 − ''x''²)<sup>±1/2</sup> on [−1, 1]
**
Gauss–Hermite quadrature — extension of Gaussian quadrature for integrals with weight exp(−''x''²) on [−∞, ∞]
**
Gauss–Jacobi quadrature — extension of Gaussian quadrature for integrals with weight (1 − ''x'')<sup>''α''</sup>(1 + ''x'')<sup>''β''</sup> on [−1, 1]
**
Gauss–Laguerre quadrature — extension of Gaussian quadrature for integrals with weight exp(−''x'') on [0, ∞]
**
Gauss–Kronrod quadrature formula — nested rule based on Gaussian quadrature
**
Gauss–Kronrod rules
*
Tanh-sinh quadrature
Tanh-sinh quadrature is a method for numerical integration introduced by Hidetoshi Takahashi and Masatake Mori in 1974. It is especially applied where singularities or infinite derivatives exist at one or both endpoints.
The method uses hyperboli ...
— variant of Gaussian quadrature which works well with singularities at the end points
*
Clenshaw–Curtis quadrature
Clenshaw–Curtis quadrature and Fejér quadrature are methods for numerical integration, or "quadrature", that are based on an expansion of the integrand in terms of Chebyshev polynomials. Equivalently, they em ...
— based on expanding the integrand in terms of Chebyshev polynomials
*
Adaptive quadrature
Adaptive quadrature is a numerical integration method in which the integral of a function f(x) is approximated using static quadrature rules on adaptively refined subintervals of the region of integration. Generally, adaptive algorithms are just ...
— adapting the subintervals in which the integration interval is divided depending on the integrand
*
Monte Carlo integration
In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at ...
— takes random samples of the integrand
**''See also #Monte Carlo method''
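A minimal sketch: averaging the integrand at uniform random points gives an unbiased estimate whose error shrinks like O(''n''<sup>−1/2</sup>). The integrand and the sample size are illustrative.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(42)

def mc_integrate(f, a, b, n):
    """Plain Monte Carlo estimate of the integral of f over [a, b]."""
    x = rng.uniform(a, b, n)
    return (b - a) * f(x).mean()

est = mc_integrate(np.exp, 0.0, 1.0, 100_000)
print(est, abs(est - (np.e - 1)))   # error decays like n**-0.5
</syntaxhighlight>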
*
Quantized state systems method (QSS) — based on the idea of state quantization
*
Lebedev quadrature In numerical analysis, Lebedev quadrature, named after Vyacheslav Ivanovich Lebedev, is an approximation to the surface integral of a function over a three-dimensional sphere. The grid is constructed so as to have octahedral rotati ...
— uses a grid on a sphere with octahedral symmetry
*
Sparse grid
*Coopmans approximation — for approximating fractional-order integrals
*
Numerical differentiation
In numerical analysis, numerical differentiation algorithms estimate the derivative of a mathematical function or subroutine using values of the function and perhaps other knowledge about the function.
The simplest method is ...
**
Numerical smoothing and differentiation
**
Adjoint state method — approximates gradient of a function in an optimization problem
*
Euler–Maclaurin formula
In mathematics, the Euler–Maclaurin formula is a formula for the difference between an integral and a closely related sum. It can be used to approximate integrals by finite sums, or conversely to evaluate finite sums and infinite series using ...
Numerical methods for ordinary differential equations
Numerical methods for ordinary differential equations
Numerical methods for ordinary differential equations are methods used to find Numerical analysis, numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although ...
— the numerical solution of ordinary differential equations (ODEs)
*
Euler method
In mathematics and computational science, the Euler method (also called the forward Euler method) is a first-order numerical procedure for solving ordinary differential equations (ODEs) with a given initial value ...
— the most basic method for solving an ODE
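A minimal sketch of the forward Euler update ''y''<sub>''n''+1</sub> = ''y''<sub>''n''</sub> + ''h'' ''f''(''t''<sub>''n''</sub>, ''y''<sub>''n''</sub>); the test problem ''y''′ = −''y'', ''y''(0) = 1 is an illustrative choice.
<syntaxhighlight lang="python">
import math

def euler(f, t0, y0, h, steps):
    """Forward Euler: repeatedly take y <- y + h * f(t, y)."""
    t, y = t0, y0
    for _ in range(steps):
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(approx, math.exp(-1.0))   # first-order accurate in the step size h
</syntaxhighlight>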
*
Explicit and implicit methods
Explicit and implicit methods are approaches used in numerical analysis for obtaining numerical approximations to the solutions of time-dependent ordinary and partial differential equations, as is required in comput ...
— implicit methods need to solve an equation at every step
*
Backward Euler method
In numerical analysis and scientific computing, the backward Euler method (or implicit Euler method) is one of the most basic numerical methods for the solution of ordinary differential equatio ...
— implicit variant of the Euler method
*
Trapezoidal rule
In calculus, the trapezoidal rule (or trapezium rule in British English) is a technique for numerical integration, i.e., approximating the definite integral:
\int_a^b f(x) \, dx.
The trapezoidal rule works by approximating the region under the ...
— second-order implicit method
*
Runge–Kutta methods
In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the a ...
— one of the two main classes of methods for initial-value problems
**
Midpoint method — a second-order method with two stages
**
Heun's method
In mathematics and computational science, Heun's method may refer to the improved or modified Euler's method (that is, the explicit trapezoidal rule), or a similar two-stage Runge–Kutta method. It is named after Karl Heun and is a numerical pr ...
— either a second-order method with two stages, or a third-order method with three stages
**
Bogacki–Shampine method The Bogacki–Shampine method is a method for the numerical solution of ordinary differential equations, that was proposed by Przemysław Bogacki and Lawrence F. Shampine in 1989 . The Bogacki–Shampine method is a Runge–Kutta method of order t ...
— a third-order method with four stages (FSAL) and an embedded fourth-order method
**
Cash–Karp method
In numerical analysis, the Cash–Karp method is a method for solving ordinary differential equations (ODEs). It was proposed by Professor Jeff R. Cash from Imperial College London and Alan H. Karp from IBM Scientific Center. The method is a membe ...
— a fifth-order method with six stages and an embedded fourth-order method
**
Dormand–Prince method
In numerical analysis, the Dormand–Prince (RKDP) method or DOPRI method, is an embedded method for solving ordinary differential equations (ODE). The method is a member of the Runge–Kutta family of ODE solvers. More specifically, it uses six fu ...
— a fifth-order method with seven stages (FSAL) and an embedded fourth-order method
**
Runge–Kutta–Fehlberg method
In mathematics, the Runge–Kutta–Fehlberg method (or Fehlberg method) is an algorithm in numerical analysis for the numerical solution of ordinary differential equations. It was developed by the German mathematician Erwin Fehlberg and is based ...
— a fifth-order method with six stages and an embedded fourth-order method
**
Gauss–Legendre method — family of A-stable methods with optimal order based on Gaussian quadrature
**
Butcher group — algebraic formalism involving rooted trees for analysing Runge–Kutta methods
**
List of Runge–Kutta methods
Runge–Kutta methods are methods for the numerical solution of the ordinary differential equation
:\frac{dy}{dt} = f(t, y).
Explicit Runge–Kutta methods take the form
:y_{n+1} = y_n + h \sum_{i=1}^s b_i k_i, \qquad k_1 = f(t_n, y_n), \ldots
*
Linear multistep method
Linear multistep methods are used for the numerical solution of ordinary differential equations. Conceptually, a numerical method starts from an initial point and then takes a short step forward in time to find the next solution point. The proce ...
— the other main class of methods for initial-value problems
**
Backward differentiation formula
The backward differentiation formula (BDF) is a family of implicit methods for the numerical integration of ordinary differential equations. They are linear multistep methods that, for a given function and time, approximate the derivative of that ...
— implicit methods of order 2 to 6; especially suitable for stiff equations
**
Numerov's method — fourth-order method for second-order equations of the form ''y''″ = ''f''(''x'', ''y''), with no first-derivative term
**
Predictor–corrector method — uses one method to approximate solution and another one to increase accuracy
*
General linear methods — a class of methods encapsulating linear multistep and Runge–Kutta methods
*
Bulirsch–Stoer algorithm In numerical analysis, the Bulirsch–Stoer algorithm is a method for the numerical solution of ordinary differential equations which combines three powerful ideas: Richardson extrapolation, the use of rational function extrapolation in Richardson ...
— combines the midpoint method with Richardson extrapolation to attain arbitrary order
*
Exponential integrator — based on splitting ODE in a linear part, which is solved exactly, and a nonlinear part
*Methods designed for the solution of ODEs from classical physics:
**
Newmark-beta method — based on the extended mean-value theorem
**
Verlet integration
Verlet integration () is a numerical method used to integrate Newton's equations of motion. It is frequently used to calculate trajectories of particles in molecular dynamics simulations and computer graphics. The algorithm was first used in 17 ...
— a popular second-order method (see the sketch after this list)
**
Leapfrog integration — another name for Verlet integration
**
Beeman's algorithm — a two-step method extending the Verlet method
**
Dynamic relaxation
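As a sketch of the Verlet idea referenced above (velocity Verlet variant): positions and velocities are advanced with the force averaged over the step, giving a second-order, time-reversible scheme. The harmonic-oscillator force is an illustrative choice.
<syntaxhighlight lang="python">
import math

def velocity_verlet(accel, x, v, h, steps):
    """Velocity Verlet for x'' = a(x): second order and time reversible."""
    a = accel(x)
    for _ in range(steps):
        x += h * v + 0.5 * h * h * a   # position update
        a_next = accel(x)
        v += 0.5 * h * (a + a_next)    # velocity update with averaged force
        a = a_next
    return x, v

# illustrative: harmonic oscillator x'' = -x with exact solution cos(t)
x, v = velocity_verlet(lambda x: -x, 1.0, 0.0, 0.01, 628)
print(x, math.cos(6.28))
</syntaxhighlight>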
*
Geometric integrator
In the mathematical field of numerical ordinary differential equations, a geometric integrator is a numerical method that preserves geometric properties of the exact flow of a differential equation.
We can motivate the study of g ...
— a method that preserves some geometric structure of the equation
**
Symplectic integrator
In mathematics, a symplectic integrator (SI) is a numerical integration scheme for Hamiltonian systems. Symplectic integrators form the subclass of geometric integrators which, by definition, are canonical ...
— a method for the solution of Hamilton's equations that preserves the symplectic structure
***
Variational integrator
Variational integrators are numerical integrators for Hamiltonian systems derived from the Euler–Lagrange equations of a discretized Hamilton's principle. Variational integrators are momentum-preserving a ...
— symplectic integrators derived using the underlying variational principle
***
Semi-implicit Euler method — variant of Euler method which is symplectic when applied to separable Hamiltonians
**
Energy drift
In computer simulations of mechanical systems, energy drift is the gradual change in the total energy of a closed system over time. According to the laws of mechanics, the energy should be a constant of motion and should not change. However, in s ...
— phenomenon that energy, which should be conserved, drifts away due to numerical errors
*Other methods for initial value problems (IVPs):
**
Bi-directional delay line
**
Partial element equivalent circuit
*Methods for solving two-point boundary value problems (BVPs):
**
Shooting method
In numerical analysis, the shooting method is a method for solving a boundary value problem by reducing it to an initial value problem. It involves finding solutions to the initial value problem for different initial conditions until one finds the ...
**
Direct multiple shooting method — divides interval in several subintervals and applies the shooting method on each subinterval
*Methods for solving differential-algebraic equations (DAEs), i.e., ODEs with constraints:
**
Constraint algorithm
In computational chemistry, a constraint algorithm is a method for satisfying the Newtonian motion of a rigid body which consists of mass points. A restraint algorithm is used to ensure that the distance between mass points is maintained. The gen ...
— for solving Newton's equations with constraints
**
Pantelides algorithm — for reducing the index of a DAE
*Methods for solving stochastic differential equations (SDEs):
**
Euler–Maruyama method
In Itô calculus, the Euler–Maruyama method (also simply called the Euler method) is a method for the approximate numerical solution of a stochastic differential equation (SDE). It is an extension of the Euler method for ord ...
— generalization of the Euler method for SDEs (see the sketch after this list)
**
Milstein method
In mathematics, the Milstein method is a technique for the approximate numerical solution of a stochastic differential equation. It is named after Grigori Milstein who first published it in 1974. Consider the autono ...
— a method with strong order one
**
Runge–Kutta method (SDE) — generalization of the family of Runge–Kutta methods for SDEs
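A sketch of the Euler–Maruyama step referenced above, for d''X'' = ''a''(''X'') d''t'' + ''b''(''X'') d''W'': the deterministic Euler update plus a Gaussian increment of variance ''h''. The Ornstein–Uhlenbeck-type coefficients are purely illustrative.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(a, b, x0, h, steps):
    """Simulate dX = a(X) dt + b(X) dW with the Euler-Maruyama scheme."""
    x = x0
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(h))   # Brownian increment, variance h
        x += a(x) * h + b(x) * dw
    return x

# illustrative Ornstein-Uhlenbeck-type process: dX = -X dt + 0.3 dW
print(euler_maruyama(lambda x: -x, lambda x: 0.3, 1.0, 0.001, 5000))
</syntaxhighlight>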
*Methods for solving integral equations:
**
Nyström method — replaces the integral with a quadrature rule
*Analysis:
**
Truncation error (numerical integration)
Truncation errors in numerical integration are of two kinds:
* ''local truncation errors'' – the error caused by one iteration, and
* ''global truncation errors'' – the cumulative error caused by many iterations.
Suppose we have ...
— local and global truncation errors, and their relationships
***
Lady Windermere's Fan (mathematics) — telescopic identity relating local and global truncation errors
*
Stiff equation
In mathematics, a stiff equation is a differential equation for which certain numerical methods for solving the equation are numerically unstable, unless the step size is taken to be extremely small. It has proven difficult to formulate a precise ...
— roughly, an ODE for which unstable methods need a very short step size, but stable methods do not
**
L-stability
Within mathematics regarding differential equations, L-stability is a special case of A-stability, a property of Runge–Kutta methods for solving ordinary differential equations.
A method is L-stable if it is A-stable and \phi(z) \to 0 as z \to \infty, where \phi is the stability function of the method ...
— method is A-stable and stability function vanishes at infinity
*
Adaptive stepsize — automatically changing the step size when that seems advantageous
*
Parareal — a parallel-in-time integration algorithm
Numerical methods for partial differential equations
Numerical partial differential equations
— the numerical solution of partial differential equations (PDEs)
Finite difference methods
Finite difference method
In numerical analysis, finite-difference methods (FDM) are a class of numerical techniques for solving differential equations by approximating derivatives with finite differences. Both the spatial doma ...
— based on approximating differential operators with difference operators
*
Finite difference
A finite difference is a mathematical expression of the form ''f''(''x'' + ''b'') − ''f''(''x'' + ''a''). Finite differences (or the associated difference quotients) are often used as approximations of derivatives, such as in numerical differentiation.
The difference operator, commonly d ...
— the discrete analogue of a differential operator
**
Finite difference coefficient — table of coefficients of finite-difference approximations to derivatives
**
Discrete Laplace operator
In mathematics, the discrete Laplace operator is an analog of the continuous Laplace operator, defined so that it has meaning on a graph or a discrete grid. For the case of a finite-dimensional graph ...
— finite-difference approximation of the Laplace operator
***
Eigenvalues and eigenvectors of the second derivative — includes eigenvalues of discrete Laplace operator
***
Kronecker sum of discrete Laplacians
In mathematics, the Kronecker sum of discrete Laplacians, named after Leopold Kronecker, is a discrete version of the separation of variables for the continuous Laplacian in a rectangular cuboid domain.
General form of the Kronecker sum of discret ...
— used for Laplace operator in multiple dimensions
**
Discrete Poisson equation
In mathematics, the discrete Poisson equation is the finite difference analog of the Poisson equation. In it, the discrete Laplace operator takes the place of the Laplace operator. The discrete Poisson equation is frequently used in numerical an ...
— discrete analogue of the Poisson equation using the discrete Laplace operator
*
Stencil (numerical analysis)
In mathematics, especially the areas of numerical analysis concentrating on the numerical solution of partial differential equations, a stencil is a geometric arrangement of a nodal group that relate to the ...
— the geometric arrangements of grid points affected by a basic step of the algorithm
**
Compact stencil — stencil which only uses a few grid points, usually only the immediate and diagonal neighbours
***
Higher-order compact finite difference scheme
**
Non-compact stencil — any stencil that is not compact
**
Five-point stencil
In numerical analysis, given a square grid in one or two dimensions, the five-point stencil of a point in the grid is a stencil made up of the point itself together with its four "neighbors". It is used to write finite difference approximations t ...
— two-dimensional stencil consisting of a point and its four immediate neighbours on a rectangular grid
*Finite difference methods for heat equation and related PDEs:
**
FTCS scheme (forward-time central-space) — first-order explicit (see the sketch after this list)
**
Crank–Nicolson method
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in ...
— second-order implicit
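A sketch of the FTCS scheme referenced above for the one-dimensional heat equation ''u''<sub>''t''</sub> = ''u''<sub>''xx''</sub>: forward difference in time, central second difference in space, stable only for ''r'' = Δ''t''/Δ''x''² ≤ 1/2. The grid sizes are illustrative.
<syntaxhighlight lang="python">
import numpy as np

def ftcs_heat(u0, r, steps):
    """FTCS for u_t = u_xx with zero Dirichlet boundaries; needs r <= 0.5."""
    u = u0.copy()
    for _ in range(steps):
        u[1:-1] += r * (u[2:] - 2 * u[1:-1] + u[:-2])   # central 2nd difference
    return u

x = np.linspace(0.0, 1.0, 51)
u0 = np.sin(np.pi * x)   # this mode decays like exp(-pi**2 t)
print(ftcs_heat(u0, 0.4, 200).max())
</syntaxhighlight>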
*Finite difference methods for hyperbolic PDEs like the wave equation:
**
Lax–Friedrichs method — first-order explicit
**
Lax–Wendroff method — second-order explicit
**
MacCormack method — second-order explicit
**
Upwind scheme
In computational physics, the term advection scheme refers to a class of numerical discretization methods for solving hyperbolic partial differential equations. In the so-called upwind schemes ''typically'', the so-called upstream variables are use ...
***
Upwind differencing scheme for convection — first-order scheme for convection–diffusion problems
**
Lax–Wendroff theorem — conservative scheme for hyperbolic system of conservation laws converges to the weak solution
*
Alternating direction implicit method In numerical linear algebra, the alternating-direction implicit (ADI) method is an iterative method used to solve Sylvester matrix equations. It is a popular method for solving the large matrix equations that arise in systems the ...
(ADI) — update using the flow in ''x''-direction and then using flow in ''y''-direction
*
Nonstandard finite difference scheme
*Specific applications:
**
Finite difference methods for option pricing Finite difference methods for option pricing are numerical methods used in mathematical finance for the valuation of options. Finite difference methods were first applied to option pricing ...
**
Finite-difference time-domain method
Finite-difference time-domain (FDTD) or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics.
Finite difference sc ...
— a finite-difference method for electrodynamics
Finite element methods, gradient discretisation methods
Finite element method
Finite element method (FEM) is a popular method for numerically solving differential equations arising in engineering and mathematical modeling. Typical problem areas of interest include the traditional fields of structural analysis, heat tran ...
— based on a discretization of the space of solutions
Gradient discretisation method — based on both the discretization of the solution and of its gradient
*Finite element method in structural mechanics — a physical approach to finite element methods
*Galerkin method — a finite element method in which the residual is orthogonal to the finite element space
**Discontinuous Galerkin method — a Galerkin method in which the approximate solution is not continuous
*Rayleigh–Ritz method — a finite element method based on variational principles
*Spectral element method — high-order finite element methods
*hp-FEM — variant in which both the size and the order of the elements are automatically adapted
*Examples of finite elements:
**Bilinear quadrilateral element — also known as the Q4 element
**Constant strain triangle element (CST) — also known as the T3 element
**Quadratic quadrilateral element — also known as the Q8 element
**Barsoum elements
*Direct stiffness method — a particular implementation of the finite element method, often used in structural analysis
*Trefftz method
*Finite element updating
*Extended finite element method — puts functions tailored to the problem in the approximation space
*Functionally graded elements — elements for describing functionally graded materials
*Superelement — particular grouping of finite elements, employed as a single element
*
Interval finite element
In numerical analysis, the interval finite element method (interval FEM) is a finite element method that uses interval parameters. Interval FEM can be applied in situations where it is not possible to get reliable probabilistic characteristics of ...
method — combination of finite elements with interval arithmetic
*Discrete exterior calculus — discrete form of the exterior calculus of differential geometry
*Modal analysis using FEM — solution of eigenvalue problems to find natural vibrations
*Céa's lemma — solution in the finite-element space is an almost best approximation in that space of the true solution
*Patch test (finite elements) — simple test for the quality of a finite element
*MAFELAP (MAthematics of Finite ELements and APplications) — international conference held at Brunel University
*NAFEMS — not-for-profit organisation that sets and maintains standards in computer-aided engineering analysis
*Multiphase topology optimisation — technique based on finite elements for determining optimal composition of a mixture
*Applied element method — for simulation of cracks and structural collapse
*Wood–Armer method — structural analysis method based on finite elements used to design reinforcement for concrete slabs
*Isogeometric analysis — integrates finite elements into conventional NURBS-based CAD design tools
*Loubignac iteration
*Stiffness matrix — finite-dimensional analogue of differential operator
*Combination with meshfree methods:
**Weakened weak form — form of a PDE that is weaker than the standard weak form
**G space — functional space used in formulating the weakened weak form
**Smoothed finite element method
*Variational multiscale method
*List of finite element software packages
Other methods
*Spectral method — based on the Fourier transformation
**Pseudo-spectral method
*Method of lines — reduces the PDE to a large system of ordinary differential equations
*Boundary element method (BEM) — based on transforming the PDE to an integral equation on the boundary of the domain
**
Interval boundary element method — a version using interval arithmetics
*Analytic element method — similar to the boundary element method, but the integral equation is evaluated analytically
*Finite volume method — based on dividing the domain in many small domains; popular in computational fluid dynamics
**Godunov's scheme — first-order conservative scheme for fluid flow, based on piecewise constant approximation
**MUSCL scheme — second-order variant of Godunov's scheme
**AUSM — advection upstream splitting method
**Flux limiter — limits spatial derivatives (fluxes) in order to avoid spurious oscillations
**Riemann solver — a solver for Riemann problems (a conservation law with piecewise constant data)
**Properties of discretization schemes — finite volume methods can be conservative, bounded, etc.
*Discrete element method — a method in which the elements can move freely relative to each other
**Extended discrete element method — adds properties such as strain to each particle
**Movable cellular automaton — combination of cellular automata with discrete elements
*Meshfree methods — does not use a mesh, but uses a particle view of the field
**Discrete least squares meshless method — based on minimization of weighted summation of the squared residual
**Diffuse element method
**Finite pointset method — represent continuum by a point cloud
**Moving Particle Semi-implicit Method
**Method of fundamental solutions (MFS) — represents solution as linear combination of fundamental solutions
**Variants of MFS with source points on the physical boundary:
***Boundary knot method (BKM)
***Boundary particle method (BPM)
***Regularized meshless method (RMM)
***Singular boundary method (SBM)
*Methods designed for problems from electromagnetics:
**
Finite-difference time-domain method
Finite-difference time-domain (FDTD) or Yee's method (named after the Chinese American applied mathematician Kane S. Yee, born 1934) is a numerical analysis technique used for modeling computational electrodynamics.
Finite difference sc ...
— a finite-difference method
**Rigorous coupled-wave analysis — semi-analytical Fourier-space method based on Floquet's theorem
**Transmission-line matrix method (TLM) — based on analogy between electromagnetic field and mesh of transmission lines
**Uniform theory of diffraction — specifically designed for scattering problems
*Particle-in-cell — used especially in fluid dynamics
**Multiphase particle-in-cell method — considers solid particles as both numerical particles and fluid
*High-resolution scheme
*Shock capturing method
*Vorticity confinement — for vortex-dominated flows in fluid dynamics, similar to shock capturing
*Split-step method
*Fast marching method
*Orthogonal collocation
*Lattice Boltzmann methods — for the solution of the Navier-Stokes equations
*Roe solver — for the solution of the Euler equation
*Relaxation (iterative method) — a method for solving elliptic PDEs by converting them to evolution equations
*Broad classes of methods:
**Mimetic methods — methods that respect in some sense the structure of the original problem
**Multiphysics — models consisting of various submodels with different physics
**Immersed boundary method — for simulating elastic structures immersed within fluids
*Multisymplectic integrator — extension of symplectic integrators, which are for ODEs
*Stretched grid method — for problems whose solution can be related to the behaviour of an elastic grid
Techniques for improving these methods
*Multigrid method — uses a hierarchy of nested meshes to speed up the methods
*Domain decomposition methods — divides the domain in a few subdomains and solves the PDE on these subdomains
**Additive Schwarz method
**Abstract additive Schwarz method — abstract version of additive Schwarz without reference to geometric information
**Balancing domain decomposition method (BDD) — preconditioner for symmetric positive definite matrices
**Balancing domain decomposition by constraints (BDDC) — further development of BDD
**Finite element tearing and interconnect (FETI)
**FETI-DP — further development of FETI
**Fictitious domain method — preconditioner constructed with a structured mesh on a fictitious domain of simple shape
**Mortar methods — meshes on neighbouring subdomains need not match
**Neumann–Dirichlet method — combines Neumann problem on one subdomain with Dirichlet problem on other subdomain
**Neumann–Neumann methods — domain decomposition methods that use Neumann problems on the subdomains
**Poincaré–Steklov operator — maps tangential electric field onto the equivalent electric current
**Schur complement method — early and basic method on subdomains that do not overlap
**Schwarz alternating method — early and basic method on subdomains that overlap
*Coarse space — variant of the problem which uses a discretization with fewer degrees of freedom
*Adaptive mesh refinement — uses the computed solution to refine the mesh only where necessary
*Fast multipole method — hierarchical method for evaluating particle-particle interactions
*Perfectly matched layer — artificial absorbing layer for wave equations, used to implement absorbing boundary conditions
Grids and meshes
*Grid classification / Types of mesh:
**Polygon mesh — consists of polygons in 2D or 3D
**Triangle mesh — consists of triangles in 2D or 3D
***Triangulation (geometry) — subdivision of given region in triangles, or higher-dimensional analogue
***Nonobtuse mesh — mesh in which all angles are less than or equal to 90°
***Point-set triangulation — triangle mesh in which every point of a given set is the vertex of a triangle
***Polygon triangulation — triangle mesh inside a polygon
***Delaunay triangulation — triangulation such that no vertex is inside the circumcircle of any triangle
***Constrained Delaunay triangulation — generalization of the Delaunay triangulation that forces certain required segments into the triangulation
***Pitteway triangulation — for any point, triangle containing it has nearest neighbour of the point as a vertex
***Minimum-weight triangulation — triangulation of minimum total edge length
***Kinetic triangulation — a triangulation that moves over time
***Triangulated irregular network
***Quasi-triangulation — subdivision into simplices, where vertices are not points but arbitrary sloped line segments
**Volume mesh — consists of three-dimensional shapes
**Regular grid — consists of congruent parallelograms, or higher-dimensional analogue
**Unstructured grid
**Geodesic grid — isotropic grid on a sphere
*Mesh generation
**Image-based meshing — automatic procedure of generating meshes from 3D image data
**Marching cubes — extracts a polygon mesh from a scalar field
**Parallel mesh generation
**Ruppert's algorithm — creates quality Delaunay triangulations from piecewise linear data
*Subdivisions:
**Apollonian network — undirected graph formed by recursively subdividing a triangle
**Barycentric subdivision — standard way of dividing arbitrary convex polygons into triangles, or the higher-dimensional analogue
*Improving an existing mesh:
**Chew's second algorithm — improves Delaunay triangulations by refining poor-quality triangles
**Laplacian smoothing — improves polygonal meshes by moving the vertices
*Jump-and-Walk algorithm — for finding triangle in a mesh containing a given point
*Spatial twist continuum — dual representation of a mesh consisting of hexahedra
*Pseudotriangle — simply connected region between any three mutually tangent convex sets
*Simplicial complex — all vertices, line segments, triangles, tetrahedra, ..., making up a mesh
Analysis
*Lax equivalence theorem — a consistent method is convergent if and only if it is stable
*Courant–Friedrichs–Lewy condition — stability condition for hyperbolic PDEs
*Von Neumann stability analysis — all Fourier components of the error should be stable
*Numerical diffusion — diffusion introduced by the numerical method, in addition to that which is naturally present
**False diffusion
*Numerical dispersion
*Numerical resistivity — the same, with resistivity instead of diffusion
*Weak formulation — a functional-analytic reformulation of the PDE necessary for some methods
*Total variation diminishing — property of schemes that do not introduce spurious oscillations
*Godunov's theorem — linear monotone schemes can only be of first order
*Motz's problem — benchmark problem for singularity problems
Monte Carlo method
*Variants of the Monte Carlo method:
**Direct simulation Monte Carlo
**Quasi-Monte Carlo method
**Markov chain Monte Carlo
***Metropolis–Hastings algorithm (see the sketch after this list)
****Multiple-try Metropolis — modification which allows larger step sizes
****Wang and Landau algorithm — extension of Metropolis Monte Carlo
****Equation of State Calculations by Fast Computing Machines — 1953 article proposing the Metropolis Monte Carlo algorithm
****Multicanonical ensemble — sampling technique that uses Metropolis–Hastings to compute integrals
***Gibbs sampling
***Coupling from the past
***Reversible-jump Markov chain Monte Carlo
**Dynamic Monte Carlo method
***Kinetic Monte Carlo
***Gillespie algorithm
**Particle filter
***Auxiliary particle filter
**Reverse Monte Carlo
**Demon algorithm
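A sketch of the random-walk Metropolis variant of the Metropolis–Hastings algorithm referenced above: propose a symmetric step and accept with probability min(1, ''p''(''x''′)/''p''(''x'')). The target density (an unnormalized standard normal) and the step size are illustrative.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def metropolis(log_p, x0, step, n):
    """Random-walk Metropolis: symmetric proposals, accept w.p. min(1, p'/p)."""
    x, lp, samples = x0, log_p(x0), []
    for _ in range(n):
        x_prop = x + rng.normal(0.0, step)
        lp_prop = log_p(x_prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # acceptance test
            x, lp = x_prop, lp_prop
        samples.append(x)
    return np.array(samples)

s = metropolis(lambda x: -0.5 * x * x, 0.0, 1.0, 50_000)   # target N(0, 1)
print(s.mean(), s.var())   # should approach 0 and 1
</syntaxhighlight>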
*Pseudo-random number sampling
**Inverse transform sampling — general and straightforward method but computationally expensive
**Rejection sampling — sample from a simpler distribution but reject some of the samples
***Ziggurat algorithm — uses a pre-computed table covering the probability distribution with rectangular segments
**For sampling from a normal distribution:
***Box–Muller transform (see the sketch after this list)
***Marsaglia polar method
**Convolution random number generator — generates a random variable as a sum of other random variables
**Indexed search
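A sketch of the Box–Muller transform referenced above: two independent uniforms on (0, 1) map to two independent standard normal samples.
<syntaxhighlight lang="python">
import math, random

def box_muller():
    """Map two uniform samples to two independent standard normal samples."""
    u1 = 1.0 - random.random()   # in (0, 1], avoids log(0)
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

zs = [z for _ in range(10_000) for z in box_muller()]
print(sum(zs) / len(zs))   # sample mean, close to 0
</syntaxhighlight>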
*Variance reduction techniques:
**Antithetic variates
**Control variates
**Importance sampling
**Stratified sampling
**VEGAS algorithm
*Low-discrepancy sequence
**Constructions of low-discrepancy sequences
*Event generator
*Parallel tempering
*Umbrella sampling — improves sampling in physical systems with significant energy barriers
*Hybrid Monte Carlo
*Ensemble Kalman filter — recursive filter suitable for problems with a large number of variables
*Transition path sampling
*Walk-on-spheres method — to generate exit-points of Brownian motion from bounded domains
*Applications:
**Ensemble forecasting — produce multiple numerical predictions from slightly different initial conditions or parameters
**Bond fluctuation model — for simulating the conformation and dynamics of polymer systems
**Iterated filtering
**Metropolis light transport
**Monte Carlo localization — estimates the position and orientation of a robot
**Monte Carlo methods for electron transport
**Monte Carlo method for photon transport
**Monte Carlo methods in finance
***Monte Carlo methods for option pricing
***Quasi-Monte Carlo methods in finance
**Monte Carlo molecular modeling
***Path integral molecular dynamics — incorporates Feynman path integrals
**Quantum Monte Carlo
***Diffusion Monte Carlo — uses a Green function to solve the Schrödinger equation
***Gaussian quantum Monte Carlo
***Path integral Monte Carlo
***Reptation Monte Carlo
***Variational Monte Carlo
**Methods for simulating the Ising model:
***Swendsen–Wang algorithm — entire sample is divided into equal-spin clusters
***Wolff algorithm — improvement of the Swendsen–Wang algorithm
***Metropolis–Hastings algorithm
**Auxiliary field Monte Carlo — computes averages of operators in many-body quantum mechanical problems
**Cross-entropy method — for multi-extremal optimization and importance sampling
*Also see the list of statistics topics
Applications
*Computational physics
**Computational electromagnetics
**Computational fluid dynamics (CFD)
***Numerical methods in fluid mechanics
***Large eddy simulation
***Smoothed-particle hydrodynamics
***Aeroacoustic analogy — used in numerical aeroacoustics to reduce sound sources to simple emitter types
***Stochastic Eulerian Lagrangian method — uses Eulerian description for fluids and Lagrangian for structures
***Explicit algebraic stress model
**Computational magnetohydrodynamics (CMHD) — studies electrically conducting fluids
**Climate model
**Numerical weather prediction
***Geodesic grid
**Celestial mechanics
***Numerical model of the Solar System
**Quantum jump method — used for simulating open quantum systems, operates on wave function
**Dynamic design analysis method (DDAM) — for evaluating effect of underwater explosions on equipment
*Computational chemistry
**Cell lists
**Coupled cluster
**Density functional theory
**DIIS — direct inversion in (or of) the iterative subspace
*Computational sociology
*Computational statistics
Software
For a large list of software, see the list of numerical-analysis software.
Journals
*Acta Numerica
*Mathematics of Computation (published by the American Mathematical Society)
*Journal of Computational and Applied Mathematics
*BIT Numerical Mathematics
*Numerische Mathematik
*Journals from the Society for Industrial and Applied Mathematics
**SIAM Journal on Numerical Analysis
**SIAM Journal on Scientific Computing
Researchers
*Cleve Moler
*Gene H. Golub
*James H. Wilkinson
*Margaret H. Wright
*Nicholas J. Higham
*Nick Trefethen
*Peter Lax
*Richard S. Varga
*Ulrich W. Kulisch
*Vladik Kreinovich
*Chi-Wang Shu