In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations. In the theory of Lie groups, the matrix exponential gives the exponential map between a matrix Lie algebra and the corresponding Lie group.

Let X be an n×n real or complex matrix. The exponential of X, denoted by e^X or \exp(X), is the n×n matrix given by the power series e^X = \sum_{k=0}^\infty \frac{1}{k!} X^k, where X^0 is defined to be the identity matrix I with the same dimensions as X. The above series always converges, so the exponential of X is well-defined. If X is a 1×1 matrix, the matrix exponential of X is a 1×1 matrix whose single element is the ordinary exponential of the single element of X.


Properties


Elementary properties

Let X and Y be n×n complex matrices and let a and b be arbitrary complex numbers. We denote the identity matrix by I and the zero matrix by 0. The matrix exponential satisfies the following properties. We begin with the properties that are immediate consequences of the definition as a power series:
* e^0 = I
* \exp\left(X^\mathsf{T}\right) = \left(\exp X\right)^\mathsf{T}, where X^\mathsf{T} denotes the transpose of X.
* \exp\left(X^*\right) = \left(\exp X\right)^*, where X^* denotes the conjugate transpose of X.
* If Y is invertible then e^{YXY^{-1}} = Ye^XY^{-1}.
The next key result is this one:
* If XY = YX then e^Xe^Y = e^{X+Y}.
The proof of this identity is the same as the standard power-series argument for the corresponding identity for the exponential of real numbers. That is to say, ''as long as X and Y commute'', it makes no difference to the argument whether X and Y are numbers or matrices. It is important to note that this identity typically does not hold if X and Y do not commute (see the Golden–Thompson inequality below). Consequences of the preceding identity are the following:
* e^{aX}e^{bX} = e^{(a+b)X}
* e^Xe^{-X} = I
Using the above results, we can easily verify the following claims. If X is symmetric then e^X is also symmetric, and if X is skew-symmetric then e^X is orthogonal. If X is Hermitian then e^X is also Hermitian, and if X is skew-Hermitian then e^X is unitary. Finally, a Laplace transform of matrix exponentials amounts to the resolvent, \int_0^\infty e^{-ts}e^{tX}\,dt = (sI - X)^{-1} for all sufficiently large positive values of s.


Linear differential equation systems

One of the reasons for the importance of the matrix exponential is that it can be used to solve systems of linear ordinary differential equations. The solution of \frac{d}{dt} y(t) = Ay(t), \quad y(0) = y_0, where A is a constant matrix, is given by y(t) = e^{At} y_0. The matrix exponential can also be used to solve the inhomogeneous equation \frac{d}{dt} y(t) = Ay(t) + z(t), \quad y(0) = y_0. See the section on applications below for examples. There is no closed-form solution for differential equations of the form \frac{d}{dt} y(t) = A(t) \, y(t), \quad y(0) = y_0, where A is not constant, but the Magnus series gives the solution as an infinite sum.
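As a concrete illustration, here is a minimal Python sketch (not part of the original article) that evaluates the solution y(t) = e^{At} y_0 with SciPy's expm; the matrix A and initial vector y0 are arbitrary choices made for the demonstration.

```python
# Minimal sketch: y'(t) = A y(t), y(0) = y0 has solution y(t) = e^{At} y0.
# A and y0 below are illustrative choices, not from the article.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])      # constant coefficient matrix
y0 = np.array([1.0, 0.0])

def y(t):
    """Evaluate the exact solution y(t) = e^{At} y0."""
    return expm(A * t) @ y0

print(y(1.0))
```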


The determinant of the matrix exponential

By Jacobi's formula, for any complex square matrix X the following trace identity holds: \det\left(e^X\right) = e^{\operatorname{tr}(X)}. In addition to providing a computational tool, this formula demonstrates that a matrix exponential is always an invertible matrix. This follows from the fact that the right hand side of the above equation is always non-zero, and so \det\left(e^X\right) \neq 0, which implies that e^X must be invertible. In the real-valued case, the formula also exhibits the map \exp \colon M_n(\R) \to \mathrm{GL}(n, \R) to not be surjective, in contrast to the complex case discussed below. This follows from the fact that, for real-valued matrices, the right-hand side of the formula is always positive, while there exist invertible matrices with a negative determinant.


Real symmetric matrices

The matrix exponential of a real symmetric matrix is positive definite. Let S be an n×n real symmetric matrix and x \in \R^n a column vector. Using the elementary properties of the matrix exponential and of symmetric matrices, we have: x^Te^Sx = x^Te^{S/2}e^{S/2}x = x^T\left(e^{S/2}\right)^Te^{S/2}x = \left(e^{S/2}x\right)^Te^{S/2}x = \left\lVert e^{S/2}x\right\rVert^2 \geq 0. Since e^{S/2} is invertible, the equality only holds for x = 0, and we have x^Te^Sx > 0 for all non-zero x. Hence e^S is positive definite.
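A quick numerical sanity check of this fact (an illustrative sketch, using a randomly generated symmetric matrix, which is an assumption of this demo rather than anything from the article):

```python
# Sketch: the eigenvalues of exp(S) are strictly positive for symmetric S,
# consistent with positive definiteness. The random S is illustrative only.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
S = (M + M.T) / 2                      # a real symmetric matrix
print(np.linalg.eigvalsh(expm(S)))     # all entries > 0
```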


The exponential of sums

For any real numbers (scalars) x and y we know that the exponential function satisfies e^{x+y} = e^x e^y. The same is true for commuting matrices. If matrices X and Y commute (meaning that XY = YX), then e^{X+Y} = e^Xe^Y. However, for matrices that do not commute the above equality does not necessarily hold.


The Lie product formula

Even if X and Y do not commute, the exponential e^{X+Y} can be computed by the Lie product formula e^{X+Y} = \lim_{n \to \infty} \left(e^{X/n}e^{Y/n}\right)^n. Using a large finite n to approximate the above is the basis of the Suzuki–Trotter expansion, often used in numerical time evolution.
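The convergence can be observed numerically. The following sketch (illustrative only; the two simple non-commuting nilpotent matrices are chosen for the demo) shows the error of the n-step Lie–Trotter approximation shrinking as n grows.

```python
# Sketch: Lie product formula e^{X+Y} = lim_n (e^{X/n} e^{Y/n})^n
# for two non-commuting matrices; the error decreases roughly like 1/n.
import numpy as np
from scipy.linalg import expm

X = np.array([[0.0, 1.0], [0.0, 0.0]])
Y = np.array([[0.0, 0.0], [1.0, 0.0]])   # XY != YX

exact = expm(X + Y)
for n in (1, 10, 100, 1000):
    step = expm(X / n) @ expm(Y / n)
    approx = np.linalg.matrix_power(step, n)
    print(n, np.linalg.norm(approx - exact))
```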


The Baker–Campbell–Hausdorff formula

In the other direction, if X and Y are sufficiently small (but not necessarily commuting) matrices, we have e^Xe^Y = e^Z, where Z may be computed as a series in commutators of X and Y by means of the Baker–Campbell–Hausdorff formula: Z = X + Y + \frac{1}{2}[X,Y] + \frac{1}{12}[X,[X,Y]] - \frac{1}{12}[Y,[X,Y]] + \cdots, where the remaining terms are all iterated commutators involving X and Y. If X and Y commute, then all the commutators are zero and we have simply Z = X + Y.


Inequalities for exponentials of Hermitian matrices

For Hermitian matrices there is a notable theorem related to the trace of matrix exponentials. If A and B are Hermitian matrices, then \operatorname{tr}\exp(A + B) \leq \operatorname{tr}\left[\exp(A)\exp(B)\right]. There is no requirement of commutativity. There are counterexamples to show that the Golden–Thompson inequality cannot be extended to three matrices – and, in any event, \operatorname{tr}\left[\exp(A)\exp(B)\exp(C)\right] is not guaranteed to be real for Hermitian A, B, C. However, Lieb proved that it can be generalized to three matrices if we modify the expression as follows: \operatorname{tr}\exp(A + B + C) \leq \int_0^\infty \mathrm{d}t\, \operatorname{tr}\left[e^A\left(e^{-B} + t\right)^{-1}e^C\left(e^{-B} + t\right)^{-1}\right].
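The two-matrix inequality is easy to probe numerically. The sketch below (illustrative; the random Hermitian matrices are an assumption of the demo) compares both sides of the Golden–Thompson inequality.

```python
# Sketch: tr exp(A+B) <= tr[exp(A) exp(B)] for random Hermitian A, B.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

A, B = random_hermitian(3), random_hermitian(3)
lhs = np.trace(expm(A + B)).real
rhs = np.trace(expm(A) @ expm(B)).real   # this trace is real for Hermitian A, B
print(lhs, "<=", rhs, ":", lhs <= rhs)
```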


The exponential map

The exponential of a matrix is always an invertible matrix. The inverse matrix of e^X is given by e^{-X}. This is analogous to the fact that the exponential of a complex number is always nonzero. The matrix exponential then gives us a map \exp \colon M_n(\Complex) \to \mathrm{GL}(n, \Complex) from the space of all ''n''×''n'' matrices to the general linear group of degree n, i.e. the group of all ''n''×''n'' invertible matrices. In fact, this map is surjective, which means that every invertible matrix can be written as the exponential of some other matrix (for this, it is essential to consider the field C of complex numbers and not R). For any two matrices X and Y, \left\| e^{X+Y} - e^X\right\| \le \|Y\| e^{\|X\|} e^{\|Y\|}, where \|\cdot\| denotes an arbitrary matrix norm. It follows that the exponential map is continuous and Lipschitz continuous on compact subsets of M_n(\Complex). The map t \mapsto e^{tX}, \qquad t \in \R defines a smooth curve in the general linear group which passes through the identity element at t = 0. In fact, this gives a one-parameter subgroup of the general linear group since e^{tX}e^{sX} = e^{(t+s)X}. The derivative of this curve (or tangent vector) at a point t is given by \frac{d}{dt}e^{tX} = Xe^{tX} = e^{tX}X. The derivative at t = 0 is just the matrix X, which is to say that X generates this one-parameter subgroup. More generally, for a generic t-dependent exponent X(t), \frac{d}{dt}e^{X(t)} = \int_0^1 e^{\alpha X(t)} \frac{dX(t)}{dt} e^{(1 - \alpha) X(t)}\,d\alpha. Taking the above expression e^{X(t)} outside the integral sign and expanding the integrand with the help of the Hadamard lemma one can obtain the following useful expression for the derivative of the matrix exponent, \left(\frac{d}{dt}e^{X(t)}\right)e^{-X(t)} = \frac{d}{dt}X(t) + \frac{1}{2!}\left[X(t), \frac{d}{dt}X(t)\right] + \frac{1}{3!}\left[X(t), \left[X(t), \frac{d}{dt}X(t)\right]\right] + \cdots The coefficients in the expression above are different from what appears in the exponential. For a closed form, see derivative of the exponential map.


Directional derivatives when restricted to Hermitian matrices

Let X be an n \times n Hermitian matrix with distinct eigenvalues. Let X = E\,\textrm{diag}(\Lambda)\,E^* be its eigendecomposition, where E is a unitary matrix whose columns are the eigenvectors of X, E^* is its conjugate transpose, and \Lambda = \left(\lambda_1, \ldots, \lambda_n\right) the vector of corresponding eigenvalues. Then, for any n \times n Hermitian matrix V, the directional derivative of \exp \colon X \to e^X at X in the direction V is D \exp(X)[V] \triangleq \lim_{\epsilon \to 0} \frac{1}{2\epsilon} \left(e^{X + \epsilon V} - e^{X - \epsilon V}\right) = E(G \odot \bar{V}) E^*, where \bar{V} = E^* V E, the operator \odot denotes the Hadamard product, and, for all 1 \leq i, j \leq n, the matrix G is defined as G_{i,j} = \begin{cases} \dfrac{e^{\lambda_i} - e^{\lambda_j}}{\lambda_i - \lambda_j} & \text{if } i \neq j, \\ e^{\lambda_i} & \text{otherwise}. \end{cases} In addition, for any n \times n Hermitian matrix U, the second directional derivative in directions U and V is D^2 \exp(X)[U, V] \triangleq \lim_{\epsilon_u \to 0} \lim_{\epsilon_v \to 0} \frac{1}{4 \epsilon_u \epsilon_v} \left(e^{X + \epsilon_u U + \epsilon_v V} - e^{X - \epsilon_u U + \epsilon_v V} - e^{X + \epsilon_u U - \epsilon_v V} + e^{X - \epsilon_u U - \epsilon_v V} \right) = E\, F(U, V)\, E^*, where the matrix-valued function F is defined, for all 1 \leq i, j \leq n, as F(U, V)_{i,j} = \sum_{k=1}^n \phi_{i,j,k}\left(\bar{U}_{ik}\bar{V}_{jk}^* + \bar{V}_{ik}\bar{U}_{jk}^*\right) with \phi_{i,j,k} = \begin{cases} \dfrac{G_{ik} - G_{jk}}{\lambda_i - \lambda_j} & \text{if } i \ne j, \\[4pt] \dfrac{G_{ii} - G_{ik}}{\lambda_i - \lambda_k} & \text{if } i = j \text{ and } k \ne i, \\[4pt] \dfrac{G_{ii}}{2} & \text{if } i = j = k. \end{cases}
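The first-derivative formula translates directly into code. This sketch (illustrative only; the random Hermitian matrices, which almost surely have distinct eigenvalues, are assumptions of the demo) builds G from the eigendecomposition and compares against a central finite-difference quotient.

```python
# Sketch: D exp(X)[V] = E (G ∘ Vbar) E*, with Vbar = E* V E and G as above.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)

def random_hermitian(n):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (M + M.conj().T) / 2

X, V = random_hermitian(4), random_hermitian(4)
lam, E = np.linalg.eigh(X)              # X = E diag(lam) E*
Vbar = E.conj().T @ V @ E

# G_ij = (e^{l_i} - e^{l_j}) / (l_i - l_j) off the diagonal, e^{l_i} on it
diff = lam[:, None] - lam[None, :]
np.fill_diagonal(diff, 1.0)             # avoid 0/0; diagonal overwritten below
G = (np.exp(lam)[:, None] - np.exp(lam)[None, :]) / diff
np.fill_diagonal(G, np.exp(lam))

D = E @ (G * Vbar) @ E.conj().T

eps = 1e-6                              # central finite-difference check
fd = (expm(X + eps * V) - expm(X - eps * V)) / (2 * eps)
print(np.linalg.norm(D - fd))           # should be tiny
```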


Computing the matrix exponential

Finding reliable and accurate methods to compute the matrix exponential is difficult, and this is still a topic of considerable current research in mathematics and numerical analysis.
Matlab, GNU Octave, and SciPy all use the Padé approximant. In this section, we discuss methods that are applicable in principle to any matrix, and which can be carried out explicitly for small matrices. Subsequent sections describe methods suitable for numerical evaluation on large matrices.


Diagonalizable case

If a matrix A is diagonal: A = \begin{bmatrix} a_1 & 0 & \cdots & 0 \\ 0 & a_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_n \end{bmatrix}, then its exponential can be obtained by exponentiating each entry on the main diagonal: e^A = \begin{bmatrix} e^{a_1} & 0 & \cdots & 0 \\ 0 & e^{a_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & e^{a_n} \end{bmatrix}. This result also allows one to exponentiate diagonalizable matrices. If A = UDU^{-1} and D is diagonal, then e^A = Ue^DU^{-1}. Application of Sylvester's formula yields the same result. (To see this, note that addition and multiplication, hence also exponentiation, of diagonal matrices is equivalent to element-wise addition and multiplication, and hence exponentiation; in particular, the "one-dimensional" exponentiation is felt element-wise for the diagonal case.)


Example: Diagonalizable

For example, the matrix A = \begin{bmatrix} 1 & 4\\ 1 & 1 \end{bmatrix} can be diagonalized as \begin{bmatrix} -2 & 2\\ 1 & 1 \end{bmatrix}\begin{bmatrix} -1 & 0\\ 0 & 3 \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1 \end{bmatrix}^{-1}. Thus, e^A = \begin{bmatrix} -2 & 2\\ 1 & 1 \end{bmatrix}\exp\left(\begin{bmatrix} -1 & 0\\ 0 & 3 \end{bmatrix}\right)\begin{bmatrix} -2 & 2\\ 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} -2 & 2\\ 1 & 1 \end{bmatrix}\begin{bmatrix} \frac{1}{e} & 0\\ 0 & e^3 \end{bmatrix}\begin{bmatrix} -2 & 2\\ 1 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} \frac{e^4+1}{2e} & \frac{e^4-1}{e}\\ \frac{e^4-1}{4e} & \frac{e^4+1}{2e} \end{bmatrix}.
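This worked example can be reproduced numerically. The sketch below (illustrative; not part of the article) diagonalizes A with NumPy and checks the closed form above against SciPy's expm.

```python
# Sketch: e^A = P e^D P^{-1} for the diagonalizable example A = [[1, 4], [1, 1]].
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 4.0],
              [1.0, 1.0]])
evals, P = np.linalg.eig(A)                       # A = P diag(evals) P^{-1}
expA = P @ np.diag(np.exp(evals)) @ np.linalg.inv(P)

e = np.e
closed_form = np.array([[(e**4 + 1) / (2*e), (e**4 - 1) / e],
                        [(e**4 - 1) / (4*e), (e**4 + 1) / (2*e)]])
print(np.allclose(expA, closed_form), np.allclose(expm(A), closed_form))
```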


Nilpotent case

A matrix N is nilpotent if N^q = 0 for some integer q. In this case, the matrix exponential e^N can be computed directly from the series expansion, as the series terminates after a finite number of terms: e^N = I + N + \frac{1}{2}N^2 + \frac{1}{6}N^3 + \cdots + \frac{1}{(q - 1)!}N^{q-1} ~. Since the series has a finite number of steps, it is a matrix polynomial, which can be computed efficiently.
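Because the series terminates, the exponential of a nilpotent matrix can be accumulated exactly in a short loop. The sketch below (an illustrative 3×3 example, not from the article) stops as soon as a power of N vanishes.

```python
# Sketch: e^N = I + N + N^2/2! + ... for nilpotent N; the loop stops at N^q = 0.
import numpy as np

N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])     # N^3 = 0

expN = np.zeros_like(N)
term = np.eye(3)                    # holds N^k / k!
k = 1
while np.any(term):
    expN += term
    term = term @ N / k             # next term N^k/k! from N^{k-1}/(k-1)!
    k += 1
print(expN)
```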


General case


Using the Jordan–Chevalley decomposition

By the Jordan–Chevalley decomposition, any n \times n matrix ''X'' with complex entries can be expressed as X = A + N where * ''A'' is diagonalizable * ''N'' is nilpotent * ''A'' commutes with ''N'' This means that we can compute the exponential of ''X'' by reducing to the previous two cases: e^X = e^{A+N} = e^A e^N. Note that we need the commutativity of ''A'' and ''N'' for the last step to work.


Using the Jordan canonical form

A closely related method is, if the field is algebraically closed, to work with the Jordan form of X. Suppose that X = PJP^{-1} where J is the Jordan form of X. Then e^{X} = Pe^{J}P^{-1}. Also, since \begin{align} J &= J_{a_1}(\lambda_1) \oplus J_{a_2}(\lambda_2) \oplus \cdots \oplus J_{a_n}(\lambda_n), \\ e^J &= \exp \big( J_{a_1}(\lambda_1) \oplus J_{a_2}(\lambda_2) \oplus \cdots \oplus J_{a_n}(\lambda_n) \big) \\ &= \exp \big( J_{a_1}(\lambda_1) \big) \oplus \exp \big( J_{a_2}(\lambda_2) \big) \oplus \cdots \oplus \exp \big( J_{a_n}(\lambda_n) \big). \end{align} Therefore, we need only know how to compute the matrix exponential of a Jordan block. But each Jordan block is of the form \begin{align} & & J_a(\lambda) &= \lambda I + N \\ &\Rightarrow & e^{J_a(\lambda)} &= e^{\lambda I + N} = e^\lambda e^N. \end{align} where N is a special nilpotent matrix. The matrix exponential of J is then given by e^J = e^{\lambda_1} e^{N_{a_1}} \oplus e^{\lambda_2} e^{N_{a_2}} \oplus \cdots \oplus e^{\lambda_n} e^{N_{a_n}}.
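For exact arithmetic, the Jordan route can be followed symbolically. This sketch (illustrative; it assumes SymPy, whose jordan_form returns P and J with X = PJP^{-1}, and uses the matrix from the Illustrations section below) mirrors the procedure just described.

```python
# Sketch: exact e^X through the Jordan form, using SymPy.
import sympy as sp

X = sp.Matrix([[21, 17, 6],
               [-5, -1, -6],
               [4, 4, 16]])
P, J = X.jordan_form()                 # X = P * J * P**-1
expJ = J.exp()                         # exponential of the block-diagonal J
expX = sp.simplify(P * expJ * P.inv())
print(expX)
```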


Projection case

If P is a projection matrix (i.e. P is idempotent: P^2 = P), its matrix exponential is e^P = I + (e - 1)P. Deriving this by expansion of the exponential function, each power of P reduces to P, which becomes a common factor of the sum: e^P = \sum_{k=0}^{\infty} \frac{P^k}{k!} = I + \left(\sum_{k=1}^{\infty} \frac{1}{k!}\right)P = I + (e - 1)P ~.
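A one-line check of the projection formula (illustrative sketch; the rank-one projection below is an arbitrary example):

```python
# Sketch: e^P = I + (e - 1) P for an idempotent P.
import numpy as np
from scipy.linalg import expm

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])                     # P @ P == P
print(np.allclose(expm(P), np.eye(2) + (np.e - 1) * P))   # True
```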


Rotation case

For a simple rotation in which the perpendicular unit vectors a and b specify a plane, the rotation matrix R can be expressed in terms of a similar exponential function involving a generator G and angle θ. \begin{align} G &= \mathbf{ba}^\mathsf{T} - \mathbf{ab}^\mathsf{T} & P &= -G^2 = \mathbf{aa}^\mathsf{T} + \mathbf{bb}^\mathsf{T} \\ P^2 &= P & PG &= G = GP ~, \end{align} \begin{align} R\left( \theta \right) = e^{G\theta} &= I + G\sin (\theta) + G^2(1 - \cos(\theta)) \\ &= I - P + P\cos (\theta) + G\sin (\theta ) ~. \end{align} The formula for the exponential results from reducing the powers of G in the series expansion and identifying the respective series coefficients of G and G^2 with \sin(\theta) and (1 - \cos(\theta)) respectively. The second expression here for e^{G\theta} is the same as the expression for R(\theta) in the article containing the derivation of the generator, rotation matrix. In two dimensions, if a = \begin{bmatrix} 1 \\ 0 \end{bmatrix} and b = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, then G = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}, G^2 = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}, and R(\theta) = \begin{bmatrix}\cos(\theta) & -\sin(\theta)\\ \sin(\theta) & \cos(\theta)\end{bmatrix} = I \cos(\theta) + G \sin(\theta) reduces to the standard matrix for a plane rotation. The matrix P = -G^2 projects a vector onto the ab-plane and the rotation only affects this part of the vector. An example illustrating this is a rotation of \theta = \pi/6 in the plane spanned by a and b, \begin{align} \mathbf{a} &= \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} & \mathbf{b} &= \frac{1}{\sqrt{5}}\begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix} \end{align} \begin{align} G = \frac{1}{\sqrt{5}}&\begin{bmatrix} 0 & -1 & -2 \\ 1 & 0 & 0 \\ 2 & 0 & 0 \end{bmatrix} & P = -G^2 &= \frac{1}{5}\begin{bmatrix} 5 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 4 \end{bmatrix} \\ P\begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \frac{1}{5}&\begin{bmatrix} 5 \\ 8 \\ 16 \end{bmatrix} = \mathbf{a} + \frac{8}{\sqrt{5}}\mathbf{b} & R\left(\frac{\pi}{6}\right) &= \frac{1}{10}\begin{bmatrix} 5\sqrt{3} & -\sqrt{5} & -2\sqrt{5} \\ \sqrt{5} & 8 + \sqrt{3} & -4 + 2\sqrt{3} \\ 2\sqrt{5} & -4 + 2\sqrt{3} & 2 + 4\sqrt{3} \end{bmatrix} \end{align} Let N = I - P, so N^2 = N and its products with P and G are zero. This will allow us to evaluate powers of R. \begin{align} R\left( \frac{\pi}{6} \right) &= N + P\frac{\sqrt{3}}{2} + G\frac{1}{2} \\ R\left( \frac{\pi}{6} \right)^2 &= N + P\frac{1}{2} + G\frac{\sqrt{3}}{2} \\ R\left( \frac{\pi}{6} \right)^3 &= N + G \\ R\left( \frac{\pi}{6} \right)^6 &= N - P \\ R\left( \frac{\pi}{6} \right)^{12} &= N + P = I \end{align}
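The generator construction is short in code. This sketch (illustrative; not from the article) rebuilds G and P from the 3-dimensional example above and checks the closed-form R(θ) against expm(Gθ).

```python
# Sketch: R(theta) = I - P + P cos(theta) + G sin(theta) versus expm(G*theta),
# for the example plane spanned by a = e1 and b = (0, 1, 2)/sqrt(5).
import numpy as np
from scipy.linalg import expm

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 2.0]) / np.sqrt(5.0)
G = np.outer(b, a) - np.outer(a, b)        # generator  G = b a^T - a b^T
P = np.outer(a, a) + np.outer(b, b)        # projector  P = -G^2

theta = np.pi / 6
R_closed = np.eye(3) - P + P * np.cos(theta) + G * np.sin(theta)
print(np.allclose(expm(G * theta), R_closed))   # True
```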


Evaluation by Laurent series

By virtue of the Cayley–Hamilton theorem the matrix exponential is expressible as a polynomial of order n−1. If P and Q_t are nonzero polynomials in one variable, such that P(A) = 0, and if the meromorphic function f(z) = \frac{e^{t z} - Q_t(z)}{P(z)} is entire, then e^{t A} = Q_t(A). To prove this, multiply the first of the two above equalities by P(z) and replace z by A. Such a polynomial Q_t(z) can be found as follows (see Sylvester's formula). Letting a be a root of P, Q_{a,t}(z) is solved from the product of P by the principal part of the Laurent series of f at a: It is proportional to the relevant Frobenius covariant. Then the sum S_t of the Q_{a,t}, where a runs over all the roots of P, can be taken as a particular Q_t. All the other Q_t will be obtained by adding a multiple of P to S_t(z). In particular, S_t(z), the Lagrange-Sylvester polynomial, is the only Q_t whose degree is less than that of P.

Example: Consider the case of an arbitrary 2×2 matrix, A := \begin{bmatrix} a & b \\ c & d \end{bmatrix}. The exponential matrix e^{tA}, by virtue of the Cayley–Hamilton theorem, must be of the form e^{tA} = s_0(t)\, I + s_1(t)\, A. (For any complex number z and any \Complex-algebra B, we denote again by z the product of z by the unit of B.) Let \alpha and \beta be the roots of the characteristic polynomial of A, P(z) = z^2 - (a + d)\ z + ad - bc = (z - \alpha)(z - \beta) ~ . Then we have S_t(z) = e^{\alpha t} \frac{z - \beta}{\alpha - \beta} + e^{\beta t} \frac{z - \alpha}{\beta - \alpha}~, hence \begin{align} s_0(t) &= \frac{\alpha\,e^{\beta t} - \beta\,e^{\alpha t}}{\alpha - \beta}, & s_1(t) &= \frac{e^{\alpha t} - e^{\beta t}}{\alpha - \beta} \end{align} if \alpha \neq \beta; while, if \alpha = \beta, S_t(z) = e^{\alpha t} (1 + t (z - \alpha)) ~, so that \begin{align} s_0(t) &= (1 - \alpha\,t)\,e^{\alpha t}, & s_1(t) &= t\,e^{\alpha t}~. \end{align} Defining \begin{align} s &\equiv \frac{\alpha + \beta}{2} = \frac{\operatorname{tr} A}{2}~, & q &\equiv \frac{\alpha - \beta}{2} = \pm\sqrt{-\det\left(A - sI\right)}, \end{align} we have \begin{align} s_0(t) &= e^{st}\left(\cosh(qt) - s\frac{\sinh(qt)}{q}\right), & s_1(t) &= e^{st}\frac{\sinh(qt)}{q}, \end{align} where \frac{\sinh(qt)}{q} is 0 if t = 0, and t if q = 0. Thus, e^{tA} = e^{st}\left(\left(\cosh(qt) - s\frac{\sinh(qt)}{q}\right)I + \frac{\sinh(qt)}{q}\,A\right)~. Thus, as indicated above, the matrix A having decomposed into the sum of two mutually commuting pieces, the traceful piece and the traceless piece, A = sI + (A - sI)~, the matrix exponential reduces to a plain product of the exponentials of the two respective pieces. This is a formula often used in physics, as it amounts to the analog of Euler's formula for Pauli spin matrices, that is rotations of the doublet representation of the group SU(2).

The polynomial S_t can also be given the following "interpolation" characterization. Define e_t(z) \equiv e^{tz}, and n \equiv \deg P. Then S_t(z) is the unique degree < n polynomial which satisfies S_t^{(k)}(a) = e_t^{(k)}(a) whenever k is less than the multiplicity of a as a root of P. We assume, as we obviously can, that P is the minimal polynomial of A. We further assume that A is a diagonalizable matrix. In particular, the roots of P are simple, and the "interpolation" characterization indicates that S_t is given by the Lagrange interpolation formula, so it is the Lagrange−Sylvester polynomial. At the other extreme, if P = (z - a)^n, then S_t = e^{at}\ \sum_{k=0}^{n-1}\ \frac{t^k}{k!}\ (z - a)^k ~. The simplest case not covered by the above observations is when P = (z - a)^2\,(z - b) with a \neq b, which yields S_t = e^{at}\ \frac{z - b}{a - b}\ \left(1 + \left(t + \frac{1}{b - a}\right)(z - a)\right) + e^{bt}\ \frac{(z - a)^2}{(b - a)^2}~.


Evaluation by implementation of Sylvester's formula

A practical, expedited computation of the above reduces to the following rapid steps. Recall from above that exp(tA) for an ''n''×''n'' matrix A amounts to a linear combination of the first ''n''−1 powers of A by the Cayley–Hamilton theorem. For diagonalizable matrices, as illustrated above, e.g. in the 2×2 case, Sylvester's formula yields \exp(tA) = B_\alpha e^{\alpha t} + B_\beta e^{\beta t}, where the ''B''s are the Frobenius covariants of A. It is easiest, however, to simply solve for these ''B''s directly, by evaluating this expression and its first derivative at t = 0, in terms of A and I, to find the same answer as above. But this simple procedure also works for defective matrices, in a generalization due to Buchheim. This is illustrated here for a 4×4 example of a matrix which is ''not diagonalizable'', and the ''B''s are not projection matrices. Consider A = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & -\frac{1}{8} \\ 0 & 0 & \frac{1}{2} & \frac{1}{2} \end{bmatrix} ~, with eigenvalues \lambda_1 = \frac{3}{4} and \lambda_2 = 1, each with a multiplicity of two. Consider the exponential of each eigenvalue multiplied by t, e^{\lambda_i t}. Multiply each exponentiated eigenvalue by the corresponding undetermined coefficient matrix B_i. If the eigenvalues have an algebraic multiplicity greater than 1, then repeat the process, but now multiplying by an extra factor of t for each repetition, to ensure linear independence. (If one eigenvalue had a multiplicity of three, then there would be the three terms: B_{i_1} e^{\lambda_i t}, ~ B_{i_2} t e^{\lambda_i t}, ~ B_{i_3} t^2 e^{\lambda_i t}. By contrast, when all eigenvalues are distinct, the ''B''s are just the Frobenius covariants, and solving for them as below just amounts to the inversion of the Vandermonde matrix of these 4 eigenvalues.) Sum all such terms, here four such, \begin{align} e^{At} &= B_{1_1} e^{\lambda_1 t} + B_{1_2} t e^{\lambda_1 t} + B_{2_1} e^{\lambda_2 t} + B_{2_2} t e^{\lambda_2 t}, \\ e^{At} &= B_{1_1} e^{\frac{3}{4} t} + B_{1_2} t e^{\frac{3}{4} t} + B_{2_1} e^{1 t} + B_{2_2} t e^{1 t} ~. \end{align} To solve for all of the unknown matrices B in terms of the first three powers of A and the identity, one needs four equations, the above one providing one such at t = 0. Further, differentiate it with respect to t, A e^{A t} = \frac{3}{4} B_{1_1} e^{\frac{3}{4} t} + \left( \frac{3}{4} t + 1 \right) B_{1_2} e^{\frac{3}{4} t} + 1 B_{2_1} e^{1 t} + \left(1 t + 1 \right) B_{2_2} e^{1 t} ~, and again, \begin{align} A^2 e^{At} &= \left(\frac{3}{4}\right)^2 B_{1_1} e^{\frac{3}{4} t} + \left( \left(\frac{3}{4}\right)^2 t + \left( \frac{3}{4} + 1 \cdot \frac{3}{4}\right) \right) B_{1_2} e^{\frac{3}{4} t} + B_{2_1} e^{1 t} + \left(1^2 t + (1 + 1 \cdot 1 )\right) B_{2_2} e^{1 t} \\ &= \left(\frac{3}{4}\right)^2 B_{1_1} e^{\frac{3}{4} t} + \left( \left(\frac{3}{4}\right)^2 t + \frac{3}{2} \right) B_{1_2} e^{\frac{3}{4} t} + B_{2_1} e^{t} + \left(t + 2\right) B_{2_2} e^{t} ~, \end{align} and once more, \begin{align} A^3 e^{At} &= \left(\frac{3}{4}\right)^3 B_{1_1} e^{\frac{3}{4} t} + \left( \left(\frac{3}{4}\right)^3 t + \left( \left(\frac{3}{4}\right)^2 + \left(\frac{3}{2}\right) \cdot \frac{3}{4}\right) \right) B_{1_2} e^{\frac{3}{4} t} + B_{2_1} e^{1 t} + \left(1^3 t + (1 + 2) \cdot 1 \right) B_{2_2} e^{1 t} \\ &= \left(\frac{3}{4}\right)^3 B_{1_1} e^{\frac{3}{4} t}\! + \left( \left(\frac{3}{4}\right)^3 t\! + \frac{27}{16} \right) B_{1_2} e^{\frac{3}{4} t}\! + B_{2_1} e^{t}\! + \left(t + 3\cdot 1\right) B_{2_2} e^{t} ~. \end{align} (In the general case, ''n''−1 derivatives need be taken.)
Setting t = 0 in these four equations, the four coefficient matrices ''B''s may now be solved for, \begin{align} I &= B_{1_1} + B_{2_1} \\ A &= \frac{3}{4} B_{1_1} + B_{1_2} + B_{2_1} + B_{2_2} \\ A^2 &= \left(\frac{3}{4}\right)^2 B_{1_1} + \frac{3}{2} B_{1_2} + B_{2_1} + 2 B_{2_2} \\ A^3 &= \left(\frac{3}{4}\right)^3 B_{1_1} + \frac{27}{16} B_{1_2} + B_{2_1} + 3 B_{2_2} ~, \end{align} to yield \begin{align} B_{1_1} &= 128 A^3 - 366 A^2 + 288 A - 80 I \\ B_{1_2} &= 16 A^3 - 44 A^2 + 40 A - 12 I \\ B_{2_1} &= -128 A^3 + 366 A^2 - 288 A + 80 I \\ B_{2_2} &= 16 A^3 - 40 A^2 + 33 A - 9 I ~. \end{align} Substituting with the value for A yields the coefficient matrices \begin{align} B_{1_1} &= \begin{bmatrix}0 & 0 & 48 & -16\\ 0 & 0 & -8 & 2\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}\\ B_{1_2} &= \begin{bmatrix}0 & 0 & 4 & -2\\ 0 & 0 & -1 & \frac{1}{2}\\ 0 & 0 & \frac{1}{4} & -\frac{1}{8}\\ 0 & 0 & \frac{1}{2} & -\frac{1}{4} \end{bmatrix}\\ B_{2_1} &= \begin{bmatrix}1 & 0 & -48 & 16\\ 0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{bmatrix}\\ B_{2_2} &= \begin{bmatrix}0 & 1 & 8 & -2\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\end{bmatrix} \end{align} so the final answer is e^{tA} = \begin{bmatrix} e^t & te^t & \left(8t - 48\right) e^t\! + \left(4t + 48\right)e^{\frac{3}{4}t} & \left(16 - 2\,t\right)e^t\! + \left(-2t - 16\right)e^{\frac{3}{4}t}\\ 0 & e^t & 8e^t\! + \left(-t - 8\right) e^{\frac{3}{4}t} & -2e^t + \frac{t + 4}{2}e^{\frac{3}{4}t}\\ 0 & 0 & \frac{t + 4}{4}e^{\frac{3}{4}t} & -\frac{t}{8}e^{\frac{3}{4}t}\\ 0 & 0 & \frac{t}{2}e^{\frac{3}{4}t} & -\frac{t - 4}{4}e^{\frac{3}{4}t} \end{bmatrix}~. The procedure is much shorter than Putzer's algorithm sometimes utilized in such cases.


Illustrations

Suppose that we want to compute the exponential of B = \begin{bmatrix} 21 & 17 & 6 \\ -5 & -1 & -6 \\ 4 & 4 & 16 \end{bmatrix}. Its Jordan form is J = P^{-1}BP = \begin{bmatrix} 4 & 0 & 0 \\ 0 & 16 & 1 \\ 0 & 0 & 16 \end{bmatrix}, where the matrix P is given by P = \begin{bmatrix} -\frac14 & 2 & \frac54 \\ \frac14 & -2 & -\frac14 \\ 0 & 4 & 0 \end{bmatrix}. Let us first calculate exp(J). We have J = J_1(4) \oplus J_2(16). The exponential of a 1×1 matrix is just the exponential of the one entry of the matrix, so \exp(J_1(4)) = \left[ e^4 \right]. The exponential of J_2(16) can be calculated by the formula e^{\lambda I + N} = e^\lambda e^N mentioned above; this yields \begin{align} &\exp \left( \begin{bmatrix} 16 & 1 \\ 0 & 16 \end{bmatrix} \right) = e^{16} \exp \left( \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \right) \\ {}={} &e^{16} \left(\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} + \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} + {1 \over 2!}\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix} + \cdots \right) = \begin{bmatrix} e^{16} & e^{16} \\ 0 & e^{16} \end{bmatrix}. \end{align} Therefore, the exponential of the original matrix is \begin{align} \exp(B) &= P \exp(J) P^{-1} = P \begin{bmatrix} e^4 & 0 & 0 \\ 0 & e^{16} & e^{16} \\ 0 & 0 & e^{16} \end{bmatrix} P^{-1} \\ &= {1 \over 4} \begin{bmatrix} 13e^{16} - e^4 & 13e^{16} - 5e^4 & 2e^{16} - 2e^4 \\ -9e^{16} + e^4 & -9e^{16} + 5e^4 & -2e^{16} + 2e^4 \\ 16e^{16} & 16e^{16} & 4e^{16} \end{bmatrix}. \end{align}
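A numerical cross-check of this hand computation (illustrative sketch; it simply compares the article's closed form against SciPy):

```python
# Sketch: comparing the closed form for exp(B) with SciPy's expm.
import numpy as np
from scipy.linalg import expm

B = np.array([[21.0, 17.0, 6.0],
              [-5.0, -1.0, -6.0],
              [4.0, 4.0, 16.0]])
e4, e16 = np.exp(4.0), np.exp(16.0)
closed_form = 0.25 * np.array([
    [13*e16 - e4,  13*e16 - 5*e4,  2*e16 - 2*e4],
    [-9*e16 + e4,  -9*e16 + 5*e4, -2*e16 + 2*e4],
    [16*e16,       16*e16,         4*e16],
])
print(np.allclose(expm(B), closed_form))   # True
```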


Applications


Linear differential equations

The matrix exponential has applications to systems of linear differential equations. (See also matrix differential equation.) Recall from earlier in this article that a ''homogeneous'' differential equation of the form \mathbf{y}' = A\mathbf{y} has solution e^{At}\mathbf{y}(0). If we consider the vector \mathbf{y}(t) = \begin{bmatrix} y_1(t) \\ \vdots \\ y_n(t) \end{bmatrix} ~, we can express a system of ''inhomogeneous'' coupled linear differential equations as \mathbf{y}'(t) = A\mathbf{y}(t) + \mathbf{b}(t). Making an ansatz to use an integrating factor of e^{-At} and multiplying throughout, yields \begin{align} & & e^{-At}\mathbf{y}' - e^{-At}A\mathbf{y} &= e^{-At}\mathbf{b} \\ &\Rightarrow & e^{-At}\mathbf{y}' - Ae^{-At}\mathbf{y} &= e^{-At}\mathbf{b} \\ &\Rightarrow & \frac{d}{dt} \left(e^{-At}\mathbf{y}\right) &= e^{-At}\mathbf{b}~. \end{align} The second step is possible due to the fact that, if AB = BA, then e^{At}B = Be^{At}. So, calculating e^{At} leads to the solution to the system, by simply integrating the third step with respect to t and multiplying by e^{At} to eliminate the exponent on the left-hand side. Notice that while e^{At} is a matrix, given that it is a matrix exponential, we can say that e^{At}e^{-At} = I; in other words, e^{At} is invertible with inverse e^{-At}.
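In code, the resulting solution formula \mathbf{y}(t) = e^{At}\mathbf{y}(0) + \int_0^t e^{A(t-u)}\mathbf{b}(u)\,du can be evaluated by quadrature. The sketch below is illustrative only; it borrows the A and b of the inhomogeneous example that follows and integrates numerically rather than in closed form.

```python
# Sketch: y(t) = e^{At} y0 + integral_0^t e^{A(t-u)} b(u) du, by quadrature.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[2.0, -1.0, 1.0],
              [0.0, 3.0, -1.0],
              [2.0, 1.0, 3.0]])
b = lambda u: np.exp(2.0 * u) * np.array([1.0, 0.0, 1.0])
y0 = np.array([1.0, 1.0, 1.0])

def y(t):
    particular, _ = quad_vec(lambda u: expm(A * (t - u)) @ b(u), 0.0, t)
    return expm(A * t) @ y0 + particular

print(y(0.5))
```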


Example (homogeneous)

Consider the system \begin{matrix} x' &=& 2x & -y & +z \\ y' &=& & 3y & -z \\ z' &=& 2x & +y & +3z \end{matrix}~. The associated defective matrix is A = \begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{bmatrix}~. The matrix exponential is e^{tA} = \frac{1}{2}\begin{bmatrix} e^{2t}\left( 1 + e^{2t} - 2t\right) & -2te^{2t} & e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) & 2(t + 1)e^{2t} & -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right) & 2te^{2t} & e^{2t}\left( 1 + e^{2t}\right) \end{bmatrix}~, so that the general solution of the homogeneous system is \begin{bmatrix}x \\y \\ z\end{bmatrix} = \frac{x(0)}{2}\begin{bmatrix}e^{2t}\left(1 + e^{2t} - 2t\right) \\ -e^{2t}\left(-1 + e^{2t} - 2t\right) \\ e^{2t}\left(-1 + e^{2t} + 2t\right)\end{bmatrix} + \frac{y(0)}{2}\begin{bmatrix}-2te^{2t} \\ 2(t + 1)e^{2t} \\ 2te^{2t}\end{bmatrix} + \frac{z(0)}{2}\begin{bmatrix}e^{2t}\left(-1 + e^{2t}\right) \\ -e^{2t}\left(-1 + e^{2t}\right) \\ e^{2t}\left(1 + e^{2t}\right)\end{bmatrix} ~, amounting to \begin{align} 2x &= x(0)e^{2t}\left(1 + e^{2t} - 2t\right) + y(0)\left(-2te^{2t}\right) + z(0)e^{2t}\left(-1 + e^{2t}\right) \\ 2y &= x(0)\left(-e^{2t}\right)\left(-1 + e^{2t} - 2t\right) + y(0)2(t + 1)e^{2t} + z(0)\left(-e^{2t}\right)\left(-1 + e^{2t}\right) \\ 2z &= x(0)e^{2t}\left(-1 + e^{2t} + 2t\right) + y(0)2te^{2t} + z(0)e^{2t}\left(1 + e^{2t}\right) ~. \end{align}


Example (inhomogeneous)

Consider now the inhomogeneous system \begin{matrix} x' &=& 2x & - & y & + & z & + & e^{2t} \\ y' &=& & & 3y & - & z & \\ z' &=& 2x & + & y & + & 3z & + & e^{2t} \end{matrix} ~. We again have A = \begin{bmatrix} 2 & -1 & 1 \\ 0 & 3 & -1 \\ 2 & 1 & 3 \end{bmatrix}~, and \mathbf{b} = e^{2t}\begin{bmatrix}1 \\0\\1\end{bmatrix}. From before, we already have the general solution to the homogeneous equation. Since the sum of the homogeneous and particular solutions give the general solution to the inhomogeneous problem, we now only need find the particular solution. We have, by above, \begin{align} \mathbf{y}_p &= e^{tA}\int_0^t e^{-uA}\begin{bmatrix}e^{2u} \\0\\e^{2u}\end{bmatrix}\,du + e^{tA}\mathbf{c} \\ &= e^{tA}\int_0^t \begin{bmatrix} 2e^u - 2ue^{2u} & -2ue^{2u} & 0 \\ -2e^u + 2(u+1)e^{2u} & 2(u+1)e^{2u} & 0 \\ 2ue^{2u} & 2ue^{2u} & 2e^u \end{bmatrix}\begin{bmatrix}e^{2u} \\0 \\e^{2u}\end{bmatrix}\,du + e^{tA}\mathbf{c} \\ &= e^{tA}\int_0^t \begin{bmatrix} e^{2u}\left( 2e^u - 2ue^{2u}\right) \\ e^{2u}\left(-2e^u + 2(1 + u)e^{2u}\right) \\ 2e^{3u} + 2ue^{4u} \end{bmatrix}\,du + e^{tA}\mathbf{c} \\ &= e^{tA}\begin{bmatrix} -{1 \over 24}e^{3t}\left(3e^t(4t - 1) - 16\right) \\ {1 \over 24}e^{3t}\left(3e^t(4t + 4) - 16\right) \\ {1 \over 24}e^{3t}\left(3e^t(4t - 1) - 16\right) \end{bmatrix} + \begin{bmatrix} 2e^t - 2te^{2t} & -2te^{2t} & 0 \\ -2e^t + 2(t + 1)e^{2t} & 2(t + 1)e^{2t} & 0 \\ 2te^{2t} & 2te^{2t} & 2e^t \end{bmatrix}\begin{bmatrix}c_1 \\c_2 \\c_3\end{bmatrix} ~, \end{align} which could be further simplified to get the requisite particular solution determined through variation of parameters. Note that \mathbf{c} = \mathbf{y}_p(0). For more rigor, see the following generalization.


Inhomogeneous case generalization: variation of parameters

For the inhomogeneous case, we can use integrating factors (a method akin to variation of parameters). We seek a particular solution of the form \mathbf{y}_p(t) = e^{tA}\,\mathbf{z}(t), \begin{align} \mathbf{y}_p'(t) & = \left(e^{tA}\right)'\mathbf{z}(t) + e^{tA}\mathbf{z}'(t) \\ & = Ae^{tA}\mathbf{z}(t) + e^{tA}\mathbf{z}'(t) \\ & = A\mathbf{y}_p(t) + e^{tA}\mathbf{z}'(t)~. \end{align} For \mathbf{y}_p to be a solution, \begin{align} e^{tA}\mathbf{z}'(t) &= \mathbf{b}(t) \\ \mathbf{z}'(t) &= \left(e^{tA}\right)^{-1}\mathbf{b}(t) \\ \mathbf{z}(t) &= \int_0^t e^{-uA}\mathbf{b}(u)\,du + \mathbf{c} ~. \end{align} Thus, \begin{align} \mathbf{y}_p(t) & = e^{tA}\int_0^t e^{-uA}\mathbf{b}(u)\,du + e^{tA}\mathbf{c} \\ & = \int_0^t e^{(t - u)A}\mathbf{b}(u)\,du + e^{tA}\mathbf{c}~, \end{align} where \mathbf{c} is determined by the initial conditions of the problem. More precisely, consider the equation Y' - A\ Y = F(t) with the initial condition Y(t_0) = Y_0, where
* A is an n by n complex matrix,
* F is a continuous function from some open interval I to \Complex^n,
* t_0 is a point of I, and
* Y_0 is a vector of \Complex^n.
Multiplying the equation by the integrating factor e^{-tA} and integrating from t_0 to t yields e^{-tA}\ Y(t) - e^{-t_0 A}\ Y_0 = \int_{t_0}^t e^{-xA}\ F(x)\ dx~. Left-multiplying this equality by e^{tA} yields Y(t) = e^{(t - t_0)A}\ Y_0 + \int_{t_0}^t e^{(t - x)A}\ F(x)\ dx ~. We claim that the solution to the equation P(d/dt)\ y = f(t) with the initial conditions y^{(k)}(t_0) = y_k for 0 \le k < n is y(t) = \sum_{k=0}^{n-1}\ y_k\ s_k(t - t_0) + \int_{t_0}^t s_{n-1}(t - x)\ f(x)\ dx ~, where the notation is as follows:
* P \in \mathbb{C}[X] is a monic polynomial of degree n > 0,
* f is a continuous complex valued function defined on some open interval I,
* t_0 is a point of I,
* y_k is a complex number, and
* s_k(t) is the coefficient of X^k in the polynomial denoted by S_t \in \mathbb{C}[X] in Subsection Evaluation by Laurent series above.
To justify this claim, we transform our order n scalar equation into an order one vector equation by the usual reduction to a first order system. Our vector equation takes the form \frac{dY}{dt} - A\ Y = F(t), \quad Y(t_0) = Y_0, where A is the transpose companion matrix of P. We solve this equation as explained above, computing the matrix exponentials by the observation made in Subsection Evaluation by implementation of Sylvester's formula above. In the case n = 2 we get the following statement. The solution to y'' - (\alpha + \beta)\ y' + \alpha\,\beta\ y = f(t), \quad y(t_0) = y_0, \quad y'(t_0) = y_1 is y(t) = y_0\ s_0(t - t_0) + y_1\ s_1(t - t_0) + \int_{t_0}^t s_1(t - x)\,f(x)\ dx, where the functions s_0 and s_1 are as in Subsection Evaluation by Laurent series above.
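As a sketch of the n = 2 case (illustrative only: the reduction uses a standard companion matrix rather than anything specific from the article, the forcing term f is an arbitrary choice, and the integral is evaluated numerically instead of through the s_k):

```python
# Sketch: y'' - (a+b) y' + a*b y = f(t), y(0)=y0, y'(0)=y1, solved by
# reduction to Y' = A Y + F with a companion matrix A.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

a, b = 1.0, 2.0                       # roots of P(z) = (z - a)(z - b)
A = np.array([[0.0, 1.0],
              [-a * b, a + b]])       # companion matrix of P
f = lambda t: np.sin(t)               # illustrative forcing term
Y0 = np.array([1.0, 0.0])             # (y(0), y'(0))

def Y(t):
    F = lambda x: np.array([0.0, f(x)])
    integral, _ = quad_vec(lambda x: expm(A * (t - x)) @ F(x), 0.0, t)
    return expm(A * t) @ Y0 + integral

print(Y(1.0)[0])                      # the value y(1)
```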


Matrix-matrix exponentials

The matrix exponential of another matrix (matrix-matrix exponential) is defined as X^Y = e^{\log(X) \cdot Y} and {}^Y\!X = e^{Y \cdot \log(X)} for any normal and non-singular n×n matrix X, and any complex n×n matrix Y. For matrix-matrix exponentials, there is a distinction between the left exponential {}^Y\!X and the right exponential X^Y, because the multiplication operator for matrix-to-matrix is not commutative. Moreover,
* If X is normal and non-singular, then X^Y and {}^Y\!X have the same set of eigenvalues.
* If X is normal and non-singular, Y is normal, and XY = YX, then X^Y = {}^Y\!X.
* If X is normal and non-singular, and X, Y, Z commute with each other, then X^{Y+Z} = X^Y \cdot X^Z and {}^{Y+Z}\!X = {}^Y\!X \cdot {}^Z\!X.
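Both exponentials can be computed through the matrix logarithm. This sketch is illustrative (the matrices are arbitrary examples; logm assumes X has a well-defined logarithm, which holds for the normal, non-singular X required here) and shows the left and right versions.

```python
# Sketch: right and left matrix-matrix exponentials, X^Y and ^Y X.
import numpy as np
from scipy.linalg import expm, logm

X = np.array([[2.0, 0.0],
              [0.0, 3.0]])            # normal and non-singular
Y = np.array([[0.0, 1.0],
              [1.0, 0.0]])

right = expm(logm(X) @ Y)             # X^Y  = e^{log(X) . Y}
left  = expm(Y @ logm(X))             # ^Y X = e^{Y . log(X)}
print(right)
print(left)
```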


See also

* Matrix function
* Matrix logarithm
* C0-semigroup
* Exponential function
* Exponential map (Lie theory)
* Magnus expansion
* Derivative of the exponential map
* Vector flow
* Golden–Thompson inequality
* Phase-type distribution
* Lie product formula
* Baker–Campbell–Hausdorff formula
* Frobenius covariant
* Sylvester's formula
* Trigonometric functions of matrices

