Derivative of the exponential map

In the theory of Lie groups, the exponential map is a map from the Lie algebra 𝔤 of a Lie group G into G. In case G is a matrix Lie group, the exponential map reduces to the matrix exponential. The exponential map, denoted exp, is analytic and has as such a derivative (d/dt)exp(X(t)), where X(t) is a path in the Lie algebra, and a closely related differential dexp. The formula for dexp was first proved by Friedrich Schur (1891). It was later elaborated by Henri Poincaré (1899) in the context of the problem of expressing Lie group multiplication using Lie algebraic terms. It is also sometimes known as Duhamel's formula.

The formula is important both in pure and applied mathematics. It enters into proofs of theorems such as the Baker–Campbell–Hausdorff formula, and it is used frequently in physics, for example in quantum field theory, as in the Magnus expansion in perturbation theory, and in lattice gauge theory.

Throughout, the notations exp(X) and e^X will be used interchangeably to denote the exponential given an argument, ''except'' where, as noted, the notations have dedicated ''distinct'' meanings. The calculus-style notation is preferred here for better readability in equations. On the other hand, the exp-style is sometimes more convenient for inline equations, and is necessary on the rare occasions when there is a real distinction to be made.


Statement

The derivative of the exponential map is given by (Theorem 5, Section 1.2)
:\frac{d}{dt}e^{X(t)} = e^{X(t)}\frac{1 - e^{-\mathrm{ad}_{X(t)}}}{\mathrm{ad}_{X(t)}}\frac{dX(t)}{dt}. \qquad (1)
;Explanation
* X = X(t) is a C¹ path in the Lie algebra with derivative X'(t) = dX(t)/dt,
* ad_X is the linear operator on the Lie algebra given by ad_X Y = [X, Y],
* the fraction denotes the formal power series
:\frac{1 - e^{-\mathrm{ad}_X}}{\mathrm{ad}_X} = \sum_{k=0}^\infty \frac{(-1)^k}{(k+1)!}\left(\mathrm{ad}_X\right)^k. \qquad (2)
To compute the differential dexp_X of exp at X, the standard recipe
:d\exp_X Y = \left.\frac{d}{dt}e^{Z(t)}\right|_{t=0}, \quad Z(0) = X,\ Z'(0) = Y
is employed. With Z(t) = X + tY the result follows immediately from (1). In particular, dexp_0 is the identity map of the Lie algebra, because T_0𝔤 ≃ 𝔤 (since 𝔤 is a vector space) and the series (2) reduces to the identity operator at X = 0.
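The following is a minimal numerical sketch of (1) for a matrix Lie group, so that exp is the matrix exponential; it is not taken from the cited references. The path X(t) = A + tB, the matrices A and B, the truncation order and the finite-difference step are arbitrary illustrative choices, and NumPy/SciPy are assumed to be available.

<syntaxhighlight lang="python">
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) * 0.3      # X(0) for the path X(t) = A + t*B
B = rng.standard_normal((n, n)) * 0.3      # X'(t) = B along that path

def dexp_factor(X, Y, terms=30):
    """Apply the series (1 - e^{-ad_X})/ad_X = sum_k (-1)^k/(k+1)! ad_X^k to Y."""
    acc, term = np.zeros_like(Y), Y.copy()
    for k in range(terms):
        acc = acc + (-1) ** k / math.factorial(k + 1) * term
        term = X @ term - term @ X          # next power of ad_X applied to Y
    return acc

h = 1e-6
lhs = (expm(A + h * B) - expm(A - h * B)) / (2 * h)   # d/dt e^{X(t)} at t = 0
rhs = expm(A) @ dexp_factor(A, B)                     # right-hand side of (1)
print(np.max(np.abs(lhs - rhs)))   # should be tiny, limited by the step h
</syntaxhighlight>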


Proof

The proof given below assumes a matrix Lie group. This means that the exponential mapping from the Lie algebra to the matrix Lie group is given by the usual power series, i.e. matrix exponentiation. The conclusion of the proof still holds in the general case, provided each occurrence of exp is correctly interpreted. See the comments on the general case below.

The outline of the proof makes use of the technique of differentiation with respect to s of the parametrized expression
:\Gamma(s, t) = e^{-sX(t)}\frac{\partial}{\partial t}e^{sX(t)}
to obtain a first-order differential equation for Γ which can then be solved by direct integration in s. The solution is then Γ(1, t) = e^{-X}(∂/∂t)e^{X}, expressed through the series (2).

;Lemma
Let Ad denote the adjoint action of the group on its Lie algebra. The action is given by Ad_A Z = AZA^{-1} for A ∈ G, Z ∈ 𝔤. A frequently useful relationship between Ad and ad is given by
:\mathrm{Ad}_{e^X} = e^{\mathrm{ad}_X}. \qquad (3)
A proof of the identity can be found elsewhere (see, e.g., the article on the adjoint representation). The relationship is simply that between a representation of a Lie group and that of its Lie algebra according to the Lie correspondence, since both Ad and ad are representations, with ad = dAd.
;Proof
Using the product rule twice one finds
:\frac{\partial\Gamma}{\partial s} = e^{-sX}(-X)\frac{\partial}{\partial t}e^{sX} + e^{-sX}\frac{\partial}{\partial t}\left[X(t)e^{sX}\right] = e^{-sX}\frac{\partial X}{\partial t}e^{sX}.
Then one observes that
:\frac{\partial\Gamma}{\partial s} = \mathrm{Ad}_{e^{-sX}}X' = e^{-\mathrm{ad}_{sX}}X',
by (3) above. Integration yields
:\Gamma(1, t) = e^{-X}\frac{\partial}{\partial t}e^{X} = \int_0^1 \frac{\partial\Gamma}{\partial s}\,ds = \int_0^1 e^{-\mathrm{ad}_{sX}}X'\,ds.
Using the formal power series to expand the exponential, integrating term by term, and finally recognizing (2),
:\Gamma(1, t) = \int_0^1 \sum_{k=0}^\infty \frac{(-1)^k s^k}{k!}\left(\mathrm{ad}_X\right)^k X'\,ds = \sum_{k=0}^\infty \frac{(-1)^k}{(k+1)!}\left(\mathrm{ad}_X\right)^k \frac{dX}{dt} = \frac{1 - e^{-\mathrm{ad}_X}}{\mathrm{ad}_X}\frac{dX}{dt},
and the result follows. The proof, as presented here, is essentially the one given in . A proof with a more algebraic touch can be found in .
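The lemma (3) can also be checked numerically for matrices. The sketch below, assuming NumPy/SciPy, compares Ad_{e^X}Z = e^X Z e^{-X} with a truncated power series for e^{ad_X} applied to Z; the test matrices and truncation order are arbitrary.

<syntaxhighlight lang="python">
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
X = rng.standard_normal((n, n)) * 0.4
Z = rng.standard_normal((n, n))

# Left-hand side: the adjoint action of the group element e^X on Z.
lhs = expm(X) @ Z @ expm(-X)

# Right-hand side: e^{ad_X} applied to Z, summed as a power series in ad_X.
rhs, term = np.zeros_like(Z), Z.copy()
for k in range(40):
    rhs = rhs + term / math.factorial(k)
    term = X @ term - term @ X            # ad_X(term) = [X, term]

print(np.max(np.abs(lhs - rhs)))          # should be near machine precision
</syntaxhighlight>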


Comments on the general case

The formula in the general case is given by
:\frac{d}{dt}\exp(C(t)) = \exp(C)\,\phi(-\mathrm{ad}(C))\,C'(t),
where
:\phi(z) = \frac{e^z - 1}{z} = 1 + \frac{1}{2!}z + \frac{1}{3!}z^2 + \cdots,
which formally reduces to
:\frac{d}{dt}\exp(C(t)) = \exp(C)\frac{1 - e^{-\mathrm{ad}_C}}{\mathrm{ad}_C}\frac{dC}{dt}.
Here the exp-notation is used for the exponential mapping of the Lie algebra and the calculus-style notation in the fraction indicates the usual formal series expansion. It holds that
:\tau(\log z)\,\phi(-\log z) = 1
for |z − 1| < 1, where
:\tau(w) = \frac{w}{1 - e^{-w}}.
Here, τ is the exponential generating function of (−1)^k b_k, where b_k are the Bernoulli numbers. For more information and two full proofs in the general case, see the freely available reference.
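The relation between φ, τ and the Bernoulli numbers can be checked symbolically. The short sketch below assumes SymPy; it verifies τ(w)φ(−w) = 1, which is equivalent to the identity above with w = log z, and prints the first Taylor coefficients of τ, which are the numbers (−1)^k b_k/k! (1, 1/2, 1/12, 0, −1/720, ...).

<syntaxhighlight lang="python">
import sympy as sp

w = sp.symbols('w')
phi = (sp.exp(w) - 1) / w        # phi(z) = (e^z - 1)/z, evaluated at z = w
tau = w / (1 - sp.exp(-w))       # tau(w) = w / (1 - e^{-w})

# tau(log z) * phi(-log z) = 1 is the same as tau(w) * phi(-w) = 1.
print(sp.simplify(tau * phi.subs(w, -w)))   # -> 1

# Taylor coefficients of tau: 1 + w/2 + w**2/12 - w**4/720 + w**6/30240 + ...,
# i.e. (-1)^k b_k / k! with b_k the Bernoulli numbers.
print(sp.series(tau, w, 0, 8))
</syntaxhighlight>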


A direct formal argument

An immediate way to see what the answer ''must'' be, provided it exists, is the following. Existence needs to be proved separately in each case. By direct differentiation of the standard limit definition of the exponential, and exchanging the order of differentiation and limit,
:\begin{align}\frac{d}{dt}e^{X(t)} &= \lim_{N\to\infty}\frac{d}{dt}\left(1 + \frac{X(t)}{N}\right)^N\\ &= \lim_{N\to\infty}\sum_{k=1}^N\left(1 + \frac{X(t)}{N}\right)^{N-k}\frac{1}{N}\frac{dX(t)}{dt}\left(1 + \frac{X(t)}{N}\right)^{k-1},\end{align}
where each factor owes its place to the non-commutativity of X(t) and X'(t). Dividing the unit interval into N sections, Δs = Δk/N (Δk = 1 since the sum indices are integers), and letting N → ∞, k/N → s, Δs → ds, Σ → ∫, yields
:\begin{align}\frac{d}{dt}e^{X} &= \int_0^1 e^{(1 - s)X}X'e^{sX}\,ds = e^X\int_0^1 \mathrm{Ad}_{e^{-sX}}X'\,ds\\ &= e^X\int_0^1 e^{-\mathrm{ad}_{sX}}\,ds\,X' = e^X\frac{1 - e^{-\mathrm{ad}_X}}{\mathrm{ad}_X}X'.\end{align}
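The limiting integral can be checked directly for matrices. A small sketch, assuming NumPy/SciPy, with an arbitrary path X(t) = A + tB and an arbitrary quadrature order:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n)) * 0.3      # X(0)
B = rng.standard_normal((n, n)) * 0.3      # X'(0)

# Gauss-Legendre quadrature mapped to [0, 1] for the s-integral.
nodes, weights = np.polynomial.legendre.leggauss(20)
s_nodes, s_weights = 0.5 * (nodes + 1.0), 0.5 * weights

integral = sum(ws * expm((1 - s) * A) @ B @ expm(s * A)
               for s, ws in zip(s_nodes, s_weights))

h = 1e-6
finite_diff = (expm(A + h * B) - expm(A - h * B)) / (2 * h)
print(np.max(np.abs(integral - finite_diff)))   # small, limited by the step h
</syntaxhighlight>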


Applications


Local behavior of the exponential map

The inverse function theorem together with the derivative of the exponential map provides information about the local behavior of exp. Any analytic map f between vector spaces (here first considering matrix Lie groups) has an analytic local inverse, so that f is a bijection in an open set around a point x in the domain, provided df_x is invertible. From (1) it follows that this will happen precisely when
:\frac{1 - e^{-\mathrm{ad}_X}}{\mathrm{ad}_X}
is invertible. This, in turn, happens when the eigenvalues of this operator are all nonzero. The eigenvalues of (1 − e^{−ad_X})/ad_X are related to those of ad_X as follows. If g is an analytic function of a complex variable expressed in a power series such that g(U) for a matrix U converges, then the eigenvalues of g(U) will be g(λ_{ij}), where the λ_{ij} are the eigenvalues of U; the double subscript is made clear below. (This is seen by choosing a basis for the underlying vector space such that U is triangular, the eigenvalues being the diagonal elements. Then g(U) is triangular with diagonal elements g(λ_{ij}), so the eigenvalues of g(U) are g(λ_{ij}). See , Lemma 6 in section 1.2.)
In the present case, with g(U) = (1 − e^{−U})/U and U = ad_X, the eigenvalues of (1 − e^{−ad_X})/ad_X are
:\frac{1 - e^{-\lambda_{ij}}}{\lambda_{ij}},
where the λ_{ij} are the eigenvalues of ad_X. Putting (1 − e^{−λ_{ij}})/λ_{ij} = 0 one sees that dexp is invertible precisely when
:\lambda_{ij} \ne 2k\pi i, \quad k = \pm1, \pm2, \ldots.
The eigenvalues of ad_X are, in turn, related to those of X. Let the eigenvalues of X be λ_1, …, λ_n. Fix an ordered basis e_i of the underlying vector space V such that X is lower triangular. Then
:Xe_i = \lambda_i e_i + \cdots,
with the remaining terms multiples of e_m with m > i. Let E_{ij} be the corresponding basis for matrix space, i.e. (E_{ij})_{kl} = δ_{ik}δ_{jl}. Order this basis such that E_{ij} < E_{nm} if i − j < n − m. One checks that the action of ad_X is given by
:\mathrm{ad}_X E_{ij} = (\lambda_i - \lambda_j)E_{ij} + \cdots \equiv \lambda_{ij}E_{ij} + \cdots,
with the remaining terms multiples of E_{mn} > E_{ij}. This means that ad_X is lower triangular with its eigenvalues λ_{ij} = λ_i − λ_j on the diagonal. The conclusion is that dexp_X is invertible, hence exp is a local bi-analytic bijection around X, when the eigenvalues of X satisfy
:\lambda_i - \lambda_j \ne 2k\pi i, \quad k = \pm1, \pm2, \ldots, \quad 1 \le i, j \le n = \dim V.
(Matrices whose eigenvalues λ satisfy |Im λ| < π are, under the exponential, in bijection with matrices whose eigenvalues μ are not on the negative real line or zero; the λ and μ are related by the complex exponential. See Remark 2c section 1.2.)

In particular, in the case of matrix Lie groups, it follows, since dexp_0 is invertible, by the inverse function theorem that exp is a bi-analytic bijection in a neighborhood of 0 in matrix space. Furthermore, exp is a bi-analytic bijection from a neighborhood of 0 in 𝔤 to a neighborhood of the identity e in G. The same conclusion holds for general Lie groups using the manifold version of the inverse function theorem. It also follows from the implicit function theorem that dexp_X itself is invertible for X sufficiently small.
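A numerical illustration, assuming NumPy, of the eigenvalue bookkeeping above: X is an arbitrary test matrix, and ad_X is realized as an n²×n² matrix acting on vectorized matrices, so its eigenvalues can be compared directly with the differences λ_i − λ_j.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n = 4
X = rng.standard_normal((n, n))

# ad_X as an n^2 x n^2 matrix acting on vectorized n x n matrices, Y -> XY - YX.
adX = np.kron(np.eye(n), X) - np.kron(X.T, np.eye(n))

lam = np.linalg.eigvals(X)
diffs = (lam[:, None] - lam[None, :]).ravel()    # all differences lambda_i - lambda_j
ad_eigs = np.linalg.eigvals(adX)

# Every difference lambda_i - lambda_j appears as an eigenvalue of ad_X.
print(all(np.min(np.abs(ad_eigs - d)) < 1e-8 for d in diffs))        # True

# Eigenvalues of (1 - e^{-ad_X})/ad_X; dexp_X is invertible iff none of them
# vanish, i.e. iff no lambda_i - lambda_j equals 2*pi*i*k, k = +-1, +-2, ...
with np.errstate(divide='ignore', invalid='ignore'):
    vals = np.where(np.abs(diffs) < 1e-12, 1.0, (1 - np.exp(-diffs)) / diffs)
print(bool(np.all(np.abs(vals) > 1e-12)))                            # True generically
</syntaxhighlight>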


Derivation of a Baker–Campbell–Hausdorff formula

If Z(t) is defined such that
:e^{Z(t)} = e^X e^{tY},
an expression for Z(1) = log(exp X exp Y), the Baker–Campbell–Hausdorff formula, can be derived from the above formula,
:\exp(-Z(t))\frac{d}{dt}\exp(Z(t)) = \frac{1 - e^{-\mathrm{ad}_{Z(t)}}}{\mathrm{ad}_{Z(t)}}Z'(t).
Its left-hand side is easily seen to equal Y: since e^{Z(t)} = e^X e^{tY}, one has (d/dt)e^{Z(t)} = e^X e^{tY}Y = e^{Z(t)}Y. Thus,
:Y = \frac{1 - e^{-\mathrm{ad}_{Z(t)}}}{\mathrm{ad}_{Z(t)}}Z'(t),
and hence, formally,
:Z'(t) = \frac{\mathrm{ad}_{Z(t)}}{1 - e^{-\mathrm{ad}_{Z(t)}}}Y \equiv \psi\left(e^{\mathrm{ad}_{Z(t)}}\right)Y, \qquad \psi(w) = \frac{w\log w}{w - 1} = 1 + \sum_{m=1}^\infty \frac{(-1)^{m+1}}{m(m+1)}(w - 1)^m, \quad \|w - 1\| < 1.
However, using the relationship between Ad and ad given by (3), it is straightforward to further see that
:e^{\mathrm{ad}_{Z(t)}} = \mathrm{Ad}_{e^{Z(t)}} = \mathrm{Ad}_{e^X e^{tY}} = \mathrm{Ad}_{e^X}\mathrm{Ad}_{e^{tY}} = e^{\mathrm{ad}_X}e^{t\,\mathrm{ad}_Y}
and hence
:Z'(t) = \psi\left(e^{\mathrm{ad}_X}e^{t\,\mathrm{ad}_Y}\right)Y.
Putting this into the form of an integral in t from 0 to 1 yields
:Z(1) = \log(\exp X\exp Y) = X + \left(\int_0^1 \psi\left(e^{\mathrm{ad}_X}e^{t\,\mathrm{ad}_Y}\right)dt\right)Y,
an integral formula for Z that is more tractable in practice than the explicit Dynkin series formula, due to the simplicity of the series expansion of ψ. Note that this expression consists of X and Y together with nested commutators thereof with X or Y. A textbook proof along these lines can be found in and .
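A numerical sketch of this integral formula, assuming NumPy/SciPy: ψ is applied through its series in (w − 1), which converges here because the arbitrary test matrices X and Y are taken small, and the lemma (3) is used to apply e^{ad_X}e^{t ad_Y} as conjugation by e^X e^{tY}.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(4)
n = 3
X = rng.standard_normal((n, n)) * 0.05
Y = rng.standard_normal((n, n)) * 0.05

def psi_applied(t, W, terms=40):
    """Apply psi(e^{ad_X} e^{t ad_Y}) to W, using Ad_{e^X e^{tY}} in place of
    e^{ad_X}e^{t ad_Y} and the series psi(w) = 1 + sum (-1)^{m+1}/(m(m+1)) (w-1)^m."""
    G, Ginv = expm(X) @ expm(t * Y), expm(-t * Y) @ expm(-X)
    E = lambda V: G @ V @ Ginv            # the operator e^{ad_X} e^{t ad_Y}
    acc, term = W.copy(), W.copy()
    for m in range(1, terms):
        term = E(term) - term             # (E - id)^m applied to W
        acc = acc + (-1) ** (m + 1) / (m * (m + 1)) * term
    return acc

# Integrate over t in [0, 1] with Gauss-Legendre quadrature.
nodes, weights = np.polynomial.legendre.leggauss(20)
t_nodes, t_weights = 0.5 * (nodes + 1.0), 0.5 * weights
Z = X + sum(wt * psi_applied(t, Y) for t, wt in zip(t_nodes, t_weights))

print(np.max(np.abs(Z - logm(expm(X) @ expm(Y)))))   # should be tiny for small X, Y
</syntaxhighlight>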


Derivation of Dynkin's series formula

Dynkin's formula mentioned above may also be derived analogously, starting from the parametric extension
:e^{Z(t)} = e^{tX}e^{tY},
whence
:e^{-Z(t)}\frac{de^{Z(t)}}{dt} = e^{-t\,\mathrm{ad}_Y}X + Y
(since (d/dt)e^{Z} = Xe^{Z} + e^{Z}Y, so e^{-Z}(d/dt)e^{Z} = e^{-\mathrm{ad}_Z}X + Y = e^{-t\,\mathrm{ad}_Y}X + Y), so that, using the above general formula,
:Z' = \frac{\mathrm{ad}_Z}{1 - e^{-\mathrm{ad}_Z}}\left(e^{-t\,\mathrm{ad}_Y}X + Y\right) = \frac{\mathrm{ad}_Z}{e^{\mathrm{ad}_Z} - 1}\left(X + e^{t\,\mathrm{ad}_X}Y\right).
Since, however,
:\begin{align}\mathrm{ad}_Z &= \log\left(\exp\left(\mathrm{ad}_Z\right)\right) = \log\left(1 + \left(\exp\left(\mathrm{ad}_Z\right) - 1\right)\right)\\ &= \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}\left(\exp(\mathrm{ad}_Z) - 1\right)^n, \qquad \|\mathrm{ad}_Z\| < \log 2,\end{align}
the last step by virtue of the Mercator series expansion, it follows that
:Z' = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\left(e^{\mathrm{ad}_{tX}}e^{\mathrm{ad}_{tY}} - 1\right)^{n-1}\left(X + e^{\mathrm{ad}_{tX}}Y\right), \qquad (4)
and, thus, integrating,
:Z(1) = \int_0^1 dt\,\frac{dZ(t)}{dt} = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}\int_0^1 dt\,\left(e^{\mathrm{ad}_{tX}}e^{\mathrm{ad}_{tY}} - 1\right)^{n-1}\left(X + e^{\mathrm{ad}_{tX}}Y\right).
It is at this point evident that the qualitative statement of the BCH formula holds, namely Z lies in the Lie algebra generated by X and Y and is expressible as a series in repeated brackets (A). For each k, terms for each partition thereof are organized inside the integral. The resulting Dynkin formula is then
:Z = \log(\exp X\exp Y) = \sum_{k=1}^\infty \frac{(-1)^{k-1}}{k}\sum_{\substack{i_r, j_r \ge 0\\ i_r + j_r > 0\\ 1 \le r \le k}} \frac{1}{\sum_{r=1}^k (i_r + j_r)}\,\frac{\mathrm{ad}_X^{i_1}\mathrm{ad}_Y^{j_1}\cdots \mathrm{ad}_X^{i_k}\mathrm{ad}_Y^{j_k - 1}Y}{i_1!j_1!\cdots i_k!j_k!},
with the convention that a term is read as ending in ⋯ad_X^{i_k − 1}X when j_k = 0. For a similar proof with detailed series expansions, see .
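As a quick sanity check of the resulting series, assuming NumPy/SciPy, the sketch below compares log(e^X e^Y) with its lowest-order Dynkin/BCH terms X + Y + [X,Y]/2 + [X,[X,Y]]/12 − [Y,[X,Y]]/12 for small arbitrary test matrices; the residual is of fourth order in the size of X and Y.

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(5)
n = 3
X = rng.standard_normal((n, n)) * 0.05
Y = rng.standard_normal((n, n)) * 0.05

br = lambda A, B: A @ B - B @ A            # the commutator [A, B]

Z_exact = logm(expm(X) @ expm(Y))
Z_series = X + Y + br(X, Y) / 2 + br(X, br(X, Y)) / 12 - br(Y, br(X, Y)) / 12

# The residual is of fourth order in the size of X and Y (tiny here).
print(np.max(np.abs(Z_exact - Z_series)))
</syntaxhighlight>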


Combinatoric details

Change the summation index in (4) to k = n − 1 and expand
:\frac{dZ}{dt} = \sum_{k=0}^\infty \frac{(-1)^k}{k+1}\left(e^{\mathrm{ad}_{tX}}e^{\mathrm{ad}_{tY}} - 1\right)^k\left(X + e^{\mathrm{ad}_{tX}}Y\right)
in a power series. To handle the series expansions simply, consider first Z = log(e^X e^Y). The log-series and the exp-series are given by
:\log(A) = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k}(A - 1)^k, \quad\text{and}\quad e^X = \sum_{k=0}^\infty \frac{X^k}{k!}
respectively. Combining these one obtains
:\log\left(e^X e^Y\right) = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k}\left(e^X e^Y - 1\right)^k = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k}\left(\sum_{\substack{i, j \ge 0\\ i + j > 0}} \frac{X^i Y^j}{i!j!}\right)^k.
This becomes
:Z = \log\left(e^X e^Y\right) = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k}\sum_{s \in S_k} \frac{X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k}}{i_1!j_1!\cdots i_k!j_k!}, \qquad (5)
where S_k is the set of all sequences s = (i_1, j_1, …, i_k, j_k) of length 2k subject to the conditions i_r, j_r ≥ 0, i_r + j_r > 0, 1 ≤ r ≤ k.

Now apply the same expansion, with e^{ad_{tX}}e^{ad_{tY}} in place of e^X e^Y, to the right-hand side of the expression for dZ/dt above. This gives
:\begin{align}\frac{dZ}{dt} = \sum_{k=0}^\infty \frac{(-1)^k}{k+1}\sum_{s \in S_k}\Bigg(&t^{i_1 + j_1 + \cdots + i_k + j_k}\,\frac{\mathrm{ad}_X^{i_1}\mathrm{ad}_Y^{j_1}\cdots \mathrm{ad}_X^{i_k}\mathrm{ad}_Y^{j_k}}{i_1!j_1!\cdots i_k!j_k!}X\\ + \sum_{i_{k+1} \ge 0}&t^{i_1 + j_1 + \cdots + i_k + j_k + i_{k+1}}\,\frac{\mathrm{ad}_X^{i_1}\mathrm{ad}_Y^{j_1}\cdots \mathrm{ad}_X^{i_k}\mathrm{ad}_Y^{j_k}\mathrm{ad}_X^{i_{k+1}}}{i_1!j_1!\cdots i_k!j_k!\,i_{k+1}!}Y\Bigg), \qquad i_r, j_r \ge 0,\ i_r + j_r > 0,\ 1 \le r \le k,\end{align}
or, with the switch of notation of An explicit Baker–Campbell–Hausdorff formula, the same expression with each product of ad's applied to X or Y written as a nested commutator [X^{i_1}Y^{j_1}⋯]. Note that the summation index for the rightmost e^{ad_{tX}} in the second term is denoted i_{k+1}, but it is not an element of a sequence s ∈ S_k. Now integrate Z = Z(1) = ∫_0^1 (dZ/dt) dt, using Z(0) = 0 and ∫_0^1 t^m dt = 1/(m+1),
:\begin{align}Z = \sum_{k=0}^\infty \frac{(-1)^k}{k+1}\sum_{s \in S_k}\Bigg(&\frac{1}{i_1 + j_1 + \cdots + i_k + j_k + 1}\,\frac{\mathrm{ad}_X^{i_1}\mathrm{ad}_Y^{j_1}\cdots \mathrm{ad}_X^{i_k}\mathrm{ad}_Y^{j_k}}{i_1!j_1!\cdots i_k!j_k!}X\\ + \sum_{i_{k+1} \ge 0}&\frac{1}{i_1 + j_1 + \cdots + i_k + j_k + i_{k+1} + 1}\,\frac{\mathrm{ad}_X^{i_1}\mathrm{ad}_Y^{j_1}\cdots \mathrm{ad}_X^{i_k}\mathrm{ad}_Y^{j_k}\mathrm{ad}_X^{i_{k+1}}}{i_1!j_1!\cdots i_k!j_k!\,i_{k+1}!}Y\Bigg).\end{align}
This amounts to
:Z = \sum_{k=0}^\infty \frac{(-1)^k}{k+1}\sum_{s \in S_{k+1}} \frac{1}{i_1 + j_1 + \cdots + i_{k+1} + j_{k+1}}\,\frac{\left[X^{i_1}Y^{j_1}\cdots X^{i_{k+1}}Y^{j_{k+1}}\right]}{i_1!j_1!\cdots i_{k+1}!j_{k+1}!}, \qquad (6)
where [X^{i_1}Y^{j_1}⋯X^{i_{k+1}}Y^{j_{k+1}}] denotes the nested commutator ad_X^{i_1}ad_Y^{j_1}⋯ad_X^{i_{k+1}}ad_Y^{j_{k+1}−1}Y (read as ⋯ad_X^{i_{k+1}−1}X when j_{k+1} = 0), and now i_r, j_r ≥ 0, i_r + j_r > 0, 1 ≤ r ≤ k + 1, using the simple observation that ad_T T = [T, T] = 0 for all T. That is, in (6), a term vanishes unless j_{k+1} equals 0 or 1, these two cases corresponding to the first and second terms in the equation before it. In case j_{k+1} = 0, i_{k+1} must equal 1, else the term vanishes for the same reason (ad_X X = 0 means i_{k+1} > 1 is not allowed). Finally, shift the index, k + 1 → k. This is Dynkin's formula, as displayed in the previous section. The striking similarity with (5) is not accidental: it reflects the Dynkin–Specht–Wever map, underpinning the original, different, derivation of the formula. Namely, ''if''
:X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k}
is expressible as a bracket series, then necessarily
:X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k} = \frac{\left[X^{i_1}Y^{j_1}\cdots X^{i_k}Y^{j_k}\right]}{i_1 + j_1 + \cdots + i_k + j_k}, \qquad (B)
see Chapter 1.12.2. Putting observation (A) and theorem (B) together yields a concise proof of the explicit BCH formula.


See also

* Adjoint representation (ad)
* Baker–Campbell–Hausdorff formula
* Exponential map
* Matrix exponential
* Matrix logarithm
* Magnus expansion



References

* Veltman, M., 't Hooft, G. & de Wit, B. (2007). "Lie Groups in Physics", online lectures.


External links

* Schmid, Wilfried (1982), "Poincaré and Lie groups", Bull. Amer. Math. Soc. 6 (2): 175–186, doi:10.1090/s0273-0979-1982-14972-2, https://www.ams.org/journals/bull/1982-06-02/S0273-0979-1982-14972-2/S0273-0979-1982-14972-2.pdf