In mathematics, a logarithm of a matrix is another matrix such that the matrix exponential of the latter matrix equals the original matrix. It is thus a generalization of the scalar logarithm and in some sense an inverse function of the matrix exponential. Not all matrices have a logarithm, and those matrices that do have a logarithm may have more than one. The study of logarithms of matrices leads to Lie theory, since when a matrix has a logarithm it is an element of a Lie group, and the logarithm is the corresponding element of the vector space of the Lie algebra.


Definition

The exponential of a matrix ''A'' is defined by

: e^A \equiv \sum_{k=0}^{\infty} \frac{A^k}{k!}.

Given a matrix ''B'', another matrix ''A'' is said to be a matrix logarithm of ''B'' if e^A = B.

Because the exponential function is not bijective for complex numbers (e.g. e^{\pi i} = e^{3 \pi i} = -1), numbers can have multiple complex logarithms, and as a consequence some matrices may have more than one logarithm, as explained below. If the matrix logarithm of ''B'' exists and is unique, then it is written as \log B, in which case e^{\log B} = B.


Power series expression

If ''B'' is sufficiently close to the identity matrix, then a logarithm of ''B'' may be computed by means of the power series

:\log(B) = \log(I + (B - I)) = \sum_{k=1}^{\infty} \frac{(-1)^{k+1}}{k} (B - I)^k = (B - I) - \frac{(B - I)^2}{2} + \frac{(B - I)^3}{3} - \cdots,

which can be rewritten as

:\log(B) = -\sum_{k=1}^{\infty} \frac{(I - B)^k}{k} = -(I - B) - \frac{(I - B)^2}{2} - \frac{(I - B)^3}{3} - \cdots.

Specifically, if \left\| I - B \right\| < 1, then the preceding series converges and e^{\log(B)} = B.
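As an illustrative sketch (assuming Python with NumPy; the helper names `logm_series` and `expm_series` are ours, not a standard API), the truncated series computes a logarithm of a matrix near the identity, which can be checked by exponentiating it back:

```python
import numpy as np

def logm_series(B, terms=60):
    """Matrix logarithm via the series above; valid when ||I - B|| < 1."""
    n = B.shape[0]
    X = np.eye(n) - B
    assert np.linalg.norm(X) < 1, "series only converges for ||I - B|| < 1"
    result = np.zeros_like(B, dtype=float)
    P = np.eye(n)
    for k in range(1, terms + 1):
        P = P @ X           # P = (I - B)^k
        result -= P / k     # log(B) = -sum_k (I - B)^k / k
    return result

def expm_series(A, terms=40):
    """Matrix exponential via its defining power series, for verification."""
    result = np.eye(A.shape[0])
    P = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        P = P @ A / k       # P = A^k / k!
        result = result + P
    return result

B = np.array([[1.1, 0.05],
              [0.02, 0.9]])
L = logm_series(B)
print(np.allclose(expm_series(L), B))  # True: exp(log B) recovers B
```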


Example: Logarithm of rotations in the plane

The rotations in the plane give a simple example. A rotation of angle ''α'' around the origin is represented by the 2×2 matrix

: A = \begin{pmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{pmatrix}.

For any integer ''n'', the matrix

: B_n = (\alpha+2\pi n) \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}

is a logarithm of ''A'', that is, e^{B_n} = A. To verify this, expand e^{B_n} = \sum_{k=0}^{\infty} \frac{B_n^k}{k!} using the powers

: (B_n)^0 = I_2, \quad (B_n)^1 = (\alpha+2\pi n)\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad (B_n)^2 = (\alpha+2\pi n)^2 \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}, \quad (B_n)^3 = (\alpha+2\pi n)^3 \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}, \quad (B_n)^4 = (\alpha+2\pi n)^4 I_2, \ \ldots

Grouping even and odd powers yields the series for cosine and sine:

: \sum_{k=0}^{\infty} \frac{B_n^k}{k!} = \begin{pmatrix} \sum_{m=0}^{\infty} \frac{(-1)^m (\alpha+2\pi n)^{2m}}{(2m)!} & -\sum_{m=0}^{\infty} \frac{(-1)^m (\alpha+2\pi n)^{2m+1}}{(2m+1)!} \\ \sum_{m=0}^{\infty} \frac{(-1)^m (\alpha+2\pi n)^{2m+1}}{(2m+1)!} & \sum_{m=0}^{\infty} \frac{(-1)^m (\alpha+2\pi n)^{2m}}{(2m)!} \end{pmatrix} = \begin{pmatrix} \cos(\alpha) & -\sin(\alpha) \\ \sin(\alpha) & \cos(\alpha) \end{pmatrix} = A.

Thus, the matrix ''A'' has infinitely many logarithms. This corresponds to the fact that the rotation angle is only determined up to multiples of 2''π''.

In the language of Lie theory, the rotation matrices ''A'' are elements of the Lie group SO(2). The corresponding logarithms ''B'' are elements of the Lie algebra so(2), which consists of all skew-symmetric matrices. The matrix

: \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}

is a generator of the Lie algebra so(2).
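The infinite family of logarithms above can be checked numerically. A small sketch (Python with NumPy assumed; the series-based `expm_series` helper stands in for a library matrix exponential):

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via its defining power series (adequate here)."""
    result = np.eye(A.shape[0])
    P = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        P = P @ A / k       # P = A^k / k!
        result = result + P
    return result

alpha = 0.7
A = np.array([[np.cos(alpha), -np.sin(alpha)],
              [np.sin(alpha),  np.cos(alpha)]])
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])                 # generator of so(2)

# every B_n = (alpha + 2*pi*n) J exponentiates back to the same rotation A
checks = [np.allclose(expm_series((alpha + 2 * np.pi * n) * J), A)
          for n in (-1, 0, 1)]
print(checks)  # [True, True, True]
```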


Existence

The question of whether a matrix has a logarithm has the easiest answer when considered in the complex setting. A complex matrix has a logarithm if and only if it is invertible. The logarithm is not unique, but if a matrix has no negative real eigenvalues, then there is a unique logarithm whose eigenvalues all lie in the strip \{ z \in \mathbb{C} : -\pi < \operatorname{Im}(z) < \pi \}. This logarithm is known as the ''principal logarithm''.

The answer is more involved in the real setting. A real matrix has a real logarithm if and only if it is invertible and each Jordan block belonging to a negative eigenvalue occurs an even number of times. If an invertible real matrix does not satisfy the condition with the Jordan blocks, then it has only non-real logarithms. This can already be seen in the scalar case: no branch of the logarithm can be real at −1. The existence of real matrix logarithms of real 2×2 matrices is considered in a later section.


Properties

If ''A'' and ''B'' are both positive-definite matrices, then

: \operatorname{tr} \log(AB) = \operatorname{tr} \log(A) + \operatorname{tr} \log(B).

Suppose that ''A'' and ''B'' commute, meaning that ''AB'' = ''BA''. Then

: \log(AB) = \log(A) + \log(B)

if and only if \arg(\mu_j) + \arg(\nu_j) \in (- \pi, \pi], where \mu_j is an eigenvalue of ''A'' and \nu_j is the corresponding eigenvalue of ''B''. In particular, \log(AB) = \log(A) + \log(B) when ''A'' and ''B'' commute and are both positive-definite. Setting ''B'' = ''A''^{-1} in this equation yields

: \log(A^{-1}) = -\log(A).

Similarly, for non-commuting ''A'' and ''B'', one can show that

: \log(A+tB) = \log(A) + t\int_0^\infty dz ~ \frac{1}{A+z} \, B \, \frac{1}{A+z} + O(t^2).

More generally, a series expansion of \log(A+tB) in powers of t can be obtained using the integral definition of the logarithm

: \log(X + \lambda I) - \log(X) = \int_0^\lambda dz ~ \frac{1}{X+z},

applied to both X=A and X=A+tB in the limit \lambda \rightarrow \infty.
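The trace identity for positive-definite matrices can be spot-checked numerically. A sketch assuming Python with NumPy; `logm_diag` is our illustrative eigendecomposition-based logarithm (applicable here because all matrices involved are diagonalizable with positive eigenvalues):

```python
import numpy as np

def logm_diag(M):
    """Matrix logarithm via eigendecomposition (assumes M is diagonalizable)."""
    w, V = np.linalg.eig(M)
    return V @ np.diag(np.log(w.astype(complex))) @ np.linalg.inv(V)

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))
A = X @ X.T + 3 * np.eye(3)   # positive-definite
B = Y @ Y.T + 3 * np.eye(3)   # positive-definite

lhs = np.trace(logm_diag(A @ B))
rhs = np.trace(logm_diag(A)) + np.trace(logm_diag(B))
print(np.isclose(lhs, rhs))  # True: tr log(AB) = tr log(A) + tr log(B)
```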


Further example: Logarithm of rotations in 3D space

A rotation ''R'' ∈ SO(3) in \mathbb{R}^3 is given by a 3×3 orthogonal matrix. The logarithm of such a rotation matrix ''R'' can be readily computed from the antisymmetric part of Rodrigues' rotation formula, explicitly in axis-angle form. It yields the logarithm of minimal Frobenius norm, but fails when ''R'' has eigenvalues equal to −1, where this is not unique.

Further note that, given rotation matrices ''A'' and ''B'',

: d_g(A,B) := \| \log(A^\text{T} B) \|_F

is the geodesic distance on the 3D manifold of rotation matrices.
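A minimal sketch of this computation (Python with NumPy assumed; `log_so3` is our name for the axis-angle extraction, and it assumes a rotation angle strictly between 0 and π, per the caveat above):

```python
import numpy as np

def log_so3(R):
    """Principal logarithm of R in SO(3) from its antisymmetric part.

    Valid for rotation angles 0 < theta < pi; it fails near theta = pi,
    where R has eigenvalue -1 and the logarithm is not unique."""
    theta = np.arccos((np.trace(R) - 1) / 2)
    return theta / (2 * np.sin(theta)) * (R - R.T)

def expm_series(A, terms=40):
    """Matrix exponential via its defining power series, for verification."""
    result = np.eye(A.shape[0])
    P = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        P = P @ A / k
        result = result + P
    return result

# rotation by 1.2 rad about the z-axis
c, s = np.cos(1.2), np.sin(1.2)
R = np.array([[c, -s, 0.0],
              [s,  c, 0.0],
              [0.0, 0.0, 1.0]])
L = log_so3(R)
print(np.allclose(L, -L.T))            # True: skew-symmetric, i.e. in so(3)
print(np.allclose(expm_series(L), R))  # True: exp(log R) = R
```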


Calculating the logarithm of a diagonalizable matrix

A method for finding log ''A'' for a diagonalizable matrix ''A'' is the following:

: Find the matrix ''V'' of eigenvectors of ''A'' (each column of ''V'' is an eigenvector of ''A'').
: Find the inverse ''V''^{-1} of ''V''.
: Let
:: A' = V^{-1} A V.
: Then A' will be a diagonal matrix whose diagonal elements are eigenvalues of ''A''.
: Replace each diagonal element of A' by its (natural) logarithm in order to obtain \log A'.
: Then
:: \log A = V ( \log A' ) V^{-1}.

That the logarithm of ''A'' might be a complex matrix even if ''A'' is real follows from the fact that a matrix with real and positive entries might nevertheless have negative or even complex eigenvalues (this is true for example for rotation matrices). The non-uniqueness of the logarithm of a matrix follows from the non-uniqueness of the logarithm of a complex number.
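The steps above translate directly into code. A sketch assuming Python with NumPy (`logm_diagonalizable` and `expm_series` are illustrative names, not library functions):

```python
import numpy as np

def logm_diagonalizable(A):
    """log A for an invertible diagonalizable A, following the steps above."""
    w, V = np.linalg.eig(A)                 # columns of V are eigenvectors of A
    V_inv = np.linalg.inv(V)
    log_A_prime = np.diag(np.log(w.astype(complex)))  # log of each eigenvalue
    return V @ log_A_prime @ V_inv          # log A = V (log A') V^-1

def expm_series(A, terms=40):
    """Matrix exponential via its defining power series, for verification."""
    result = np.eye(A.shape[0], dtype=complex)
    P = np.eye(A.shape[0], dtype=complex)
    for k in range(1, terms + 1):
        P = P @ A / k
        result = result + P
    return result

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                  # eigenvalues 1 and 3
L = logm_diagonalizable(A)
print(np.allclose(expm_series(L), A))  # True: exp(log A) = A
```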


Logarithm of a non-diagonalizable matrix

The algorithm illustrated above does not work for non-diagonalizable matrices, such as

: \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.

For such matrices one needs to find its Jordan decomposition and, rather than computing the logarithm of diagonal entries as above, one would calculate the logarithm of the Jordan blocks.

The latter is accomplished by noticing that one can write a Jordan block as

: B = \begin{pmatrix} \lambda & 1 & 0 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & 0 & \cdots & 0 \\ 0 & 0 & \lambda & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & 0 & 0 & \lambda \end{pmatrix} = \lambda \begin{pmatrix} 1 & \lambda^{-1} & 0 & 0 & \cdots & 0 \\ 0 & 1 & \lambda^{-1} & 0 & \cdots & 0 \\ 0 & 0 & 1 & \lambda^{-1} & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & 0 & 0 & 1 & \lambda^{-1} \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} = \lambda(I+K),

where ''K'' is a matrix with zeros on and under the main diagonal. (The number λ is nonzero by the assumption that the matrix whose logarithm one attempts to take is invertible.)

Then, by the Mercator series

: \log(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots

one gets

: \log B = \log\big(\lambda(I+K)\big) = \log(\lambda I) + \log(I+K) = (\log \lambda) I + K - \frac{K^2}{2} + \frac{K^3}{3} - \frac{K^4}{4} + \cdots

This series has a finite number of terms (''K''^''m'' is zero if ''m'' is equal to or greater than the dimension of ''K''), and so its sum is well-defined.

Example. Using this approach, one finds

: \log \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix},

which can be verified by plugging the right-hand side into the matrix exponential:

: \exp \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = I + \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + \frac{1}{2}\underbrace{\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}^2}_{=0} + \cdots = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}.
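A sketch of the Jordan-block computation (Python with NumPy assumed; `log_jordan_block` is an illustrative helper, and the loop stops early because K is nilpotent, so the series is finite):

```python
import numpy as np

def log_jordan_block(lam, m):
    """Logarithm of the m x m Jordan block with eigenvalue lam != 0."""
    K = np.diag(np.full(m - 1, 1.0 / lam), k=1)   # B = lam * (I + K)
    result = np.log(lam) * np.eye(m)              # (log lam) I
    P = np.eye(m)
    for k in range(1, m):                         # K^m = 0: finitely many terms
        P = P @ K
        result += (-1) ** (k + 1) * P / k         # + K - K^2/2 + K^3/3 - ...
    return result

L = log_jordan_block(1.0, 2)
print(L)  # [[0. 1.]
          #  [0. 0.]]  matching the worked example above
```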


A functional analysis perspective

A square matrix represents a linear operator on the Euclidean space '''R'''^''n'', where ''n'' is the dimension of the matrix. Since such a space is finite-dimensional, this operator is actually bounded.

Using the tools of holomorphic functional calculus, given a holomorphic function ''f'' defined on an open set in the complex plane and a bounded linear operator ''T'', one can calculate ''f''(''T'') as long as ''f'' is defined on the spectrum of ''T''.

The function ''f''(''z'') = log ''z'' can be defined on any simply connected open set in the complex plane not containing the origin, and it is holomorphic on such a domain. This implies that one can define ln ''T'' as long as the spectrum of ''T'' does not contain the origin and there is a path going from the origin to infinity not crossing the spectrum of ''T'' (e.g., if the spectrum of ''T'' is a circle with the origin inside of it, it is impossible to define ln ''T'').

The spectrum of a linear operator on '''R'''^''n'' is the set of eigenvalues of its matrix, and so is a finite set. As long as the origin is not in the spectrum (the matrix is invertible), the path condition from the previous paragraph is satisfied, and ln ''T'' is well-defined. The non-uniqueness of the matrix logarithm follows from the fact that one can choose more than one branch of the logarithm which is defined on the set of eigenvalues of a matrix.


A Lie group theory perspective

In the theory of Lie groups, there is an exponential map from a Lie algebra \mathfrak{g} to the corresponding Lie group ''G'',

: \exp : \mathfrak{g} \rightarrow G.

For matrix Lie groups, the elements of \mathfrak{g} and ''G'' are square matrices and the exponential map is given by the matrix exponential. The inverse map \log = \exp^{-1} is multivalued and coincides with the matrix logarithm discussed here. The logarithm maps from the Lie group ''G'' into the Lie algebra \mathfrak{g}.

Note that the exponential map is a local diffeomorphism between a neighborhood ''U'' of the zero matrix \underline{0} \in \mathfrak{g} and a neighborhood ''V'' of the identity matrix \underline{1} \in G. Thus the (matrix) logarithm is well-defined as a map

: \log : G \supset V \rightarrow U \subset \mathfrak{g}.

An important corollary of Jacobi's formula then is

: \log(\det(A)) = \operatorname{tr}(\log A)~.


Constraints in the 2 × 2 case

If a 2 × 2 real matrix has a negative determinant, it has no real logarithm. Note first that any 2 × 2 real matrix can be considered one of the three types of the complex number ''z'' = ''x'' + ''y''ε, where ε² ∈ { −1, 0, +1 }. This ''z'' is a point on a complex subplane of the ring of matrices.

The case where the determinant is negative only arises in a plane with ε² = +1, that is, a split-complex number plane. Only one quarter of this plane is the image of the exponential map, so the logarithm is only defined on that quarter (quadrant). The other three quadrants are images of this one under the Klein four-group generated by ε and −1.

For example, let ''a'' = log 2; then cosh ''a'' = 5/4 and sinh ''a'' = 3/4. For matrices, this means that

: A = \exp \begin{pmatrix} 0 & a \\ a & 0 \end{pmatrix} = \begin{pmatrix} \cosh a & \sinh a \\ \sinh a & \cosh a \end{pmatrix} = \begin{pmatrix} 1.25 & 0.75 \\ 0.75 & 1.25 \end{pmatrix}.

So this last matrix has logarithm

: \log A = \begin{pmatrix} 0 & \log 2 \\ \log 2 & 0 \end{pmatrix}.

These matrices, however, do not have a logarithm:

: \begin{pmatrix} 3/4 & 5/4 \\ 5/4 & 3/4 \end{pmatrix}, \ \begin{pmatrix} -3/4 & -5/4 \\ -5/4 & -3/4 \end{pmatrix}, \ \begin{pmatrix} -5/4 & -3/4 \\ -3/4 & -5/4 \end{pmatrix}.

They represent the three other conjugates by the four-group of the matrix above that does have a logarithm. A non-singular 2 × 2 matrix does not necessarily have a logarithm, but it is conjugate by the four-group to a matrix that does have a logarithm.

It also follows that, e.g., a square root of this matrix ''A'' is obtainable directly from exponentiating (log ''A'')/2,

: \sqrt{A} = \begin{pmatrix} \cosh((\log 2)/2) & \sinh((\log 2)/2) \\ \sinh((\log 2)/2) & \cosh((\log 2)/2) \end{pmatrix} \approx \begin{pmatrix} 1.06 & 0.35 \\ 0.35 & 1.06 \end{pmatrix}~.

For a richer example, start with a Pythagorean triple (''p'', ''q'', ''r'') and let ''a'' = log((''p'' + ''r'')/''q''). Then

: e^a = \frac{p+r}{q} = \cosh a + \sinh a.

Now

: \exp \begin{pmatrix} 0 & a \\ a & 0 \end{pmatrix} = \begin{pmatrix} r/q & p/q \\ p/q & r/q \end{pmatrix}.

Thus

: \tfrac{1}{q}\begin{pmatrix} r & p \\ p & r \end{pmatrix}

has the logarithm matrix

: \begin{pmatrix} 0 & a \\ a & 0 \end{pmatrix}, \quad where a = \log \frac{p+r}{q}.
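The Pythagorean-triple construction can be checked with ordinary scalar hyperbolic functions, since exp of the matrix [[0, a], [a, 0]] is [[cosh a, sinh a], [sinh a, cosh a]] (a sketch in Python; only the standard library is used):

```python
import math

p, q, r = 3, 4, 5                          # Pythagorean triple: p^2 + q^2 = r^2
a = math.log((p + r) / q)                  # here a = log 2

# exp([[0, a], [a, 0]]) = [[cosh a, sinh a], [sinh a, cosh a]],
# which should equal (1/q) [[r, p], [p, r]]
print(math.isclose(math.cosh(a), r / q))   # True: cosh a = 5/4
print(math.isclose(math.sinh(a), p / q))   # True: sinh a = 3/4
```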


See also

* Matrix function
* Square root of a matrix
* Matrix exponential
* Baker–Campbell–Hausdorff formula
* Derivative of the exponential map

