In mathematics, the polar decomposition of a square real or complex matrix A is a factorization of the form A = UP, where U is an orthogonal matrix and P is a positive semi-definite symmetric matrix (U is a unitary matrix and P is a positive semi-definite Hermitian matrix in the complex case), both square and of the same size. Intuitively, if a real n\times n matrix A is interpreted as a linear transformation of n-dimensional space \mathbb{R}^n, the polar decomposition separates it into a rotation or reflection U of \mathbb{R}^n, and a scaling of the space along a set of n orthogonal axes.

The polar decomposition of a square matrix A always exists. If A is invertible, the decomposition is unique, and the factor P will be positive-definite. In that case, A can be written uniquely in the form A = U e^X, where U is unitary and X is the unique self-adjoint logarithm of the matrix P. This decomposition is useful in computing the fundamental group of (matrix) Lie groups.

The polar decomposition can also be defined as A = P'U, where P' = UPU^{-1} is a symmetric positive-definite matrix with the same eigenvalues as P but different eigenvectors.

The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number z as z = ur, where r is its absolute value (a non-negative real number), and u is a complex number with unit norm (an element of the circle group).

The definition A = UP may be extended to rectangular matrices A\in\mathbb{C}^{m\times n} by requiring U\in\mathbb{C}^{m\times n} to be a semi-unitary matrix and P\in\mathbb{C}^{n\times n} to be a positive-semidefinite Hermitian matrix. The decomposition always exists and P is always unique. The matrix U is unique if and only if A has full rank.
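As a concrete illustration, the two factors of a small square matrix can be computed from its SVD; the sketch below uses NumPy and an arbitrary example matrix (SciPy's scipy.linalg.polar computes the same factors directly):

```python
import numpy as np

def polar(A):
    """Right polar decomposition A = U P, computed from the SVD (a sketch)."""
    W, s, Vh = np.linalg.svd(A)
    U = W @ Vh                          # unitary (orthogonal, if A is real) factor
    P = Vh.conj().T @ np.diag(s) @ Vh   # positive semi-definite factor
    return U, P

A = np.array([[1.0, 1.0], [0.0, 1.0]])  # arbitrary example: a shear
U, P = polar(A)
assert np.allclose(U @ P, A)                       # A = U P
assert np.allclose(U @ U.conj().T, np.eye(2))      # U is unitary
assert np.allclose(P, P.conj().T)                  # P is symmetric/Hermitian
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)     # P is positive semi-definite
```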


Intuitive interpretation

A real square m\times m matrix A can be interpreted as the linear transformation of \mathbb{R}^m that takes a column vector x to Ax. Then, in the polar decomposition A = RP, the factor R is an m\times m real orthogonal matrix. The polar decomposition can then be seen as expressing the linear transformation defined by A as a scaling of the space \mathbb{R}^m along each eigenvector e_i of P by a scale factor \sigma_i (the action of P), followed by a single rotation or reflection of \mathbb{R}^m (the action of R). Alternatively, the decomposition A = PR expresses the transformation defined by A as a rotation or reflection (R) followed by a scaling (P) along certain orthogonal directions. The scale factors are the same, but the directions are different.


Properties

The polar decomposition of the complex conjugate of A is given by \overline{A} = \overline{U}\,\overline{P}. Note that

\det A = \det U \det P = e^{i\theta} r

gives the corresponding polar decomposition of the determinant of A, since \det U = e^{i\theta} and \det P = r = \left|\det A\right|. In particular, if A has determinant 1 then both U and P have determinant 1.

The positive-semidefinite matrix P is always unique, even if A is singular, and is denoted as

P = \left(A^* A\right)^{\frac{1}{2}},

where A^* denotes the conjugate transpose of A. The uniqueness of P ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that A^* A is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root. If A is invertible, then P is positive-definite, thus also invertible, and the matrix U is uniquely determined by

U = AP^{-1}.
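The formulas P = (A^* A)^{1/2} and U = AP^{-1} can be checked numerically. Below is a NumPy sketch for an arbitrary invertible complex example matrix, with the unique positive square root taken via the spectral decomposition:

```python
import numpy as np

def psd_sqrt(M):
    """Unique positive semi-definite square root of a Hermitian PSD matrix."""
    w, V = np.linalg.eigh(M)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T  # V diag(sqrt(w)) V*

A = np.array([[2.0, 1.0], [1.0, 3.0j]])   # arbitrary invertible complex matrix
P = psd_sqrt(A.conj().T @ A)              # P = (A* A)^(1/2)
U = A @ np.linalg.inv(P)                  # U = A P^(-1)
assert np.allclose(U @ P, A)                        # A = U P
assert np.allclose(U.conj().T @ U, np.eye(2))       # U is unitary
assert abs(abs(np.linalg.det(U)) - 1) < 1e-10       # det U = e^(i theta)
```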


Relation to the SVD

In terms of the singular value decomposition (SVD) of A, A = W\Sigma V^*, one has

P = V\Sigma V^*, \qquad U = WV^*,

where U, V, and W are unitary matrices (called orthogonal matrices if the field is the reals \mathbb{R}). This confirms that P is positive-definite and U is unitary. Thus, the existence of the SVD is equivalent to the existence of the polar decomposition.

One can also decompose A in the form

A = P'U.

Here U is the same as before and P' is given by

P' = UPU^{-1} = \left(AA^*\right)^{\frac{1}{2}} = W\Sigma W^*.

This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. The left polar decomposition is also known as the reverse polar decomposition.

The polar decomposition of a square invertible real matrix A is of the form A = |A|R, where |A| = \left(AA^\textsf{T}\right)^{\frac{1}{2}} is a positive-definite matrix and R = |A|^{-1}A is an orthogonal matrix.
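Both forms can be read off a single SVD. The following NumPy sketch (with an arbitrary example matrix) builds the right and left factors and verifies P' = UPU^{-1}:

```python
import numpy as np

A = np.array([[0.0, -3.0], [4.0, 0.0]])   # arbitrary example matrix
W, s, Vh = np.linalg.svd(A)               # A = W Sigma V*
Sigma = np.diag(s)
U  = W @ Vh                               # shared unitary factor U = W V*
P  = Vh.conj().T @ Sigma @ Vh             # right factor: A = U P
Pp = W @ Sigma @ W.conj().T               # left factor:  A = P' U
assert np.allclose(U @ P, A)
assert np.allclose(Pp @ U, A)
assert np.allclose(Pp, U @ P @ np.linalg.inv(U))   # P' = U P U^(-1)
```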


Relation to normal matrices

The matrix A with polar decomposition A = UP is normal if and only if U and P commute: UP = PU, or equivalently, they are simultaneously diagonalizable.


Construction and proofs of existence

The core idea behind the construction of the polar decomposition is similar to that used to compute the singular-value decomposition.


Derivation for normal matrices

If A is normal, then it is unitarily equivalent to a diagonal matrix: A = V\Lambda V^* for some unitary matrix V and some diagonal matrix \Lambda. This makes the derivation of its polar decomposition particularly straightforward, as we can then write

A = V\Phi_\Lambda |\Lambda| V^* = \underbrace{\left(V\Phi_\Lambda V^*\right)}_{U} \underbrace{\left(V|\Lambda|V^*\right)}_{P},

where \Phi_\Lambda is a diagonal matrix containing the ''phases'' of the elements of \Lambda, that is, (\Phi_\Lambda)_{ii} \equiv \Lambda_{ii}/|\Lambda_{ii}| when \Lambda_{ii} \neq 0, and (\Phi_\Lambda)_{ii} = 0 when \Lambda_{ii} = 0. The polar decomposition is thus A = UP, with U and P diagonal in the eigenbasis of A and having eigenvalues equal to the phases and absolute values of those of A, respectively.
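This derivation is easy to check numerically. The sketch below builds a normal matrix from a random unitary V and arbitrary eigenvalues, then assembles U from the phases and P from the absolute values of those eigenvalues:

```python
import numpy as np

# Build an arbitrary normal matrix A = V diag(lam) V* from a random unitary V.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
V, _ = np.linalg.qr(X)                        # QR of a complex matrix: V is unitary
lam = np.array([2.0, -1.0, 3.0j])             # arbitrary (nonzero) eigenvalues
A = V @ np.diag(lam) @ V.conj().T             # A is normal by construction

phases = lam / np.abs(lam)                    # entries of Phi_Lambda
U = V @ np.diag(phases) @ V.conj().T          # eigenvalues: phases of lam
P = V @ np.diag(np.abs(lam)) @ V.conj().T     # eigenvalues: |lam|
assert np.allclose(U @ P, A)                  # A = U P
assert np.allclose(U @ P, P @ U)              # U and P commute, as A is normal
```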


Derivation for invertible matrices

From the singular-value decomposition, it can be shown that a matrix A is invertible if and only if A^* A (equivalently, AA^*) is. Moreover, this is true if and only if the eigenvalues of A^* A are all nonzero. (Note how this implies, by the positivity of A^* A, that the eigenvalues are all real and strictly positive.) In this case, the polar decomposition is directly obtained by writing

A = A\left(A^* A\right)^{-\frac{1}{2}}\left(A^* A\right)^{\frac{1}{2}},

and observing that A\left(A^* A\right)^{-\frac{1}{2}} is unitary. To see this, we can exploit the spectral decomposition of A^* A to write A\left(A^* A\right)^{-\frac{1}{2}} = AVD^{-\frac{1}{2}}V^*. In this expression, V^* is unitary because V is. To show that AVD^{-\frac{1}{2}} is also unitary, we can use the SVD to write A = WD^{\frac{1}{2}}V^*, so that

AVD^{-\frac{1}{2}} = WD^{\frac{1}{2}}V^* VD^{-\frac{1}{2}} = W,

where again W is unitary by construction.

Yet another way to directly show the unitarity of A\left(A^* A\right)^{-\frac{1}{2}} is to note that, writing the SVD of A in terms of rank-1 matrices as A = \sum_k s_k v_k w_k^*, where s_k are the singular values of A, we have

A\left(A^* A\right)^{-\frac{1}{2}} = \left(\sum_j s_j v_j w_j^*\right)\left(\sum_k |s_k|^{-1} w_k w_k^*\right) = \sum_k \frac{s_k}{|s_k|} v_k w_k^*,

which directly implies the unitarity of A\left(A^* A\right)^{-\frac{1}{2}} because a matrix is unitary if and only if its singular values have unit absolute value.

Note how, from the above construction, it follows that ''the unitary matrix in the polar decomposition of an invertible matrix is uniquely defined''.
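The key step, namely that A(A^* A)^{-1/2} is unitary when A is invertible, can be verified directly; a NumPy sketch with an arbitrary invertible example matrix:

```python
import numpy as np

def inv_sqrt_pd(M):
    """(Hermitian positive-definite M)^(-1/2) via the spectral decomposition."""
    w, V = np.linalg.eigh(M)
    return (V / np.sqrt(w)) @ V.conj().T      # V diag(w^(-1/2)) V*

A = np.array([[1.0, 2.0], [3.0, 4.0]])        # arbitrary invertible matrix
U = A @ inv_sqrt_pd(A.conj().T @ A)           # U = A (A* A)^(-1/2)
P = inv_sqrt_pd(A.conj().T @ A) @ A.conj().T @ A   # P = (A* A)^(1/2)
assert np.allclose(U.conj().T @ U, np.eye(2))      # U is unitary
assert np.allclose(U @ P, A)                       # and A = U P
```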


General derivation

The SVD of a square matrix A reads A = WD^{\frac{1}{2}}V^*, with W, V unitary matrices, and D a diagonal, positive semi-definite matrix. By simply inserting an additional pair of Ws or Vs, we obtain the two forms of the polar decomposition of A:

A = WD^{\frac{1}{2}}V^* = \underbrace{\left(WD^{\frac{1}{2}}W^*\right)}_{P} \underbrace{\left(WV^*\right)}_{U} = \underbrace{\left(WV^*\right)}_{U} \underbrace{\left(VD^{\frac{1}{2}}V^*\right)}_{P'}.

More generally, if A is some rectangular n\times m matrix, its SVD can be written as A = WD^{\frac{1}{2}}V^*, where now W and V are isometries with dimensions n\times r and m\times r, respectively, where r \equiv \operatorname{rank}(A), and D is again a diagonal, positive semi-definite square matrix with dimensions r\times r. We can now apply the same reasoning used in the above equation to write A = PU = UP', but now U \equiv WV^* is not in general unitary. Nonetheless, U has the same support and range as A, and it satisfies U^* U = VV^* and UU^* = WW^*. This makes U an isometry when its action is restricted to the support of A; that is, U is a partial isometry.

As an explicit example of this more general case, consider the SVD of the following matrix:

A \equiv \begin{pmatrix}1&1\\2&-2\\0&0\end{pmatrix} = \underbrace{\begin{pmatrix}0&1\\1&0\\0&0\end{pmatrix}}_{W} \underbrace{\begin{pmatrix}\sqrt{8}&0\\0&\sqrt{2}\end{pmatrix}}_{D^{\frac{1}{2}}} \underbrace{\frac{1}{\sqrt{2}}\begin{pmatrix}1&-1\\1&1\end{pmatrix}}_{V^*}.

We then have

WV^* = \frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\\0&0\end{pmatrix},

which is an isometry, but not unitary. On the other hand, if we consider the decomposition of

A \equiv \begin{pmatrix}1&0&0\\0&2&0\end{pmatrix} = \begin{pmatrix}1&0\\0&1\end{pmatrix} \begin{pmatrix}1&0\\0&2\end{pmatrix} \begin{pmatrix}1&0&0\\0&1&0\end{pmatrix},

we find

WV^* = \begin{pmatrix}1&0&0\\0&1&0\end{pmatrix},

which is a partial isometry (but not an isometry).
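For the first example above, the partial-isometry behaviour of U = WV^* can be confirmed numerically with a thin SVD:

```python
import numpy as np

A = np.array([[1.0, 1.0], [2.0, -2.0], [0.0, 0.0]])
W, s, Vh = np.linalg.svd(A, full_matrices=False)   # thin SVD: W is 3x2, Vh is 2x2
U = W @ Vh                                         # U = W V*, a 3x2 matrix
assert np.allclose(U.conj().T @ U, np.eye(2))      # isometry: U* U = I
assert not np.allclose(U @ U.conj().T, np.eye(3))  # U U* is a projection, not I
```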


Bounded operators on Hilbert space

The polar decomposition of any bounded linear operator ''A'' between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator. The polar decomposition for matrices generalizes as follows: if ''A'' is a bounded linear operator then there is a unique factorization of ''A'' as a product ''A'' = ''UP'', where ''U'' is a partial isometry, ''P'' is a non-negative self-adjoint operator, and the initial space of ''U'' is the closure of the range of ''P''.

The operator ''U'' must be weakened to a partial isometry, rather than unitary, because of the following issue. If ''A'' is the one-sided shift on ''l''2(N), then |''A''| = (''A''*''A'')1/2 = ''I''. So if ''A'' = ''U''|''A''|, ''U'' must be ''A'', which is not unitary.

The existence of a polar decomposition is a consequence of Douglas' lemma: if ''A'' and ''B'' are bounded operators on a Hilbert space ''H'' with ''A''*''A'' ≤ ''B''*''B'', then there exists a bounded operator ''C'' with norm at most 1 such that ''A'' = ''CB''. The operator ''C'' can be defined by ''C''(''Bh'') := ''Ah'' for all ''h'' in ''H'', extended by continuity to the closure of ''Ran''(''B''), and by zero on the orthogonal complement, to all of ''H''. The lemma then follows since ''A''*''A'' ≤ ''B''*''B'' implies ''Ker''(''B'') ⊂ ''Ker''(''A''). In particular, if ''A''*''A'' = ''B''*''B'', then ''C'' is a partial isometry, which is unique if ''Ker''(''B'') ⊂ ''Ker''(''C'').

In general, for any bounded operator ''A'',

A^*A = \left(A^*A\right)^{\frac{1}{2}} \left(A^*A\right)^{\frac{1}{2}},

where (''A''*''A'')1/2 is the unique positive square root of ''A''*''A'' given by the usual functional calculus. So by the lemma, we have

A = U\left(A^*A\right)^{\frac{1}{2}}

for some partial isometry ''U'', which is unique if ''Ker''(''A'') ⊂ ''Ker''(''U''). Take ''P'' to be (''A''*''A'')1/2 and one obtains the polar decomposition ''A'' = ''UP''. Notice that an analogous argument can be used to show ''A'' = ''P'U'', where ''P' '' is positive and ''U'' a partial isometry.

When ''H'' is finite-dimensional, ''U'' can be extended to a unitary operator; this is not true in general (see the example above). Alternatively, the polar decomposition can be shown using the operator version of the singular value decomposition.

By a property of the continuous functional calculus, |''A''| is in the C*-algebra generated by ''A''. A similar but weaker statement holds for the partial isometry: ''U'' is in the von Neumann algebra generated by ''A''. If ''A'' is invertible, the polar part ''U'' will be in the C*-algebra as well.


Unbounded operators

If ''A'' is a closed, densely defined unbounded operator between complex Hilbert spaces then it still has a (unique) polar decomposition A = U|A|, where |''A''| is a (possibly unbounded) non-negative self-adjoint operator with the same domain as ''A'', and ''U'' is a partial isometry vanishing on the orthogonal complement of the range ''Ran''(|''A''|).

The proof uses the same lemma as above, which goes through for unbounded operators in general: if ''Dom''(''A''*''A'') = ''Dom''(''B''*''B'') and ''A''*''Ah'' = ''B''*''Bh'' for all ''h'' ∈ ''Dom''(''A''*''A''), then there exists a partial isometry ''U'' such that ''A'' = ''UB''. ''U'' is unique if ''Ran''(''B'')<sup>⊥</sup> ⊂ ''Ker''(''U''). The operator ''A'' being closed and densely defined ensures that the operator ''A''*''A'' is self-adjoint (with dense domain) and therefore allows one to define (''A''*''A'')1/2. Applying the lemma gives the polar decomposition.

If an unbounded operator ''A'' is affiliated to a von Neumann algebra M, and ''A'' = ''UP'' is its polar decomposition, then ''U'' is in M and so is the spectral projection of ''P'', 1''B''(''P''), for any Borel set ''B'' in [0, +∞).


Quaternion polar decomposition

The polar decomposition of quaternions H depends on the unit 2-dimensional sphere \lbrace xi + yj + zk \in H : x^2 + y^2 + z^2 = 1 \rbrace of square roots of minus one. Given any ''r'' on this sphere, and an angle −π < ''a'' ≤ π, the versor e^{ar} = \cos(a) + r\,\sin(a) is on the unit 3-sphere of H. For ''a'' = 0 and ''a'' = π, the versor is 1 or −1 regardless of which ''r'' is selected. The norm ''t'' of a quaternion ''q'' is the Euclidean distance from the origin to ''q''. When a quaternion is not just a real number, then there is a ''unique'' polar decomposition q = t e^{ar}.
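A short pure-Python sketch of this quaternion polar form, representing a quaternion as a tuple (w, x, y, z); the helper name quat_polar is ours, and the angle returned lies in [0, π] since the vector part fixes the sign of r:

```python
import math

def quat_polar(q):
    """Polar form q = t * (cos a + r sin a) for q = (w, x, y, z), a sketch."""
    w, x, y, z = q
    t = math.sqrt(w*w + x*x + y*y + z*z)        # norm of q
    v = math.sqrt(x*x + y*y + z*z)              # length of the vector part
    a = math.atan2(v, w)                        # angle, here 0 <= a <= pi
    r = (x/v, y/v, z/v) if v > 0 else None      # unit square root of -1, if any
    return t, a, r

t, a, r = quat_polar((1.0, 1.0, 0.0, 0.0))      # q = 1 + i
assert abs(t - math.sqrt(2)) < 1e-12            # norm sqrt(2)
assert abs(a - math.pi / 4) < 1e-12             # angle pi/4
assert r == (1.0, 0.0, 0.0)                     # r = i
# Reconstruct q = t*(cos a + r sin a) = 1 + i:
assert abs(t*math.cos(a) - 1.0) < 1e-12 and abs(t*math.sin(a)*r[0] - 1.0) < 1e-12
```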


Alternative planar decompositions

In the Cartesian plane, alternative planar ring decompositions arise analogously.


Numerical determination of the matrix polar decomposition

To compute an approximation of the polar decomposition ''A'' = ''UP'', usually the unitary factor ''U'' is approximated. The iteration is based on Heron's method for the square root of 1 and computes, starting from U_0 = A, the sequence

U_{k+1} = \frac{1}{2}\left(U_k + \left(U_k^*\right)^{-1}\right), \qquad k = 0, 1, 2, \ldots

The combination of inversion and Hermitian conjugation is chosen so that in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values. This basic iteration may be refined to speed up the process.
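A minimal NumPy sketch of this iteration for an arbitrary invertible real matrix; the fixed iteration count is a simple choice for illustration, not a convergence test:

```python
import numpy as np

def polar_unitary_newton(A, iters=20):
    """Iterate U_{k+1} = (U_k + (U_k^*)^{-1}) / 2 for the unitary factor (sketch)."""
    U = A.astype(complex)
    for _ in range(iters):
        U = 0.5 * (U + np.linalg.inv(U).conj().T)
    return U

A = np.array([[4.0, 1.0], [2.0, 3.0]])    # arbitrary invertible example
U = polar_unitary_newton(A)
P = U.conj().T @ A                        # recover P = U* A once U has converged
assert np.allclose(U.conj().T @ U, np.eye(2))   # U is (numerically) unitary
assert np.allclose(U @ P, A)                    # A = U P
assert np.allclose(P, P.conj().T)               # P is Hermitian positive-definite
```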


See also

* Cartan decomposition * Algebraic polar decomposition * Polar decomposition of a complex measure * Lie group decomposition


References

* Conway, J. B.: ''A Course in Functional Analysis''. Graduate Texts in Mathematics. New York: Springer, 1990.
* Douglas, R. G.: On Majorization, Factorization, and Range Inclusion of Operators on Hilbert Space. Proc. Amer. Math. Soc. 17, 413–415 (1966).