Square Root Of A Matrix
In mathematics, the square root of a matrix extends the notion of square root from numbers to matrices. A matrix B is said to be a square root of A if the matrix product BB is equal to A. Some authors use the name ''square root'' or the notation A^{1/2} only for the specific case when A is positive semidefinite, to denote the unique matrix B that is positive semidefinite and such that BB = B^\mathsf{T}B = A (for real-valued matrices, where B^\mathsf{T} is the transpose of B). Less frequently, the name ''square root'' may be used for any factorization of a positive semidefinite matrix A as B^\mathsf{T}B = A, as in the Cholesky factorization, even if BB \ne A. This distinct meaning is discussed in ''Positive definite matrix § Decomposition''.

Examples

In general, a matrix can have several square roots. In particular, if A = B^2 then A = (-B)^2 as well. For example, the 2×2 identity matrix \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix} has infinitely many square roots. They are given by

:\begin{pmatrix} \pm 1 & 0\\ 0 & \pm 1\end{pmatrix} \quad \text{and} \quad \begin{pmatrix} a & b\\ c & -a\end{pmatrix}

where (a, b, c) are any numbers (real or complex) such that a^2 + bc = 1.
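A minimal sketch of the example above, assuming Python with NumPy (the code is illustrative, not from the article): it builds one member of the second family and checks that it squares to the identity.

```python
import numpy as np

# Any matrix [[a, b], [c, -a]] with a^2 + bc = 1 squares to the
# 2x2 identity, so the identity has infinitely many square roots.
a, b = 3.0, 4.0
c = (1 - a**2) / b            # choose c so that a^2 + bc = 1
B = np.array([[a,  b],
              [c, -a]])
print(B @ B)                  # [[1. 0.], [0. 1.]]
```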
Mathematics
Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics). Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a ''proof'' consisting of a succession of applications of deductive rules to already established results.
Integer
An integer is the number zero (0), a positive natural number (1, 2, 3, ...), or the negation of a positive natural number (−1, −2, −3, ...). The negations or additive inverses of the positive natural numbers are referred to as negative integers. The set of all integers is often denoted by the boldface Z or blackboard bold \mathbb{Z}. The set of natural numbers \mathbb{N} is a subset of \mathbb{Z}, which in turn is a subset of the set of all rational numbers \mathbb{Q}, itself a subset of the real numbers \mathbb{R}. Like the set of natural numbers, the set of integers \mathbb{Z} is countably infinite. An integer may be regarded as a real number that can be written without a fractional component. For example, 21, 4, 0, and −2048 are integers, while 9.75, 5\tfrac{1}{2}, 5/4, and \sqrt{2} are not. The integers form the smallest group and the smallest ring containing the natural numbers. In algebraic number theory, the integers are sometimes qualified as rational integers to distinguish them from the more general algebraic integers.
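A small Python check of the no-fractional-component criterion (illustrative only; the example values are those from the text):

```python
from fractions import Fraction
import math

# Integers have no fractional component; the other values do.
values = [21, 4, 0, -2048, 9.75, Fraction(5, 4), math.sqrt(2)]
for x in values:
    print(x, "integer" if float(x) == int(x) else "not an integer")
```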
Eigenvalue
In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector \mathbf{v} of a linear transformation T is scaled by a constant factor \lambda when the linear transformation is applied to it: T\mathbf{v} = \lambda\mathbf{v}. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor \lambda (possibly a negative or complex number). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.
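A minimal numerical check of the defining equation T\mathbf{v} = \lambda\mathbf{v}, assuming Python with NumPy:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(T)
v = eigenvectors[:, 0]              # an eigenvector (a column of the result)
lam = eigenvalues[0]                # its eigenvalue
print(np.allclose(T @ v, lam * v))  # True: T v = lambda v
```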
Square Root
In mathematics, a square root of a number x is a number y such that y^2 = x; in other words, a number y whose ''square'' (the result of multiplying the number by itself, or y \cdot y) is x. For example, 4 and −4 are square roots of 16 because 4^2 = (-4)^2 = 16. Every nonnegative real number x has a unique nonnegative square root, called the ''principal square root'' or simply ''the square root'' (with a definite article, see below), which is denoted by \sqrt{x}, where the symbol "\sqrt{\ }" is called the ''radical sign'' or ''radix''. For example, to express the fact that the principal square root of 9 is 3, we write \sqrt{9} = 3. The term (or number) whose square root is being considered is known as the ''radicand''. The radicand is the number or expression underneath the radical sign, in this case, 9. For non-negative x, the principal square root can also be written in exponent notation, as x^{1/2}. Every positive number x has two square roots: \sqrt{x} (which is positive) and -\sqrt{x} (which is negative).
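The two-roots fact, sketched in Python (tooling assumed, not from the text):

```python
import math

x = 16.0
principal = math.sqrt(x)            # the principal (nonnegative) square root
print(principal, -principal)        # 4.0 -4.0: the two square roots of 16
print(principal**2 == x, (-principal)**2 == x)   # True True
```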
Upper Triangular Matrix
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called ''lower triangular'' if all the entries ''above'' the main diagonal are zero. Similarly, a square matrix is called ''upper triangular'' if all the entries ''below'' the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix ''L'' and an upper triangular matrix ''U'' if and only if all its leading principal minors are non-zero.

Description

A matrix of the form

:L = \begin{pmatrix} \ell_{1,1} & & & & 0 \\ \ell_{2,1} & \ell_{2,2} & & & \\ \ell_{3,1} & \ell_{3,2} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n} \end{pmatrix}

is called a lower triangular matrix or left triangular matrix; analogously, a matrix with zeros below the main diagonal is called an upper triangular matrix or right triangular matrix.
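A sketch of why triangular systems are easy to solve: forward substitution solves Lx = b one unknown at a time in O(n^2) operations (Python with NumPy assumed; `forward_substitution` is an illustrative name, not a library routine).

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L x = b for lower triangular L, one unknown at a time."""
    n = len(b)
    x = np.zeros(n)
    for i in range(n):
        # Everything left of the diagonal multiplies already-known entries.
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0],
              [3.0, 4.0]])
b = np.array([2.0, 11.0])
print(forward_substitution(L, b))   # [1. 2.]
```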
Diagonal Matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is \begin{pmatrix} 3 & 0 \\ 0 & 2 \end{pmatrix}, while an example of a 3×3 diagonal matrix is \begin{pmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{pmatrix}. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a ''scalar matrix'', for example, \begin{pmatrix} 0.5 & 0 \\ 0 & 0.5 \end{pmatrix}. In geometry, a diagonal matrix may be used as a ''scaling matrix'', since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale.

Definition

As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with n columns and n rows is diagonal if

:\forall i,j \in \{1, 2, \ldots, n\},\ i \ne j \implies d_{i,j} = 0.
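A scaling-matrix sketch, assuming Python with NumPy:

```python
import numpy as np

D = np.diag([3.0, 2.0])     # diagonal scaling matrix: x by 3, y by 2
S = np.diag([0.5, 0.5])     # scalar matrix: uniform scaling by 0.5
v = np.array([1.0, 1.0])
print(D @ v)                # [3. 2.]  (scale and shape change)
print(S @ v)                # [0.5 0.5] (uniform change in scale)
```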
Idempotent Matrix
In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. That is, the matrix A is idempotent if and only if A^2 = A. For this product A^2 to be defined, A must necessarily be a square matrix. Viewed this way, idempotent matrices are idempotent elements of matrix rings.

Example

Examples of 2 \times 2 idempotent matrices are:
:\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \begin{pmatrix} 3 & -6 \\ 1 & -2 \end{pmatrix}
Examples of 3 \times 3 idempotent matrices are:
:\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad \begin{pmatrix} 2 & -2 & -4 \\ -1 & 3 & 4 \\ 1 & -2 & -3 \end{pmatrix}

Real 2 × 2 case

If a matrix \begin{pmatrix} a & b \\ c & d \end{pmatrix} is idempotent, then
* a = a^2 + bc,
* b = ab + bd, implying b(1 - a - d) = 0, so b = 0 or d = 1 - a,
* c = ca + cd, implying c(1 - a - d) = 0, so c = 0 or d = 1 - a,
* d = bc + d^2.
Thus, a necessary condition for a 2 \times 2 matrix to be idempotent is that either it is diagonal or its trace equals 1. For idempotent diagonal matrices, a and d must be either 1 or 0.
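A quick numerical check of the non-diagonal 2×2 example and the trace condition, assuming Python with NumPy:

```python
import numpy as np

A = np.array([[3.0, -6.0],
              [1.0, -2.0]])
print(np.allclose(A @ A, A))   # True: A is idempotent
print(np.trace(A))             # 1.0: non-diagonal case, trace equals 1
```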
Eigendecomposition Of A Matrix
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem.

Fundamental theory of matrix eigenvectors and eigenvalues

A (nonzero) vector \mathbf{v} of dimension N is an eigenvector of a square N \times N matrix \mathbf{A} if it satisfies a linear equation of the form \mathbf{A}\mathbf{v} = \lambda \mathbf{v} for some scalar \lambda. Then \lambda is called the eigenvalue corresponding to \mathbf{v}. Geometrically speaking, the eigenvectors of \mathbf{A} are the vectors that \mathbf{A} merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem. This yields an equation for the eigenvalues p\left(\lambda\right) = \det\left(\mathbf{A} - \lambda \mathbf{I}\right) = 0.
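A minimal eigendecomposition sketch, assuming Python with NumPy: reconstruct A as Q \Lambda Q^{-1} from its eigenvalues and eigenvectors.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
w, Q = np.linalg.eig(A)        # eigenvalues w; eigenvectors as columns of Q
Lam = np.diag(w)               # Lambda: eigenvalues on the diagonal
print(np.allclose(A, Q @ Lam @ np.linalg.inv(Q)))   # True: A = Q Lam Q^-1
```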
Jordan Normal Form
:\begin{pmatrix}
\lambda_1 & 1 & & & & & & & \\
 & \lambda_1 & 1 & & & & & & \\
 & & \lambda_1 & & & & & & \\
 & & & \lambda_2 & 1 & & & & \\
 & & & & \lambda_2 & & & & \\
 & & & & & \lambda_3 & & & \\
 & & & & & & \ddots & & \\
 & & & & & & & \lambda_n & 1 \\
 & & & & & & & & \lambda_n
\end{pmatrix}

Example of a matrix in Jordan normal form. All matrix entries not shown are zero. The outlined squares are known as "Jordan blocks". Each Jordan block contains one number ''λi'' on its main diagonal, and 1s directly above the main diagonal. The ''λi''s are the eigenvalues of the matrix; they need not be distinct.

In linear algebra, a Jordan normal form, also known as a Jordan canonical form, is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), and with identical diagonal entries to the left and below them.
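A sketch of computing a Jordan form, assuming Python with SymPy (exact arithmetic; the matrix below is my own example, not from the article):

```python
from sympy import Matrix

# M has eigenvalue 2 with multiplicity 2 but only one independent
# eigenvector, so it is not diagonalizable; its Jordan form is one block.
M = Matrix([[3, 1],
            [-1, 1]])
P, J = M.jordan_form()          # returns P, J with M == P * J * P**-1
print(J)                        # Matrix([[2, 1], [0, 2]])
print(P * J * P.inv() == M)     # True
```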
Holomorphic Functional Calculus
In mathematics, holomorphic functional calculus is functional calculus with holomorphic functions. That is to say, given a holomorphic function ''f'' of a complex argument ''z'' and an operator ''T'', the aim is to construct an operator, ''f''(''T''), which naturally extends the function ''f'' from complex argument to operator argument. More precisely, the functional calculus defines a continuous algebra homomorphism from the holomorphic functions on a neighbourhood of the spectrum of ''T'' to the bounded operators. This article will discuss the case where ''T'' is a bounded linear operator on some Banach space. In particular, ''T'' can be a square matrix with complex entries, a case which will be used to illustrate functional calculus and provide some heuristic insights for the assumptions involved in the general construction.

Motivation

Need for a general functional calculus

In this section ''T'' will be assumed to be an ''n'' × ''n'' matrix with complex entries.
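For the matrix case, applying a function to an operator can be sketched with SciPy's matrix-function routine (tooling assumed; here np.sqrt plays the role of ''f'', which is holomorphic on a neighbourhood of the spectrum {4, 9}):

```python
import numpy as np
from scipy.linalg import funm

T = np.array([[4.0, 1.0],
              [0.0, 9.0]])
sqrtT = funm(T, np.sqrt)                # f(T) for f(z) = sqrt(z)
print(np.allclose(sqrtT @ sqrtT, T))    # True: f(T)^2 = T
```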
Positive Definite Matrix
In mathematics, a symmetric matrix M with real entries is positive-definite if the real number \mathbf{x}^\mathsf{T} M \mathbf{x} is positive for every nonzero real column vector \mathbf{x}, where \mathbf{x}^\mathsf{T} is the row vector transpose of \mathbf{x}. More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number \mathbf{z}^* M \mathbf{z} is positive for every nonzero complex column vector \mathbf{z}, where \mathbf{z}^* denotes the conjugate transpose of \mathbf{z}. Positive semi-definite matrices are defined similarly, except that the scalars \mathbf{x}^\mathsf{T} M \mathbf{x} and \mathbf{z}^* M \mathbf{z} are required to be positive ''or zero'' (that is, nonnegative). Negative-definite and negative semi-definite matrices are defined analogously. A matrix that is not positive semi-definite and not negative semi-definite is sometimes called ''indefinite''. Some authors use more general definitions of definiteness, permitting the matrices to be non-symmetric or non-Hermitian.
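A practical definiteness test, assuming Python with NumPy: Cholesky factorization succeeds exactly when a symmetric matrix is positive-definite (the helper name is illustrative).

```python
import numpy as np

def is_positive_definite(M):
    """Return True iff the symmetric matrix M is positive-definite."""
    try:
        np.linalg.cholesky(M)      # fails unless M is positive-definite
        return True
    except np.linalg.LinAlgError:
        return False

print(is_positive_definite(np.array([[2.0, 1.0], [1.0, 2.0]])))  # True
print(is_positive_definite(np.array([[1.0, 2.0], [2.0, 1.0]])))  # False
```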
Hermitian Matrix
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose; that is, the element in the ''i''-th row and ''j''-th column is equal to the complex conjugate of the element in the ''j''-th row and ''i''-th column, for all indices ''i'' and ''j'':

:A \text{ is Hermitian} \quad \iff \quad a_{ij} = \overline{a_{ji}}

or in matrix form:

:A \text{ is Hermitian} \quad \iff \quad A = \overline{A^\mathsf{T}}.

Hermitian matrices can be understood as the complex extension of real symmetric matrices. If the conjugate transpose of a matrix A is denoted by A^\mathsf{H}, then the Hermitian property can be written concisely as

:A \text{ is Hermitian} \quad \iff \quad A = A^\mathsf{H}

Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are A^\mathsf{H} = A^\dagger = A^\ast, although in quantum mechanics, A^\ast typically means the complex conjugate only, and not the conjugate transpose.
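A minimal check of the Hermitian property and the real-eigenvalue fact, assuming Python with NumPy:

```python
import numpy as np

A = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])
print(np.allclose(A, A.conj().T))   # True: A equals its conjugate transpose
print(np.linalg.eigvalsh(A))        # [1. 4.]: the eigenvalues are real
```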