Involutory Matrix
In mathematics, an involutory matrix is a square matrix that is its own inverse. That is, multiplication by the matrix \mathbf{A} is an involution if and only if \mathbf{A}^2 = \mathbf{I}, where \mathbf{I} is the n \times n identity matrix. Involutory matrices are all square roots of the identity matrix. This is a consequence of the fact that any invertible matrix multiplied by its inverse is the identity.

Examples

The 2 \times 2 real matrix \begin{pmatrix} a & b \\ c & -a \end{pmatrix} is involutory provided that a^2 + bc = 1. The Pauli matrices are involutory:
\sigma_1 = \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.
One of the three classes of elementary matrix is involutory, namely the row-interchange elementary matrix. A special case of another class of elementary matrix, that which represents multiplication of a row or column by −1, is also involutory.
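A minimal numerical sketch of the defining property, assuming NumPy is available; the values of a and b are arbitrary choices, with c picked so that a^2 + bc = 1:

```python
import numpy as np

# Parametric real example: a^2 + bc = 1 guarantees A is involutory.
a, b = 2.0, 3.0
c = (1 - a**2) / b                      # chosen so that a^2 + bc = 1
A = np.array([[a, b], [c, -a]])

sigma_x = np.array([[0, 1], [1, 0]])    # Pauli matrix, also involutory

for M in (A, sigma_x):
    assert np.allclose(M @ M, np.eye(2))   # M^2 = I, so M is its own inverse
```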
Normal Matrix
In mathematics, a complex square matrix A is normal if it commutes with its conjugate transpose A^*:
A \text{ is normal} \iff A^*A = AA^*.
The concept of normal matrices can be extended to normal operators on infinite-dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis. The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix satisfying the equation A^*A = AA^* is diagonalizable. (The converse does not hold, because diagonalizable matrices may have non-orthogonal eigenspaces.) Thus A = U D U^* and A^* = U D^* U^*, where D is a diagonal matrix whose diagonal values are in general complex. The left and right singular vectors in the singular value decomposition of a normal matrix A = U D V^* differ only in complex phase from each other and from the corresponding eigenvectors, since the singular values are the absolute values of the eigenvalues.
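The spectral characterization can be checked numerically. The sketch below, assuming NumPy and SciPy are available, uses a circulant matrix (normal, but neither Hermitian nor unitary) and the complex Schur decomposition, whose triangular factor collapses to a diagonal exactly when the matrix is normal:

```python
import numpy as np
from scipy.linalg import schur

# A circulant matrix: normal, but neither Hermitian nor unitary.
A = np.array([[1, 2, 0],
              [0, 1, 2],
              [2, 0, 1]], dtype=complex)

# Normality: A commutes with its conjugate transpose.
assert np.allclose(A.conj().T @ A, A @ A.conj().T)

# Complex Schur form A = U T U*; for a normal matrix, T is diagonal,
# which is exactly the unitary diagonalization A = U D U*.
T, U = schur(A, output='complex')
assert np.allclose(T, np.diag(np.diag(T)))
assert np.allclose(A, U @ T @ U.conj().T)
```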
Mathematics
Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics). Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a ''proof'' consisting of a succession of applications of deductive rules to already established results.
Linear Transformation
In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V \to W between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism. If a linear map is a bijection then it is called a linear isomorphism. In the case where V = W, a linear map is called a linear endomorphism. Sometimes the term ''linear operator'' refers to this case, but the term can have different meanings for different conventions: for example, it can be used to emphasize that V and W are real vector spaces (not necessarily with V = W), or it can be used to emphasize that V is a function space, which is a common convention in functional analysis. Sometimes the term ''linear function'' has the same meaning as ''linear map'', while in analysis it does not.
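As a small sketch (NumPy assumed; the matrix and vectors are arbitrary), a matrix A between coordinate spaces defines a linear map, and the two defining properties can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))             # A represents a linear map R^2 -> R^3
T = lambda v: A @ v

x, y = rng.standard_normal(2), rng.standard_normal(2)
c = 2.5

# The two defining properties of a linear map:
assert np.allclose(T(x + y), T(x) + T(y))   # preserves vector addition
assert np.allclose(T(c * x), c * T(x))      # preserves scalar multiplication
```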
Bijection
In mathematics, a bijection, bijective function, or one-to-one correspondence is a function between two sets such that each element of the second set (the codomain) is the image of exactly one element of the first set (the domain). Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set. A function is bijective if it is invertible; that is, a function f:X\to Y is bijective if and only if there is a function g:Y\to X, the ''inverse'' of f, such that each of the two ways of composing the two functions produces an identity function: g(f(x)) = x for each x in X and f(g(y)) = y for each y in Y. For example, ''multiplication by two'' defines a bijection from the integers to the even numbers, which has ''division by two'' as its inverse function. A function is bijective if and only if it is both injective (or ''one-to-one''), meaning that each element of the codomain is mapped to by at most one element of the domain, and surjective (or ''onto''), meaning that each element of the codomain is mapped to by at least one element of the domain.
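The doubling example can be written out as a short sketch (plain Python; the function names f and g are chosen to match the text):

```python
def f(x: int) -> int:
    return 2 * x        # multiplication by two: integers -> even integers

def g(y: int) -> int:
    return y // 2       # division by two, the inverse of f

# Both compositions are identity functions, so f is a bijection onto the evens.
assert all(g(f(x)) == x for x in range(-10, 11))
assert all(f(g(y)) == y for y in range(-10, 11, 2))
```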
Idempotent Matrix
In linear algebra, an idempotent matrix is a matrix which, when multiplied by itself, yields itself. That is, the matrix A is idempotent if and only if A^2 = A. For this product A^2 to be defined, A must necessarily be a square matrix. Viewed this way, idempotent matrices are idempotent elements of matrix rings.

Examples

Examples of 2 \times 2 idempotent matrices are:
\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \qquad \begin{pmatrix} 3 & -6 \\ 1 & -2 \end{pmatrix}
Examples of 3 \times 3 idempotent matrices are:
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad \begin{pmatrix} 2 & -2 & -4 \\ -1 & 3 & 4 \\ 1 & -2 & -3 \end{pmatrix}

Real 2 × 2 case

If a matrix \begin{pmatrix} a & b \\ c & d \end{pmatrix} is idempotent, then
* a = a^2 + bc,
* b = ab + bd, implying b(1 - a - d) = 0, so b = 0 or d = 1 - a,
* c = ca + cd, implying c(1 - a - d) = 0, so c = 0 or d = 1 - a,
* d = bc + d^2.
Thus, a necessary condition for a 2 \times 2 matrix to be idempotent is that either it is diagonal or its trace equals 1. For idempotent diagonal matrices, a and d must each be either 0 or 1.
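The non-identity examples above can be verified directly; a minimal check assuming NumPy (the printed trace-equals-rank observation is a general fact about idempotent matrices):

```python
import numpy as np

examples = [
    np.array([[3, -6],
              [1, -2]]),
    np.array([[ 2, -2, -4],
              [-1,  3,  4],
              [ 1, -2, -3]]),
]

for A in examples:
    assert np.allclose(A @ A, A)   # A^2 = A
    # For any idempotent matrix, trace equals rank.
    print(np.trace(A), np.linalg.matrix_rank(A))   # prints 1 1, then 2 2
```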
Field (mathematics)
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics. The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and ''p''-adic fields, are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge.
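As an illustration of a field with finitely many elements, a sketch of arithmetic in the finite field GF(7) in plain Python (the prime 7 is an arbitrary choice):

```python
p = 7                       # a prime, so the integers mod p form a field
a, b = 3, 5

print((a + b) % p)          # addition:       1
print((a - b) % p)          # subtraction:    5
print((a * b) % p)          # multiplication: 1

# Division: every nonzero a has a multiplicative inverse; by Fermat's
# little theorem, a^(p-2) mod p is that inverse when p is prime.
inv_a = pow(a, p - 2, p)    # 5, since 3 * 5 = 15 = 2*7 + 1
assert (a * inv_a) % p == 1
```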
Determinant
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted \det(A), \det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism. However, if the determinant is zero, the matrix is referred to as singular, meaning it does not have an inverse. The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries. The determinant of a 2 \times 2 matrix is
\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,
and the determinant of a 3 \times 3 matrix is
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.
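The two formulas translate directly into code; a sketch assuming NumPy, with the helper names det2 and det3 introduced here for illustration:

```python
import numpy as np

def det2(a, b, c, d):
    # 2x2 formula: ad - bc
    return a * d - b * c

def det3(a, b, c, d, e, f, g, h, i):
    # 3x3 formula (rule of Sarrus)
    return a*e*i + b*f*g + c*d*h - c*e*g - b*d*i - a*f*h

M2 = np.array([[1., 2.],
               [3., 4.]])
M3 = np.array([[2., 2., 3.],
               [4., 6., 6.],
               [7., 8., 10.]])

assert np.isclose(det2(*M2.ravel()), np.linalg.det(M2))   # both give -2
assert np.isclose(det3(*M3.ravel()), np.linalg.det(M3))   # both give -2
```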
Unitary Matrix
In linear algebra, an invertible complex square matrix U is unitary if its matrix inverse U^{-1} equals its conjugate transpose U^*, that is, if U^* U = UU^* = I, where I is the identity matrix. In physics, especially in quantum mechanics, the conjugate transpose is referred to as the Hermitian adjoint of a matrix and is denoted by a dagger (\dagger), so the equation above is written U^\dagger U = UU^\dagger = I. A complex matrix is special unitary if it is unitary and its matrix determinant equals 1. For real numbers, the analogue of a unitary matrix is an orthogonal matrix. Unitary matrices have significant importance in quantum mechanics because they preserve norms, and thus, probability amplitudes.

Properties

For any unitary matrix U of finite size, the following hold:
* Given two complex vectors x and y, multiplication by U preserves their inner product; that is, \langle Ux, Uy \rangle = \langle x, y \rangle.
* U is normal (U^* U = UU^*).
* U is diagonalizable; that is, U is unitarily similar to a diagonal matrix, as a consequence of the spectral theorem.
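A sketch of the norm- and inner-product-preserving property, assuming NumPy; the matrix below (a rotation scaled by a unit-modulus phase) is an arbitrary unitary example:

```python
import numpy as np

theta = 0.7
U = np.exp(0.3j) * np.array([[np.cos(theta), -np.sin(theta)],
                             [np.sin(theta),  np.cos(theta)]])

assert np.allclose(U.conj().T @ U, np.eye(2))      # U* U = I

x = np.array([1 + 2j, 3 - 1j])
y = np.array([0.5j, 2.0])

# Multiplication by U preserves the inner product (np.vdot conjugates
# its first argument, matching <x, y> = x* y) and hence norms.
assert np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y))
assert np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))
```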
Hermitian Matrix
In mathematics, a Hermitian matrix (or self-adjoint matrix) is a complex square matrix that is equal to its own conjugate transpose, that is, the element in the i-th row and j-th column is equal to the complex conjugate of the element in the j-th row and i-th column, for all indices i and j:
A \text{ is Hermitian} \quad \iff \quad a_{ij} = \overline{a_{ji}}
or in matrix form:
A \text{ is Hermitian} \quad \iff \quad A = \overline{A^\mathsf{T}}.
Hermitian matrices can be understood as the complex extension of real symmetric matrices. If the conjugate transpose of a matrix A is denoted by A^\mathsf{H}, then the Hermitian property can be written concisely as
A \text{ is Hermitian} \quad \iff \quad A = A^\mathsf{H}.
Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues. Other, equivalent notations in common use are A^\mathsf{H} = A^\dagger = A^\ast, although in quantum mechanics, A^\ast typically means the complex conjugate only, and not the conjugate transpose.
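A quick numerical confirmation of the definition and of Hermite's real-eigenvalue property, assuming NumPy (the matrix entries are an arbitrary Hermitian example):

```python
import numpy as np

A = np.array([[2,      2 + 1j, 4  ],
              [2 - 1j, 3,      1j ],
              [4,      -1j,    1  ]])

assert np.allclose(A, A.conj().T)    # a_ij equals conj(a_ji)

# Hermitian matrices always have real eigenvalues; eigvalsh exploits
# the Hermitian structure and returns them as real numbers.
print(np.linalg.eigvalsh(A))
```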
Diagonalizable Matrix
In linear algebra, a square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix. That is, if there exists an invertible matrix P and a diagonal matrix D such that P^{-1}AP = D. This is equivalent to A = PDP^{-1}. (Such P and D are not unique.) This property exists for any linear map: for a finite-dimensional vector space V, a linear map T:V\to V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T. These definitions are equivalent: if T has a matrix representation A = PDP^{-1} as above, then the column vectors of P form a basis consisting of eigenvectors of T, and the diagonal entries of D are the corresponding eigenvalues of T; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding the above P and D, and it makes many subsequent computations easier. One can raise a diagonal matrix D to a power by simply raising the diagonal entries to that power.
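A sketch of diagonalization and of the powering shortcut it enables, assuming NumPy (the matrix is an arbitrary diagonalizable example with distinct eigenvalues):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

# Eigendecomposition gives P (eigenvectors as columns) and D (eigenvalues).
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
assert np.allclose(A, P @ D @ np.linalg.inv(P))     # A = P D P^{-1}

# Powers become easy: A^5 = P D^5 P^{-1}, raising only the diagonal entries.
A5 = P @ np.diag(eigvals ** 5) @ np.linalg.inv(P)
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```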
Eigenvalues And Eigenvectors
In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector \mathbf v of a linear transformation T is scaled by a constant factor \lambda when the linear transformation is applied to it: T\mathbf v = \lambda \mathbf v. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor \lambda (possibly a negative or complex number). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed.
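The defining relation T\mathbf v = \lambda \mathbf v can be checked for each computed pair; a minimal sketch assuming NumPy, with an arbitrary upper-triangular example:

```python
import numpy as np

T = np.array([[2., 1.],
              [0., 3.]])   # triangular, so eigenvalues 2 and 3 sit on the diagonal

eigvals, eigvecs = np.linalg.eig(T)
for lam, v in zip(eigvals, eigvecs.T):   # columns of eigvecs are eigenvectors
    assert np.allclose(T @ v, lam * v)   # T v = lambda v: v is only scaled
```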
Defective Matrix
In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n \times n matrix is defective if and only if it does not have n linearly independent eigenvectors. A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems. An n \times n defective matrix always has fewer than n distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues \lambda with algebraic multiplicity m > 1 (that is, they are multiple roots of the characteristic polynomial), but fewer than m linearly independent eigenvectors associated with \lambda. If the algebraic multiplicity of \lambda exceeds its geometric multiplicity (that is, the number of linearly independent eigenvectors associated with \lambda), then \lambda is said to be a defective eigenvalue.
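The standard 2 \times 2 Jordan block illustrates the eigenvector deficit numerically; a sketch assuming NumPy:

```python
import numpy as np

# Eigenvalue 3 with algebraic multiplicity 2, but a one-dimensional
# eigenspace: the classic defective (Jordan block) example.
A = np.array([[3., 1.],
              [0., 3.]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                            # [3. 3.]

# The two eigenvector columns are (numerically) parallel, so there is
# no basis of eigenvectors: the eigenvector matrix has rank 1, not 2.
print(np.linalg.matrix_rank(eigvecs))     # 1
```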