In linear algebra, the adjugate or classical adjoint of a square matrix \mathbf{A} is the transpose of its cofactor matrix and is denoted by \operatorname{adj}(\mathbf{A}). It is also occasionally known as the adjunct matrix, or "adjoint", though the latter term today normally refers to a different concept, the adjoint operator, which for a matrix is the conjugate transpose. The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:
:\mathbf{A} \operatorname{adj}(\mathbf{A}) = \det(\mathbf{A}) \mathbf{I},
where \mathbf{I} is the identity matrix of the same size as \mathbf{A}. Consequently, the multiplicative inverse of an invertible matrix can be found by dividing its adjugate by its determinant.


Definition

The adjugate of \mathbf{A} is the transpose of the cofactor matrix \mathbf{C} of \mathbf{A},
:\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.
In more detail, suppose R is a unital commutative ring and \mathbf{A} is an n \times n matrix with entries from R. The (i, j)-''minor'' of \mathbf{A}, denoted \mathbf{M}_{ij}, is the determinant of the (n-1) \times (n-1) matrix that results from deleting row i and column j of \mathbf{A}. The cofactor matrix of \mathbf{A} is the n \times n matrix \mathbf{C} whose (i, j) entry is the (i, j) ''cofactor'' of \mathbf{A}, which is the (i, j)-minor times a sign factor:
:\mathbf{C} = \left((-1)^{i+j} \mathbf{M}_{ij}\right)_{1 \le i, j \le n}.
The adjugate of \mathbf{A} is the transpose of \mathbf{C}, that is, the n \times n matrix whose (i, j) entry is the (j, i) cofactor of \mathbf{A},
:\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \left((-1)^{i+j} \mathbf{M}_{ji}\right)_{1 \le i, j \le n}.
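The definition translates directly into code. Below is a minimal pure-Python sketch (the helper names `det` and `adjugate` are ours, not from any library) that builds the cofactor matrix from minors and transposes it; with integer entries all arithmetic is exact:

```python
def det(A):
    """Determinant by Laplace expansion along the first row."""
    if not A:
        return 1  # the determinant of the 0 x 0 matrix is 1
    return sum((-1) ** j * A[0][j]
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def adjugate(A):
    """Transpose of the cofactor matrix, straight from the definition."""
    n = len(A)
    # C[i][j] = (-1)^(i+j) * M_ij, the (i, j) cofactor of A
    C = [[(-1) ** (i + j)
          * det([row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i])
          for j in range(n)] for i in range(n)]
    return [list(col) for col in zip(*C)]  # adj(A) = C^T

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
adjA = adjugate(A)
# Check the defining identity A adj(A) = det(A) I.
prod = [[sum(A[i][k] * adjA[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
assert prod == [[det(A) if i == j else 0 for j in range(3)] for i in range(3)]
```

The Laplace expansion used here is exponential in n and is meant only to mirror the definition; for real work a library routine (e.g. SymPy's `Matrix.adjugate`) is preferable.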


Important consequence

The adjugate is defined so that the product of \mathbf{A} with its adjugate yields a diagonal matrix whose diagonal entries are the determinant \det(\mathbf{A}). That is,
:\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A}) \mathbf{A} = \det(\mathbf{A}) \mathbf{I},
where \mathbf{I} is the identity matrix. This is a consequence of the Laplace expansion of the determinant. The above formula implies one of the fundamental results in matrix algebra, that \mathbf{A} is invertible if and only if \det(\mathbf{A}) is an invertible element of R. When this holds, the equation above yields
:\begin{align} \operatorname{adj}(\mathbf{A}) &= \det(\mathbf{A}) \mathbf{A}^{-1}, \\ \mathbf{A}^{-1} &= \det(\mathbf{A})^{-1} \operatorname{adj}(\mathbf{A}). \end{align}
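As a quick illustration of the inverse formula, here is a sketch using exact rational arithmetic via Python's standard `fractions` module (the 2 × 2 matrix is an arbitrary invertible example):

```python
from fractions import Fraction

A = [[Fraction(4), Fraction(7)],
     [Fraction(2), Fraction(6)]]
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # det A = 24 - 14 = 10
adj = [[ A[1][1], -A[0][1]],
       [-A[1][0],  A[0][0]]]                # adjugate of a 2 x 2 matrix
inv = [[entry / d for entry in row] for row in adj]  # A^-1 = adj(A) / det(A)

# A * A^-1 should be exactly the identity.
prod = [[sum(A[i][k] * inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
assert prod == [[1, 0], [0, 1]]
```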


Examples


1 × 1 generic matrix

Since the determinant of a 0 × 0 matrix is 1, the adjugate of any 1 × 1 matrix \mathbf{A} = \begin{pmatrix} a \end{pmatrix} (a complex scalar, say) is the identity matrix, \operatorname{adj}(\mathbf{A}) = \begin{pmatrix} 1 \end{pmatrix}. Observe that \mathbf{A} \operatorname{adj}(\mathbf{A}) = \mathbf{A} \mathbf{I} = (\det \mathbf{A}) \mathbf{I}.


2 × 2 generic matrix

The adjugate of the 2 × 2 matrix
:\mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
is
:\operatorname{adj}(\mathbf{A}) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
By direct computation,
:\mathbf{A} \operatorname{adj}(\mathbf{A}) = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} = (\det \mathbf{A})\mathbf{I}.
In this case, it is also true that \det(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A}) and hence that \operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \mathbf{A}.
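These 2 × 2 identities are easy to confirm symbolically; a sketch using SymPy (assumed available):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
adjA = A.adjugate()

assert adjA == sp.Matrix([[d, -b], [-c, a]])
# det(adj(A)) = det(A) and adj(adj(A)) = A, special to the 2 x 2 case
assert sp.expand(adjA.det() - A.det()) == 0
assert adjA.adjugate() == A
```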


3 × 3 generic matrix

Consider a 3 × 3 matrix
:\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.
Its cofactor matrix is
:\mathbf{C} = \begin{pmatrix} +\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} & +\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \\ \\ -\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} & +\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} \\ \\ +\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} & +\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \end{pmatrix},
where
:\begin{vmatrix} a_{ik} & a_{il} \\ a_{jk} & a_{jl} \end{vmatrix} = \det\!\begin{pmatrix} a_{ik} & a_{il} \\ a_{jk} & a_{jl} \end{pmatrix}.
Its adjugate is the transpose of its cofactor matrix,
:\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \begin{pmatrix} +\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} & +\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} \\ \\ -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} & +\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} \\ \\ +\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} & +\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \end{pmatrix}.


3 × 3 numeric matrix

As a specific example, we have
:\operatorname{adj}\!\begin{pmatrix} -3 & 2 & -5 \\ -1 & 0 & -2 \\ 3 & -4 & 1 \end{pmatrix} = \begin{pmatrix} -8 & 18 & -4 \\ -5 & 12 & -1 \\ 4 & -6 & 2 \end{pmatrix}.
It is easy to check that the adjugate is the inverse times the determinant, −6. The −1 in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of \mathbf{A}. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix \mathbf{A},
:\begin{pmatrix} -3 & -5 \\ -1 & -2 \end{pmatrix}.
The (3,2) cofactor is a sign times the determinant of this submatrix:
:(-1)^{3+2}\det\!\begin{pmatrix} -3 & -5 \\ -1 & -2 \end{pmatrix} = -((-3)\cdot(-2) - (-5)\cdot(-1)) = -1,
and this is the (2,3) entry of the adjugate.
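The numeric example can be checked mechanically, for instance with SymPy (assumed available):

```python
import sympy as sp

A = sp.Matrix([[-3,  2, -5],
               [-1,  0, -2],
               [ 3, -4,  1]])
adjA = A.adjugate()

assert adjA == sp.Matrix([[-8, 18, -4],
                          [-5, 12, -1],
                          [ 4, -6,  2]])
assert A.det() == -6
assert adjA == A.det() * A.inv()   # adjugate = determinant times inverse
```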


Properties

For any n × n matrix \mathbf{A}, elementary computations show that adjugates have the following properties:
* \operatorname{adj}(\mathbf{I}) = \mathbf{I}, where \mathbf{I} is the identity matrix.
* \operatorname{adj}(\mathbf{0}) = \mathbf{0}, where \mathbf{0} is the zero matrix, except that if n = 1 then \operatorname{adj}(\mathbf{0}) = \mathbf{I}.
* \operatorname{adj}(c\mathbf{A}) = c^{n-1}\operatorname{adj}(\mathbf{A}) for any scalar c.
* \operatorname{adj}(\mathbf{A}^\mathsf{T}) = \operatorname{adj}(\mathbf{A})^\mathsf{T}.
* \det(\operatorname{adj}(\mathbf{A})) = (\det \mathbf{A})^{n-1}.
* If \mathbf{A} is invertible, then \operatorname{adj}(\mathbf{A}) = (\det \mathbf{A})\mathbf{A}^{-1}. It follows that:
** \operatorname{adj}(\mathbf{A}) is invertible with inverse (\det \mathbf{A})^{-1}\mathbf{A}.
** \operatorname{adj}(\mathbf{A}^{-1}) = \operatorname{adj}(\mathbf{A})^{-1}.
* \operatorname{adj}(\mathbf{A}) is entrywise polynomial in \mathbf{A}. In particular, over the real or complex numbers, the adjugate is a smooth function of the entries of \mathbf{A}.
Over the complex numbers,
* \operatorname{adj}(\overline{\mathbf{A}}) = \overline{\operatorname{adj}(\mathbf{A})}, where the bar denotes complex conjugation.
* \operatorname{adj}(\mathbf{A}^*) = \operatorname{adj}(\mathbf{A})^*, where the asterisk denotes conjugate transpose.
Suppose that \mathbf{B} is another n × n matrix. Then
:\operatorname{adj}(\mathbf{AB}) = \operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}).
This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices \mathbf{A} and \mathbf{B},
:\operatorname{adj}(\mathbf{B})\operatorname{adj}(\mathbf{A}) = (\det \mathbf{B})\mathbf{B}^{-1}(\det \mathbf{A})\mathbf{A}^{-1} = (\det \mathbf{AB})(\mathbf{AB})^{-1} = \operatorname{adj}(\mathbf{AB}).
Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of \mathbf{A} or \mathbf{B} is not invertible.
A corollary of the previous formula is that, for any non-negative integer k,
:\operatorname{adj}(\mathbf{A}^k) = \operatorname{adj}(\mathbf{A})^k.
If \mathbf{A} is invertible, then the above formula also holds for negative k.
From the identity
:(\mathbf{A} + \mathbf{B})\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{B} = \det(\mathbf{A} + \mathbf{B})\mathbf{B} = \mathbf{B}\operatorname{adj}(\mathbf{A} + \mathbf{B})(\mathbf{A} + \mathbf{B}),
we deduce
:\mathbf{A}\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{B} = \mathbf{B}\operatorname{adj}(\mathbf{A} + \mathbf{B})\mathbf{A}.
Suppose that \mathbf{A} commutes with \mathbf{B}. Multiplying the identity \mathbf{AB} = \mathbf{BA} on the left and right by \operatorname{adj}(\mathbf{A}) proves that
:\det(\mathbf{A})\operatorname{adj}(\mathbf{A})\mathbf{B} = \det(\mathbf{A})\mathbf{B}\operatorname{adj}(\mathbf{A}).
If \mathbf{A} is invertible, this implies that \operatorname{adj}(\mathbf{A}) also commutes with \mathbf{B}. Over the real or complex numbers, continuity implies that \operatorname{adj}(\mathbf{A}) commutes with \mathbf{B} even when \mathbf{A} is not invertible.
Finally, there is a more general proof than the second proof, which only requires that an ''n'' × ''n'' matrix has entries over a field with at least 2''n'' + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). \det(\mathbf{A} + t\mathbf{I}) is a polynomial in ''t'' with degree at most ''n'', so it has at most ''n'' roots. Note that the ''ij''-th entry of \operatorname{adj}((\mathbf{A} + t\mathbf{I})\mathbf{B}) is a polynomial in ''t'' of degree at most ''n'', and likewise for \operatorname{adj}(\mathbf{A} + t\mathbf{I})\operatorname{adj}(\mathbf{B}). These two polynomials at the ''ij''-th entry agree on at least ''n'' + 1 points, as we have at least ''n'' + 1 elements of the field where \mathbf{A} + t\mathbf{I} is invertible, and we have proven the identity for invertible matrices. Polynomials of degree ''n'' which agree on ''n'' + 1 points must be identical (subtract them from each other and you have ''n'' + 1 roots for a polynomial of degree at most ''n'', a contradiction unless their difference is identically zero). As the two polynomials are identical, they take the same value for every value of ''t''. Thus, they take the same value when ''t'' = 0.
Using the above properties and other elementary computations, it is straightforward to show that if \mathbf{A} has one of the following properties, then \operatorname{adj}(\mathbf{A}) does as well:
* Upper triangular,
* Lower triangular,
* Diagonal,
* Orthogonal,
* Unitary,
* Symmetric,
* Hermitian,
* Skew-symmetric,
* Skew-Hermitian,
* Normal.
If \mathbf{A} is invertible, then, as noted above, there is a formula for \operatorname{adj}(\mathbf{A}) in terms of the determinant and inverse of \mathbf{A}. When \mathbf{A} is not invertible, the adjugate satisfies different but closely related formulas.
* If \operatorname{rank}(\mathbf{A}) \le n - 2, then \operatorname{adj}(\mathbf{A}) = \mathbf{0}.
* If \operatorname{rank}(\mathbf{A}) = n - 1, then \operatorname{rank}(\operatorname{adj}(\mathbf{A})) = 1. (Some (n − 1) × (n − 1) minor is non-zero, so \operatorname{adj}(\mathbf{A}) is non-zero and hence has rank at least one; the identity \operatorname{adj}(\mathbf{A})\mathbf{A} = \mathbf{0} implies that the dimension of the nullspace of \operatorname{adj}(\mathbf{A}) is at least n − 1, so its rank is at most one.) It follows that \operatorname{adj}(\mathbf{A}) = \alpha\mathbf{x}\mathbf{y}^\mathsf{T}, where \alpha is a scalar and \mathbf{x} and \mathbf{y} are vectors such that \mathbf{A}\mathbf{x} = \mathbf{0} and \mathbf{A}^\mathsf{T}\mathbf{y} = \mathbf{0}.
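A few of these properties, including the reversed order of factors in the product rule and the rank behaviour of singular matrices, can be spot-checked with SymPy (the matrices below are arbitrary examples):

```python
import sympy as sp

A = sp.Matrix([[2, 0, 1], [1, 3, 0], [0, 1, 1]])
B = sp.Matrix([[1, 2, 0], [0, 1, 4], [2, 0, 1]])
# Multiplicativity, with the order reversed: adj(AB) = adj(B) adj(A).
assert (A * B).adjugate() == B.adjugate() * A.adjugate()

# rank(A) = n - 1  =>  rank(adj(A)) = 1.
S = sp.Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])
assert S.det() == 0 and S.rank() == 2
assert S.adjugate().rank() == 1

# rank(A) <= n - 2  =>  adj(A) = 0.
Z = sp.Matrix([[1, 2, 3], [2, 4, 6], [3, 6, 9]])
assert Z.rank() == 1
assert Z.adjugate() == sp.zeros(3, 3)
```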


Column substitution and Cramer's rule

Partition \mathbf{A} into column vectors:
:\mathbf{A} = \begin{pmatrix}\mathbf{a}_1 & \cdots & \mathbf{a}_n\end{pmatrix}.
Let \mathbf{b} be a column vector of size n. Fix 1 \le i \le n and consider the matrix formed by replacing column i of \mathbf{A} by \mathbf{b}:
:(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\ \stackrel{\text{def}}{=}\ \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{pmatrix}.
Laplace expand the determinant of this matrix along column i. The result is entry i of the product \operatorname{adj}(\mathbf{A})\mathbf{b}. Collecting these determinants for the different possible i yields an equality of column vectors
:\left(\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\right)_{i=1}^n = \operatorname{adj}(\mathbf{A})\mathbf{b}.
This formula has the following concrete consequence. Consider the linear system of equations
:\mathbf{A}\mathbf{x} = \mathbf{b}.
Assume that \mathbf{A} is non-singular. Multiplying this system on the left by \operatorname{adj}(\mathbf{A}) and dividing by the determinant yields
:\mathbf{x} = \frac{\operatorname{adj}(\mathbf{A})\mathbf{b}}{\det \mathbf{A}}.
Applying the previous formula to this situation yields Cramer's rule,
:x_i = \frac{\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})}{\det \mathbf{A}},
where x_i is the i-th entry of \mathbf{x}.
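Cramer's rule as stated above can be applied directly; here is a SymPy sketch on a small example system (the system is ours, chosen to have an integer solution):

```python
import sympy as sp

A = sp.Matrix([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b = sp.Matrix([8, -11, -3])
detA = A.det()
assert detA != 0  # Cramer's rule needs a non-singular matrix

x = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                  # replace column i of A by b
    x.append(Ai.det() / detA)     # x_i = det(A <-_i b) / det A

assert sp.Matrix(x) == sp.Matrix([2, 3, -1])  # the solution
assert A * sp.Matrix(x) == b
```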


Characteristic polynomial

Let the characteristic polynomial of \mathbf{A} be
:p(s) = \det(s\mathbf{I} - \mathbf{A}) = \sum_{i=0}^n p_i s^i \in R[s].
The first divided difference of p is a symmetric polynomial of degree n − 1,
:\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1} s^j t^k \in R[s, t].
Multiply s\mathbf{I} - \mathbf{A} by its adjugate. Since p(\mathbf{A}) = \mathbf{0} by the Cayley–Hamilton theorem, some elementary manipulations reveal
:\operatorname{adj}(s\mathbf{I} - \mathbf{A}) = \Delta p(s\mathbf{I}, \mathbf{A}).
In particular, the resolvent of \mathbf{A} is defined to be
:R(z; \mathbf{A}) = (z\mathbf{I} - \mathbf{A})^{-1},
and by the above formula, this is equal to
:R(z; \mathbf{A}) = \frac{\operatorname{adj}(z\mathbf{I} - \mathbf{A})}{p(z)}.
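The divided-difference identity can be verified symbolically for a small example; the SymPy sketch below substitutes s\mathbf{I} and \mathbf{A} into the two-variable divided difference monomial by monomial (the 2 × 2 matrix is an arbitrary example):

```python
import sympy as sp

s, t = sp.symbols('s t')
A = sp.Matrix([[1, 2], [3, 4]])

p = (s * sp.eye(2) - A).det()                            # p(s) = det(sI - A)
dp = sp.expand(sp.cancel((p - p.subs(s, t)) / (s - t)))  # Delta p(s, t)

# Evaluate Delta p(s I, A): each monomial s^j t^k becomes s^j A^k.
Dp = sp.zeros(2, 2)
for (j, k), coeff in sp.Poly(dp, s, t).terms():
    Dp += coeff * s**j * A**k

# adj(sI - A) = Delta p(sI, A)
assert sp.expand(Dp - (s * sp.eye(2) - A).adjugate()) == sp.zeros(2, 2)
```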


Jacobi's formula

The adjugate also appears in Jacobi's formula for the derivative of the determinant. If \mathbf{A}(t) is continuously differentiable, then
:\frac{d(\det \mathbf{A})}{dt}(t) = \operatorname{tr}\left(\operatorname{adj}(\mathbf{A}(t))\mathbf{A}'(t)\right).
It follows that the total derivative of the determinant is the transpose of the adjugate:
:d(\det \mathbf{A})_{\mathbf{A}_0} = \operatorname{adj}(\mathbf{A}_0)^\mathsf{T}.
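Jacobi's formula is straightforward to confirm symbolically for a one-parameter family of matrices; a SymPy sketch (the family A(t) is an arbitrary example):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[sp.cos(t), t],
               [t**2, sp.exp(t)]])   # a smooth matrix-valued function of t

lhs = sp.diff(A.det(), t)                        # d/dt det A(t)
rhs = (A.adjugate() * sp.diff(A, t)).trace()     # tr(adj(A(t)) A'(t))
assert sp.simplify(lhs - rhs) == 0
```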


Cayley–Hamilton formula

Let p_\mathbf{A}(t) be the characteristic polynomial of \mathbf{A}. The Cayley–Hamilton theorem states that
:p_\mathbf{A}(\mathbf{A}) = \mathbf{0}.
Separating the constant term and multiplying the equation by \mathbf{A}^{-1} gives an expression for the adjugate that depends only on \mathbf{A} and the coefficients of p_\mathbf{A}(t). These coefficients can be explicitly represented in terms of traces of powers of \mathbf{A} using complete exponential Bell polynomials. The resulting formula is
:\operatorname{adj}(\mathbf{A}) = \sum_{s=0}^{n-1} \mathbf{A}^s \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{\ell=1}^{n-1} \frac{(-1)^{k_\ell + 1}}{\ell^{k_\ell} k_\ell!}\operatorname{tr}(\mathbf{A}^\ell)^{k_\ell},
where n is the dimension of \mathbf{A}, and the sum is taken over s and all sequences of k_\ell \ge 0 satisfying the linear Diophantine equation
:s + \sum_{\ell=1}^{n-1} \ell k_\ell = n - 1.
For the 2 × 2 case, this gives
:\operatorname{adj}(\mathbf{A}) = \mathbf{I}_2(\operatorname{tr}\mathbf{A}) - \mathbf{A}.
For the 3 × 3 case, this gives
:\operatorname{adj}(\mathbf{A}) = \frac{1}{2}\mathbf{I}_3\!\left((\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right) - \mathbf{A}(\operatorname{tr}\mathbf{A}) + \mathbf{A}^2.
For the 4 × 4 case, this gives
:\operatorname{adj}(\mathbf{A}) = \frac{1}{6}\mathbf{I}_4\!\left((\operatorname{tr}\mathbf{A})^3 - 3\operatorname{tr}\mathbf{A}\operatorname{tr}\mathbf{A}^2 + 2\operatorname{tr}\mathbf{A}^3\right) - \frac{1}{2}\mathbf{A}\!\left((\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right) + \mathbf{A}^2(\operatorname{tr}\mathbf{A}) - \mathbf{A}^3.
The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of \mathbf{A}.
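For instance, the 3 × 3 trace formula can be checked on a concrete matrix (an arbitrary integer example) with SymPy:

```python
import sympy as sp

A = sp.Matrix([[1, 0, 2], [3, 1, 0], [0, 4, 1]])

# adj(A) = 1/2 ((tr A)^2 - tr A^2) I - A (tr A) + A^2   (3 x 3 case)
coeff = sp.Rational(1, 2) * (A.trace()**2 - (A**2).trace())
assert A.adjugate() == coeff * sp.eye(3) - A * A.trace() + A**2
```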


Relation to exterior algebras

The adjugate can be viewed in abstract terms using exterior algebras. Let V be an n-dimensional vector space. The exterior product defines a bilinear pairing
:V \times \wedge^{n-1} V \to \wedge^n V.
Abstractly, \wedge^n V is isomorphic to \mathbf{R}, and under any such isomorphism the exterior product is a perfect pairing. Therefore, it yields an isomorphism
:\phi \colon V\ \xrightarrow{\cong}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V).
Explicitly, this pairing sends \mathbf{v} \in V to \phi_\mathbf{v}, where
:\phi_\mathbf{v}(\alpha) = \mathbf{v} \wedge \alpha.
Suppose that T \colon V \to V is a linear transformation. Pullback by the (n − 1)st exterior power of T induces a morphism of \operatorname{Hom} spaces. The adjugate of T is the composite
:V\ \xrightarrow{\phi}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{(\wedge^{n-1} T)^*}\ \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V)\ \xrightarrow{\phi^{-1}}\ V.
If V = \mathbf{R}^n is endowed with its canonical basis \mathbf{e}_1, \ldots, \mathbf{e}_n, and if the matrix of T in this basis is \mathbf{A}, then the adjugate of T is the adjugate of \mathbf{A}. To see why, give \wedge^{n-1} \mathbf{R}^n the basis
:\{\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n\}_{k=1}^n.
Fix a basis vector \mathbf{e}_i of \mathbf{R}^n. The image of \mathbf{e}_i under \phi is determined by where it sends basis vectors:
:\phi_{\mathbf{e}_i}(\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n) = \begin{cases} (-1)^{i-1} \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n, & \text{if}\ k = i, \\ 0 & \text{otherwise.} \end{cases}
On basis vectors, the (n − 1)st exterior power of T is
:\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_j \wedge \dots \wedge \mathbf{e}_n \mapsto \sum_{k=1}^n \mathbf{M}_{kj} \mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_k \wedge \dots \wedge \mathbf{e}_n,
where \mathbf{M}_{kj} is the (k, j)-minor of \mathbf{A}. Each of these terms maps to zero under \phi_{\mathbf{e}_i} except the k = i term. Therefore, the pullback of \phi_{\mathbf{e}_i} is the linear transformation for which
:\mathbf{e}_1 \wedge \dots \wedge \hat{\mathbf{e}}_j \wedge \dots \wedge \mathbf{e}_n \mapsto (-1)^{i-1} \mathbf{M}_{ij} \mathbf{e}_1 \wedge \dots \wedge \mathbf{e}_n,
that is, it equals
:\sum_{j=1}^n (-1)^{i+j} \mathbf{M}_{ij}\phi_{\mathbf{e}_j}.
Applying the inverse of \phi shows that the adjugate of T is the linear transformation for which
:\mathbf{e}_i \mapsto \sum_{j=1}^n (-1)^{i+j}\mathbf{M}_{ij}\mathbf{e}_j.
Consequently, its matrix representation is the adjugate of \mathbf{A}.
If V is endowed with an inner product and a volume form, then the map \phi can be decomposed further. In this case, \phi can be understood as the composite of the Hodge star operator and dualization. Specifically, if \omega is the volume form, then it, together with the inner product, determines an isomorphism
:\omega^\vee \colon \wedge^n V \to \mathbf{R}.
This induces an isomorphism
:\operatorname{Hom}(\wedge^{n-1} \mathbf{R}^n, \wedge^n \mathbf{R}^n) \cong \wedge^{n-1} (\mathbf{R}^n)^\vee.
A vector \mathbf{v} in \mathbf{R}^n corresponds to the linear functional
:(\alpha \mapsto \omega^\vee(\mathbf{v} \wedge \alpha)) \in \wedge^{n-1} (\mathbf{R}^n)^\vee.
By the definition of the Hodge star operator, this linear functional is dual to *\mathbf{v}. That is, \omega^\vee \circ \phi equals \mathbf{v} \mapsto {*\mathbf{v}}^\vee.


Higher adjugates

Let \mathbf{A} be an n × n matrix, and fix r \ge 0. The r-th higher adjugate of \mathbf{A} is an \binom{n}{r} \times \binom{n}{r} matrix, denoted \operatorname{adj}_r(\mathbf{A}), whose entries are indexed by size-r subsets I and J of \{1, \ldots, n\}. Let I^c and J^c denote the complements of I and J, respectively. Also let \mathbf{A}_{I^c, J^c} denote the submatrix of \mathbf{A} containing those rows and columns whose indices are in I^c and J^c, respectively. Then the (I, J) entry of \operatorname{adj}_r(\mathbf{A}) is
:(-1)^{\sigma(I)+\sigma(J)}\det \mathbf{A}_{J^c, I^c},
where \sigma(I) and \sigma(J) are the sum of the elements of I and J, respectively. Basic properties of higher adjugates include:
* \operatorname{adj}_0(\mathbf{A}) = \det \mathbf{A}.
* \operatorname{adj}_1(\mathbf{A}) = \operatorname{adj}(\mathbf{A}).
* \operatorname{adj}_n(\mathbf{A}) = 1.
* \operatorname{adj}_r(\mathbf{BA}) = \operatorname{adj}_r(\mathbf{A})\operatorname{adj}_r(\mathbf{B}).
* \operatorname{adj}_r(\mathbf{A})C_r(\mathbf{A}) = C_r(\mathbf{A})\operatorname{adj}_r(\mathbf{A}) = (\det \mathbf{A})I_{\binom{n}{r}}, where C_r(\mathbf{A}) denotes the r-th compound matrix.
Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting \wedge^r V and \wedge^{n-r} V for V and \wedge^{n-1} V, respectively.
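A direct implementation of this definition is short; the SymPy sketch below (the helper name `higher_adjugate` is ours) indexes rows and columns by r-element subsets. With 0-based indices the parity of σ(I) + σ(J) shifts by 2r relative to 1-based indexing, so the sign is unchanged:

```python
import itertools
import sympy as sp

def higher_adjugate(A: sp.Matrix, r: int) -> sp.Matrix:
    """(I, J) entry: (-1)^(sigma(I) + sigma(J)) * det(A[J^c, I^c])."""
    n = A.rows
    subsets = list(itertools.combinations(range(n), r))
    out = sp.zeros(len(subsets), len(subsets))
    for a, I in enumerate(subsets):
        for b, J in enumerate(subsets):
            Ic = [k for k in range(n) if k not in I]
            Jc = [k for k in range(n) if k not in J]
            out[a, b] = (-1) ** (sum(I) + sum(J)) * A[Jc, Ic].det()
    return out

A = sp.Matrix([[2, 1, 0], [1, 3, 1], [0, 1, 4]])
assert higher_adjugate(A, 0) == sp.Matrix([[A.det()]])  # adj_0(A) = det A
assert higher_adjugate(A, 1) == A.adjugate()            # adj_1(A) = adj A
```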


Iterated adjugates

Iteratively taking the adjugate of an invertible n × n matrix \mathbf{A} k times yields
:\overbrace{\operatorname{adj}\cdots\operatorname{adj}}^k(\mathbf{A}) = \det(\mathbf{A})^{\frac{(n-1)^k - (-1)^k}{n}}\mathbf{A}^{(-1)^k},
:\det(\overbrace{\operatorname{adj}\cdots\operatorname{adj}}^k(\mathbf{A})) = \det(\mathbf{A})^{(n-1)^k}.
For example,
:\operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A})^{n-2}\mathbf{A}.
:\det(\operatorname{adj}(\operatorname{adj}(\mathbf{A}))) = \det(\mathbf{A})^{(n-1)^2}.
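Both iterated-adjugate formulas can be confirmed for k = 2 on a concrete matrix (an arbitrary invertible example) with SymPy:

```python
import sympy as sp

A = sp.Matrix([[1, 2, 0], [0, 3, 1], [2, 0, 1]])
n, d = 3, A.det()
assert d == 7  # non-zero, so A is invertible

adj2 = A.adjugate().adjugate()
assert adj2 == d**(n - 2) * A             # adj(adj(A)) = det(A)^(n-2) A
assert adj2.det() == d**((n - 1)**2)      # det(adj(adj(A))) = det(A)^((n-1)^2)
```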


See also

* Cayley–Hamilton theorem
* Cramer's rule
* Trace diagram
* Jacobi's formula
* Faddeev–LeVerrier algorithm


References


Bibliography

* Roger A. Horn and Charles R. Johnson (2013), ''Matrix Analysis'', Second Edition. Cambridge University Press.
* Roger A. Horn and Charles R. Johnson (1991), ''Topics in Matrix Analysis''. Cambridge University Press.


External links


* Matrix Reference Manual
* Online matrix calculator (determinant, trace, inverse, adjoint, transpose)
* Compute Adjugate matrix up to order 8