In linear algebra, the adjugate or classical adjoint of a square matrix \mathbf{A} is the transpose of its cofactor matrix and is denoted by \operatorname{adj}(\mathbf{A}). It is also occasionally known as the adjunct matrix, or "adjoint", though the latter term today normally refers to a different concept, the adjoint operator, which for a matrix is the conjugate transpose of the matrix.
The product of a matrix with its adjugate gives a diagonal matrix (entries not on the main diagonal are zero) whose diagonal entries are the determinant of the original matrix:
:\mathbf{A} \operatorname{adj}(\mathbf{A}) = \det(\mathbf{A})\, \mathbf{I},
where \mathbf{I} is the identity matrix of the same size as \mathbf{A}. Consequently, the multiplicative inverse of an invertible matrix can be found by dividing its adjugate by its determinant.
Definition
The adjugate of \mathbf{A} is the transpose of the cofactor matrix \mathbf{C} of \mathbf{A},
:\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.
In more detail, suppose R is a unital commutative ring and \mathbf{A} is an n \times n matrix with entries from R. The (i, j)-''minor'' of \mathbf{A}, denoted M_{ij}, is the determinant of the (n-1) \times (n-1) matrix that results from deleting row i and column j of \mathbf{A}. The cofactor matrix of \mathbf{A} is the n \times n matrix \mathbf{C} whose (i, j) entry is the (i, j) ''cofactor'' of \mathbf{A}, which is the (i, j)-minor times a sign factor:
:\mathbf{C} = \left((-1)^{i+j} M_{ij}\right)_{1 \le i, j \le n}.
The adjugate of \mathbf{A} is the transpose of \mathbf{C}, that is, the n \times n matrix whose (i, j) entry is the (j, i) cofactor of \mathbf{A},
:\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T} = \left((-1)^{i+j} M_{ji}\right)_{1 \le i, j \le n}.
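The definition translates directly into a short computation. A minimal NumPy sketch (the helper name `adjugate` is ours; using `numpy.linalg.det` restricts this to real or complex entries rather than a general commutative ring):

```python
import numpy as np

def adjugate(A):
    """Adjugate as the transpose of the cofactor matrix.

    Entry (i, j) of the cofactor matrix C is (-1)**(i+j) times the
    (i, j)-minor: the determinant of A with row i and column j deleted.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T  # adj(A) = C^T

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# For this matrix adj(A) = [[4, -2], [-3, 1]], and A @ adj(A) = det(A) I.
assert np.allclose(adjugate(A), [[4.0, -2.0], [-3.0, 1.0]])
assert np.allclose(A @ adjugate(A), np.linalg.det(A) * np.eye(2))
```

This O(n^4)-ish double loop is for exposition only; for numerical work one would compute det(A) A^{-1} when A is invertible.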
Important consequence
The adjugate is defined so that the product of \mathbf{A} with its adjugate yields a diagonal matrix whose diagonal entries are the determinant \det(\mathbf{A}). That is,
:\mathbf{A} \operatorname{adj}(\mathbf{A}) = \operatorname{adj}(\mathbf{A})\, \mathbf{A} = \det(\mathbf{A})\, \mathbf{I},
where \mathbf{I} is the identity matrix. This is a consequence of the Laplace expansion of the determinant.
The above formula implies one of the fundamental results in matrix algebra, that \mathbf{A} is invertible if and only if \det(\mathbf{A}) is an invertible element of R. When this holds, the equation above yields
:\operatorname{adj}(\mathbf{A}) = \det(\mathbf{A})\, \mathbf{A}^{-1}, \qquad \mathbf{A}^{-1} = \det(\mathbf{A})^{-1} \operatorname{adj}(\mathbf{A}).
Examples
1 × 1 generic matrix
Since the determinant of a 0 × 0 matrix is 1, the adjugate of any 1 × 1 matrix \mathbf{A} (complex scalar) is \mathbf{I} = \begin{pmatrix} 1 \end{pmatrix}. Observe that
:\mathbf{A} \operatorname{adj}(\mathbf{A}) = \mathbf{A} = \det(\mathbf{A})\, \mathbf{I}.
2 × 2 generic matrix
The adjugate of the 2 × 2 matrix
:\mathbf{A} = \begin{pmatrix} a & b \\ c & d \end{pmatrix}
is
:\operatorname{adj}(\mathbf{A}) = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.
By direct computation,
:\mathbf{A} \operatorname{adj}(\mathbf{A}) = \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} = \det(\mathbf{A})\, \mathbf{I}.
In this case, it is also true that \det(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A}) and hence that \operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \mathbf{A}.
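These 2 × 2 identities are easy to spot-check numerically; in the sketch below, `adj2` is an ad-hoc helper implementing the closed form above:

```python
import numpy as np

def adj2(M):
    # Closed 2x2 form: swap the diagonal entries, negate the off-diagonal ones.
    (a, b), (c, d) = M
    return np.array([[d, -b], [-c, a]])

A = np.array([[3.0, 1.0], [4.0, 2.0]])   # det(A) = 2
assert np.allclose(A @ adj2(A), np.linalg.det(A) * np.eye(2))
assert np.isclose(np.linalg.det(adj2(A)), np.linalg.det(A))  # det(adj A) = det A
assert np.allclose(adj2(adj2(A)), A)                         # adj(adj A) = A
```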
3 × 3 generic matrix
Consider a 3 × 3 matrix
:\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}.
Its cofactor matrix is
:\mathbf{C} = \begin{pmatrix} +\begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} & +\begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} \\ -\begin{vmatrix} a_{12} & a_{13} \\ a_{32} & a_{33} \end{vmatrix} & +\begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} \\ +\begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23} \end{vmatrix} & -\begin{vmatrix} a_{11} & a_{13} \\ a_{21} & a_{23} \end{vmatrix} & +\begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} \end{pmatrix},
where
:\begin{vmatrix} a & b \\ c & d \end{vmatrix} = \det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.
Its adjugate is the transpose of its cofactor matrix,
:\operatorname{adj}(\mathbf{A}) = \mathbf{C}^\mathsf{T}.
3 × 3 numeric matrix
As a specific example, we have
:\operatorname{adj}\begin{pmatrix} -3 & 2 & -5 \\ -1 & 0 & -2 \\ 3 & -4 & 1 \end{pmatrix} = \begin{pmatrix} -8 & 18 & -4 \\ -5 & 12 & -1 \\ 4 & -6 & 2 \end{pmatrix}.
It is easy to check the adjugate is the inverse times the determinant, −6.
The −1 in the second row, third column of the adjugate was computed as follows. The (2,3) entry of the adjugate is the (3,2) cofactor of A. This cofactor is computed using the submatrix obtained by deleting the third row and second column of the original matrix A,
:\begin{pmatrix} -3 & -5 \\ -1 & -2 \end{pmatrix}.
The (3,2) cofactor is a sign times the determinant of this submatrix:
:(-1)^{3+2} \det\begin{pmatrix} -3 & -5 \\ -1 & -2 \end{pmatrix} = -(6 - 5) = -1,
and this is the (2,3) entry of the adjugate.
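A short NumPy sketch recomputes a 3 × 3 example of this kind from the cofactor definition (the specific matrix below is chosen for illustration):

```python
import numpy as np

A = np.array([[-3.0,  2.0, -5.0],
              [-1.0,  0.0, -2.0],
              [ 3.0, -4.0,  1.0]])

# Adjugate from the definition: transpose of the cofactor matrix.
n = A.shape[0]
C = np.empty((n, n))
for i in range(n):
    for j in range(n):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
adjA = C.T

# Expected: rows (-8, 18, -4), (-5, 12, -1), (4, -6, 2); det(A) = -6.
assert np.allclose(adjA, [[-8, 18, -4], [-5, 12, -1], [4, -6, 2]])
assert np.isclose(np.linalg.det(A), -6)
# The adjugate is the inverse times the determinant:
assert np.allclose(adjA, np.linalg.det(A) * np.linalg.inv(A))
```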
Properties
For any n \times n matrix \mathbf{A}, elementary computations show that adjugates have the following properties:
* \operatorname{adj}(\mathbf{I}) = \mathbf{I}, where \mathbf{I} is the identity matrix.
* \operatorname{adj}(\mathbf{0}) = \mathbf{0}, where \mathbf{0} is the zero matrix, except that if n = 1 then \operatorname{adj}(\mathbf{0}) = \mathbf{I}.
* \operatorname{adj}(c\mathbf{A}) = c^{n-1} \operatorname{adj}(\mathbf{A}) for any scalar c.
* \operatorname{adj}(\mathbf{A}^\mathsf{T}) = \operatorname{adj}(\mathbf{A})^\mathsf{T}.
* \det(\operatorname{adj}(\mathbf{A})) = (\det \mathbf{A})^{n-1}.
* If \mathbf{A} is invertible, then \operatorname{adj}(\mathbf{A}) = (\det \mathbf{A})\, \mathbf{A}^{-1}. It follows that:
** \operatorname{adj}(\mathbf{A}) is invertible with inverse (\det \mathbf{A})^{-1} \mathbf{A}.
** \operatorname{adj}(\mathbf{A}^{-1}) = \operatorname{adj}(\mathbf{A})^{-1}.
* \operatorname{adj}(\mathbf{A}) is entrywise polynomial in \mathbf{A}. In particular, over the real or complex numbers, the adjugate is a smooth function of the entries of \mathbf{A}.
Over the complex numbers,
* \operatorname{adj}(\overline{\mathbf{A}}) = \overline{\operatorname{adj}(\mathbf{A})}, where the bar denotes complex conjugation.
* \operatorname{adj}(\mathbf{A}^*) = \operatorname{adj}(\mathbf{A})^*, where the asterisk denotes conjugate transpose.
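Several of these properties can be verified numerically. The sketch below uses the shortcut adj(A) = det(A) A^{-1}, valid for invertible matrices (the helper name `adj` is ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))   # a generic (almost surely invertible) matrix

def adj(M):
    # Shortcut valid for invertible M: adj(M) = det(M) * M^{-1}.
    return np.linalg.det(M) * np.linalg.inv(M)

c = 2.5
assert np.allclose(adj(np.eye(n)), np.eye(n))                     # adj(I) = I
assert np.allclose(adj(c * A), c ** (n - 1) * adj(A))             # adj(cA) = c^{n-1} adj(A)
assert np.allclose(adj(A.T), adj(A).T)                            # adj(A^T) = adj(A)^T
assert np.isclose(np.linalg.det(adj(A)), np.linalg.det(A) ** (n - 1))
assert np.allclose(adj(np.linalg.inv(A)), np.linalg.inv(adj(A)))  # adj(A^{-1}) = adj(A)^{-1}
```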
Suppose that \mathbf{B} is another n \times n matrix. Then
:\operatorname{adj}(\mathbf{AB}) = \operatorname{adj}(\mathbf{B}) \operatorname{adj}(\mathbf{A}).
This can be proved in three ways. One way, valid for any commutative ring, is a direct computation using the Cauchy–Binet formula. The second way, valid for the real or complex numbers, is to first observe that for invertible matrices \mathbf{A} and \mathbf{B},
:\operatorname{adj}(\mathbf{B}) \operatorname{adj}(\mathbf{A}) = (\det \mathbf{B})\, \mathbf{B}^{-1} (\det \mathbf{A})\, \mathbf{A}^{-1} = \det(\mathbf{AB}) (\mathbf{AB})^{-1} = \operatorname{adj}(\mathbf{AB}).
Because every non-invertible matrix is the limit of invertible matrices, continuity of the adjugate then implies that the formula remains true when one of \mathbf{A} or \mathbf{B} is not invertible.
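A quick numerical check of the reversed-order product rule (again using the inverse formula, which holds for invertible matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

adj = lambda M: np.linalg.det(M) * np.linalg.inv(M)   # valid for invertible M

# Note the reversed order on the right-hand side, as with matrix inverses.
assert np.allclose(adj(A @ B), adj(B) @ adj(A))
```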
A corollary of the previous formula is that, for any non-negative integer k,
:\operatorname{adj}(\mathbf{A}^k) = \operatorname{adj}(\mathbf{A})^k.
If \mathbf{A} is invertible, then the above formula also holds for negative k.
From the identity
:(\mathbf{A} + \mathbf{B}) \operatorname{adj}(\mathbf{A} + \mathbf{B})\, \mathbf{B} = \det(\mathbf{A} + \mathbf{B})\, \mathbf{B} = \mathbf{B} \operatorname{adj}(\mathbf{A} + \mathbf{B}) (\mathbf{A} + \mathbf{B}),
we deduce
:\mathbf{A} \operatorname{adj}(\mathbf{A} + \mathbf{B})\, \mathbf{B} = \mathbf{B} \operatorname{adj}(\mathbf{A} + \mathbf{B})\, \mathbf{A}.
Suppose that \mathbf{B} commutes with \mathbf{A}. Multiplying the identity \mathbf{AB} = \mathbf{BA} on the left and right by \operatorname{adj}(\mathbf{A}) proves that
:\det(\mathbf{A}) \operatorname{adj}(\mathbf{A})\, \mathbf{B} = \det(\mathbf{A})\, \mathbf{B} \operatorname{adj}(\mathbf{A}).
If \mathbf{A} is invertible, this implies that \operatorname{adj}(\mathbf{A}) also commutes with \mathbf{B}. Over the real or complex numbers, continuity implies that \operatorname{adj}(\mathbf{A}) commutes with \mathbf{B} even when \mathbf{A} is not invertible.
Finally, there is a more general proof than the second proof, which only requires that an ''n'' × ''n'' matrix has entries over a field with at least 2''n'' + 1 elements (e.g. a 5 × 5 matrix over the integers modulo 11). \det(\mathbf{A} + t\mathbf{I}) is a polynomial in ''t'' with degree at most ''n'', so it has at most ''n'' roots. Note that the ''ij''-th entry of \operatorname{adj}((\mathbf{A} + t\mathbf{I})\mathbf{B}) is a polynomial of at most order ''n'', and likewise for \operatorname{adj}(\mathbf{A} + t\mathbf{I}) \operatorname{adj}(\mathbf{B}). These two polynomials at the ''ij''-th entry agree on at least ''n'' + 1 points, as we have at least ''n'' + 1 elements of the field where \mathbf{A} + t\mathbf{I} is invertible, and we have proven the identity for invertible matrices. Polynomials of degree ''n'' which agree on ''n'' + 1 points must be identical: subtracting them gives a polynomial of degree at most ''n'' with ''n'' + 1 roots, which must be identically zero. As the two polynomials are identical, they take the same value for every value of ''t''. Thus, they take the same value when ''t'' = 0.
Using the above properties and other elementary computations, it is straightforward to show that if \mathbf{A} has one of the following properties, then \operatorname{adj}(\mathbf{A}) does as well:
* Upper triangular,
* Lower triangular,
* Diagonal,
* Orthogonal,
* Unitary,
* Symmetric,
* Hermitian,
* Skew-symmetric,
* Skew-Hermitian,
* Normal.
If \mathbf{A} is invertible, then, as noted above, there is a formula for \operatorname{adj}(\mathbf{A}) in terms of the determinant and inverse of \mathbf{A}. When \mathbf{A} is not invertible, the adjugate satisfies different but closely related formulas.
* If \operatorname{rank}(\mathbf{A}) \le n - 2, then \operatorname{adj}(\mathbf{A}) = \mathbf{0}.
* If \operatorname{rank}(\mathbf{A}) = n - 1, then \operatorname{rank}(\operatorname{adj}(\mathbf{A})) = 1. (Some minor is non-zero, so \operatorname{adj}(\mathbf{A}) is non-zero and hence has rank at least one; the identity \operatorname{adj}(\mathbf{A})\, \mathbf{A} = \mathbf{0} implies that the dimension of the nullspace of \operatorname{adj}(\mathbf{A}) is at least n - 1, so its rank is at most one.) It follows that \operatorname{adj}(\mathbf{A}) = \alpha \mathbf{x} \mathbf{y}^\mathsf{T}, where \alpha is a scalar and \mathbf{x} and \mathbf{y} are vectors such that \mathbf{A}\mathbf{x} = \mathbf{0} and \mathbf{A}^\mathsf{T}\mathbf{y} = \mathbf{0}.
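These rank statements can be checked with a cofactor-based adjugate (the helper `adjugate` and the test matrices are ours):

```python
import numpy as np

def adjugate(A):
    # Transpose of the cofactor matrix, computed entry by entry.
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

# rank(A1) = 1 <= n - 2 = 2: the adjugate vanishes.
u = np.array([1.0, 2.0, 3.0, 4.0])
A1 = np.outer(u, u)
assert np.allclose(adjugate(A1), 0)

# rank(A2) = n - 1 = 3: the adjugate has rank exactly 1.
A2 = np.diag([1.0, 2.0, 3.0, 0.0])
adjA2 = adjugate(A2)
assert np.linalg.matrix_rank(adjA2) == 1
assert np.allclose(adjA2 @ A2, 0)   # adj(A) A = det(A) I = 0 here
```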
Column substitution and Cramer's rule
Partition \mathbf{A} into column vectors:
:\mathbf{A} = \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_n \end{pmatrix}.
Let \mathbf{b} be a column vector of size n. Fix 1 \le i \le n and consider the matrix formed by replacing column i of \mathbf{A} by \mathbf{b}:
:(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\ \stackrel{\text{def}}{=}\ \begin{pmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_{i-1} & \mathbf{b} & \mathbf{a}_{i+1} & \cdots & \mathbf{a}_n \end{pmatrix}.
Laplace expand the determinant of this matrix along column i. The result is entry i of the product \operatorname{adj}(\mathbf{A})\mathbf{b}. Collecting these determinants for the different possible i yields an equality of column vectors
:\left(\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})\right)_{i=1}^n = \operatorname{adj}(\mathbf{A})\mathbf{b}.
This formula has the following concrete consequence. Consider the linear system of equations
:\mathbf{A}\mathbf{x} = \mathbf{b}.
Assume that \mathbf{A} is non-singular. Multiplying this system on the left by \operatorname{adj}(\mathbf{A}) and dividing by the determinant yields
:\mathbf{x} = \frac{\operatorname{adj}(\mathbf{A})\mathbf{b}}{\det \mathbf{A}}.
Applying the previous formula to this situation yields Cramer's rule,
:x_i = \frac{\det(\mathbf{A} \stackrel{i}{\leftarrow} \mathbf{b})}{\det \mathbf{A}},
where x_i is the i-th entry of \mathbf{x}.
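Cramer's rule as stated is a direct loop over column substitutions. A sketch (the particular system here is an illustrative example):

```python
import numpy as np

A = np.array([[ 2.0,  1.0, -1.0],
              [-3.0, -1.0,  2.0],
              [-2.0,  1.0,  2.0]])
b = np.array([8.0, -11.0, -3.0])

detA = np.linalg.det(A)
x = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                        # replace column i of A by b
    x[i] = np.linalg.det(Ai) / detA     # Cramer's rule

assert np.allclose(A @ x, b)            # x solves the system
assert np.allclose(x, [2.0, 3.0, -1.0])
```

Cramer's rule costs one determinant per unknown, so it is mainly of theoretical interest; in practice one uses a factorization-based solver.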
Characteristic polynomial
Let the characteristic polynomial of \mathbf{A} be
:p(t) = \det(t\mathbf{I} - \mathbf{A}) = \sum_{i=0}^n p_i t^i \in R[t].
The first divided difference of p is a symmetric polynomial of degree n - 1,
:\Delta p(s, t) = \frac{p(s) - p(t)}{s - t} = \sum_{0 \le j + k < n} p_{j+k+1} s^j t^k.
Multiply s\mathbf{I} - \mathbf{A} by its adjugate. Since p(\mathbf{A}) = \mathbf{0} by the Cayley–Hamilton theorem, some elementary manipulations reveal
:\operatorname{adj}(s\mathbf{I} - \mathbf{A}) = \Delta p(s\mathbf{I}, \mathbf{A}).
In particular, the resolvent of \mathbf{A} is defined to be
:R(z; \mathbf{A}) = (z\mathbf{I} - \mathbf{A})^{-1},
and by the above formula, this is equal to
:R(z; \mathbf{A}) = \frac{\Delta p(z\mathbf{I}, \mathbf{A})}{p(z)}.
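The divided-difference formula for adj(sI − A) can be verified numerically, reading the coefficients p_i from `numpy.poly` (which returns the characteristic polynomial of det(tI − A), highest power first):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))

# Coefficients of p(t) = det(tI - A), highest power first: coeffs[n - i] = p_i.
coeffs = np.poly(A)
p = lambda i: coeffs[n - i]

s = 1.7
# Delta p(sI, A) = sum over j + k < n of p_{j+k+1} s^j A^k.
D = sum(p(j + k + 1) * s ** j * np.linalg.matrix_power(A, k)
        for j in range(n) for k in range(n) if j + k < n)

M = s * np.eye(n) - A
assert np.allclose(D, np.linalg.det(M) * np.linalg.inv(M))   # adj(sI - A)
```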
Jacobi's formula
The adjugate also appears in Jacobi's formula for the derivative of the determinant. If \mathbf{A}(t) is continuously differentiable, then
:\frac{d(\det \mathbf{A})}{dt}(t) = \operatorname{tr}\left(\operatorname{adj}(\mathbf{A}(t))\, \mathbf{A}'(t)\right).
It follows that the total derivative of the determinant is the transpose of the adjugate:
:d(\det \mathbf{A})_{\mathbf{A}_0} = \operatorname{adj}(\mathbf{A}_0)^\mathsf{T}.
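Jacobi's formula can be checked against a finite-difference derivative; in this sketch A(t) = A0 + tB is an arbitrary smooth path of matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
A0 = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

adj = lambda M: np.linalg.det(M) * np.linalg.inv(M)   # valid for invertible M

# A(t) = A0 + t B, so A'(t) = B; compare Jacobi with a central difference.
t, h = 0.3, 1e-6
lhs = (np.linalg.det(A0 + (t + h) * B) - np.linalg.det(A0 + (t - h) * B)) / (2 * h)
rhs = np.trace(adj(A0 + t * B) @ B)
assert np.isclose(lhs, rhs, rtol=1e-4)
```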
Cayley–Hamilton formula
Let p_{\mathbf{A}}(t) be the characteristic polynomial of \mathbf{A}. The Cayley–Hamilton theorem states that
:p_{\mathbf{A}}(\mathbf{A}) = \mathbf{0}.
Separating the constant term and multiplying the equation by \operatorname{adj}(\mathbf{A}) gives an expression for the adjugate that depends only on \mathbf{A} and the coefficients of p_{\mathbf{A}}(t). These coefficients can be explicitly represented in terms of traces of powers of \mathbf{A} using complete exponential Bell polynomials. The resulting formula is
:\operatorname{adj}(\mathbf{A}) = \sum_{s=0}^{n-1} \mathbf{A}^s \sum_{k_1, k_2, \ldots, k_{n-1}} \prod_{l=1}^{n-1} \frac{(-1)^{k_l + 1}}{l^{k_l} k_l!} \operatorname{tr}(\mathbf{A}^l)^{k_l},
where n is the dimension of \mathbf{A}, and the sum is taken over s and all sequences of k_l \ge 0 satisfying the linear Diophantine equation
:s + \sum_{l=1}^{n-1} l k_l = n - 1.
For the 2 × 2 case, this gives
:\operatorname{adj}(\mathbf{A}) = \mathbf{I}_2 \operatorname{tr}(\mathbf{A}) - \mathbf{A}.
For the 3 × 3 case, this gives
:\operatorname{adj}(\mathbf{A}) = \frac{1}{2} \mathbf{I}_3 \left((\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right) - \mathbf{A} \operatorname{tr}\mathbf{A} + \mathbf{A}^2.
For the 4 × 4 case, this gives
:\operatorname{adj}(\mathbf{A}) = \frac{1}{6} \mathbf{I}_4 \left((\operatorname{tr}\mathbf{A})^3 - 3 \operatorname{tr}\mathbf{A} \operatorname{tr}\mathbf{A}^2 + 2 \operatorname{tr}\mathbf{A}^3\right) - \frac{1}{2} \mathbf{A} \left((\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}\mathbf{A}^2\right) + \mathbf{A}^2 \operatorname{tr}\mathbf{A} - \mathbf{A}^3.
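The 2 × 2 and 3 × 3 trace formulas can be verified directly on generic matrices:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3))
tr = np.trace

adjA = np.linalg.det(A) * np.linalg.inv(A)
# 3x3: adj(A) = (1/2) I ((tr A)^2 - tr(A^2)) - A tr(A) + A^2
formula3 = (0.5 * np.eye(3) * (tr(A) ** 2 - tr(A @ A))
            - A * tr(A) + A @ A)
assert np.allclose(adjA, formula3)

B = rng.standard_normal((2, 2))
# 2x2: adj(B) = I tr(B) - B
assert np.allclose(np.linalg.det(B) * np.linalg.inv(B), np.eye(2) * tr(B) - B)
```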
The same formula follows directly from the terminating step of the Faddeev–LeVerrier algorithm, which efficiently determines the characteristic polynomial of \mathbf{A}.
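The Faddeev–LeVerrier recursion itself is only a few lines. A sketch of the standard form M_k = A M_{k−1} + c_{n−k+1} I with c_{n−k} = −tr(A M_k)/k, whose terminating step yields adj(A) = (−1)^{n−1} M_n:

```python
import numpy as np

def faddeev_leverrier(A):
    """Faddeev-LeVerrier recursion.

    Returns the characteristic polynomial coefficients [1, c_{n-1}, ..., c_0]
    of det(tI - A), and adj(A) = (-1)^(n-1) M_n from the terminating step.
    """
    n = A.shape[0]
    I = np.eye(n)
    M = np.zeros((n, n))          # M_0 = 0
    c = 1.0                       # c_n = 1
    coeffs = [c]
    for k in range(1, n + 1):
        M = A @ M + c * I         # M_k
        c = -np.trace(A @ M) / k  # c_{n-k}
        coeffs.append(c)
    return np.array(coeffs), (-1) ** (n - 1) * M

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4))
coeffs, adjA = faddeev_leverrier(A)
assert np.allclose(coeffs, np.poly(A))                         # same char. polynomial
assert np.allclose(adjA, np.linalg.det(A) * np.linalg.inv(A))  # terminating step
```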
Relation to exterior algebras
The adjugate can be viewed in abstract terms using exterior algebras. Let V be an n-dimensional vector space. The exterior product defines a bilinear pairing
:V \times \wedge^{n-1} V \to \wedge^n V.
Abstractly, \wedge^n V is isomorphic to \mathbf{R}, and under any such isomorphism the exterior product is a perfect pairing. Therefore, it yields an isomorphism
:\phi : V \xrightarrow{\ \cong\ } \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V).
Explicitly, this pairing sends \mathbf{v} \in V to \phi_{\mathbf{v}}, where
:\phi_{\mathbf{v}}(\alpha) = \mathbf{v} \wedge \alpha.
Suppose that T : V \to V is a linear transformation. Pullback by the (n-1)st exterior power of T induces a morphism of \operatorname{Hom} spaces. The adjugate of T is the composite
:V \xrightarrow{\ \phi\ } \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V) \xrightarrow{\ (\wedge^{n-1} T)^*\ } \operatorname{Hom}(\wedge^{n-1} V, \wedge^n V) \xrightarrow{\ \phi^{-1}\ } V.
If V = \mathbf{R}^n is endowed with its canonical basis \mathbf{e}_1, \ldots, \mathbf{e}_n, and if the matrix of T in this basis is \mathbf{A}, then the adjugate of T is the adjugate of \mathbf{A}. To see why, give \wedge^{n-1} \mathbf{R}^n the basis
:\{\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_k \wedge \cdots \wedge \mathbf{e}_n\}_{k=1}^n,
where \hat{\mathbf{e}}_k means that the factor \mathbf{e}_k is omitted. Fix a basis vector \mathbf{e}_i of \mathbf{R}^n. The image of \mathbf{e}_i under \phi is determined by where it sends basis vectors:
:\phi_{\mathbf{e}_i}(\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_k \wedge \cdots \wedge \mathbf{e}_n) = \begin{cases} (-1)^{i-1} \mathbf{e}_1 \wedge \cdots \wedge \mathbf{e}_n, & \text{if } k = i, \\ 0, & \text{otherwise.} \end{cases}
On basis vectors, the (n-1)st exterior power of T is
:\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_k \wedge \cdots \wedge \mathbf{e}_n \mapsto \sum_{j=1}^n M_{jk}\, \mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_j \wedge \cdots \wedge \mathbf{e}_n,
where M_{jk} is the (j, k)-minor of \mathbf{A}. Each of these terms maps to zero under \phi_{\mathbf{e}_i} except the j = i term. Therefore, the pullback of \phi_{\mathbf{e}_i} is the linear transformation for which
:\mathbf{e}_1 \wedge \cdots \wedge \hat{\mathbf{e}}_k \wedge \cdots \wedge \mathbf{e}_n \mapsto (-1)^{i-1} M_{ik}\, \mathbf{e}_1 \wedge \cdots \wedge \mathbf{e}_n,
that is, it equals
:\phi_{\mathbf{w}}, \qquad \mathbf{w} = \sum_{k=1}^n (-1)^{i+k} M_{ik}\, \mathbf{e}_k.
Applying the inverse of \phi shows that the adjugate of T is the linear transformation for which
:\mathbf{e}_i \mapsto \sum_{k=1}^n (-1)^{i+k} M_{ik}\, \mathbf{e}_k.
Consequently, its matrix representation is the adjugate of \mathbf{A}.
If V is endowed with an inner product and a volume form, then the map \phi can be decomposed further. In this case, \phi can be understood as the composite of the Hodge star operator and dualization. Specifically, if \omega is the volume form, then it, together with the inner product, determines an isomorphism
:\omega^\vee : \wedge^n V \to \mathbf{R}.
This induces an isomorphism
:\operatorname{Hom}(\wedge^{n-1} \mathbf{R}^n, \wedge^n \mathbf{R}^n) \cong \wedge^{n-1} (\mathbf{R}^n)^\vee.
A vector \mathbf{v} in \mathbf{R}^n corresponds to the linear functional
:\alpha \mapsto \omega^\vee(\mathbf{v} \wedge \alpha).
By the definition of the Hodge star operator, this linear functional is dual to *\mathbf{v}. That is, \omega^\vee \circ \phi equals \mathbf{v} \mapsto *\mathbf{v}^\vee.
Higher adjugates
Let \mathbf{A} be an n \times n matrix, and fix r \ge 0. The rth higher adjugate of \mathbf{A} is an \binom{n}{r} \times \binom{n}{r} matrix, denoted \operatorname{adj}_r \mathbf{A}, whose entries are indexed by size-r subsets I and J of \{1, \ldots, n\}. Let I^c and J^c denote the complements of I and J, respectively. Also let \mathbf{A}_{I^c, J^c} denote the submatrix of \mathbf{A} containing those rows and columns whose indices are in I^c and J^c, respectively. Then the (I, J) entry of \operatorname{adj}_r \mathbf{A} is
:(-1)^{\sigma(I) + \sigma(J)} \det \mathbf{A}_{J^c, I^c},
where \sigma(I) and \sigma(J) are the sum of the elements of I and J, respectively.
Basic properties of higher adjugates include:
* \operatorname{adj}_0(\mathbf{A}) = \det \mathbf{A}.
* \operatorname{adj}_1(\mathbf{A}) = \operatorname{adj} \mathbf{A}.
* \operatorname{adj}_n(\mathbf{A}) = 1.
* \operatorname{adj}_r(\mathbf{B}\mathbf{A}) = \operatorname{adj}_r(\mathbf{A}) \operatorname{adj}_r(\mathbf{B}).
* \operatorname{adj}_r(\mathbf{A})\, C_r(\mathbf{A}) = C_r(\mathbf{A}) \operatorname{adj}_r(\mathbf{A}) = (\det \mathbf{A})\, \mathbf{I}_{\binom{n}{r}}, where C_r(\mathbf{A}) denotes the rth compound matrix.
Higher adjugates may be defined in abstract algebraic terms in a similar fashion to the usual adjugate, substituting \wedge^r V and \wedge^{n-r} V for V and \wedge^{n-1} V, respectively.
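The subset-indexed definition and the compound-matrix identity can be checked by brute force over all size-r subsets. In this sketch the helper names are ours, subsets are taken in lexicographic order, and σ sums 1-based indices:

```python
import numpy as np
from itertools import combinations
from math import comb

def higher_adjugate(A, r):
    """adj_r(A): entry (I, J) is (-1)^(sigma(I)+sigma(J)) det A[J^c, I^c]."""
    n = A.shape[0]
    subsets = list(combinations(range(n), r))
    out = np.empty((len(subsets), len(subsets)))
    for a, I in enumerate(subsets):
        for b, J in enumerate(subsets):
            Ic = [i for i in range(n) if i not in I]
            Jc = [j for j in range(n) if j not in J]
            sign = (-1) ** (sum(i + 1 for i in I) + sum(j + 1 for j in J))
            out[a, b] = sign * np.linalg.det(A[np.ix_(Jc, Ic)])
    return out

def compound(A, r):
    """rth compound matrix C_r(A): entry (I, J) is det A[I, J]."""
    n = A.shape[0]
    subsets = list(combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(I, J)]) for J in subsets]
                     for I in subsets])

rng = np.random.default_rng(6)
n, r = 4, 2
A = rng.standard_normal((n, n))
# adj_r(A) C_r(A) = det(A) I
assert np.allclose(higher_adjugate(A, r) @ compound(A, r),
                   np.linalg.det(A) * np.eye(comb(n, r)))
```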
Iterated adjugates
Iteratively taking the adjugate of an invertible n \times n matrix \mathbf{A} a total of k times yields
:\overbrace{\operatorname{adj} \cdots \operatorname{adj}}^k(\mathbf{A}) = \det(\mathbf{A})^{\frac{(n-1)^k - (-1)^k}{n}}\, \mathbf{A}^{(-1)^k},
:\det\left(\overbrace{\operatorname{adj} \cdots \operatorname{adj}}^k(\mathbf{A})\right) = \det(\mathbf{A})^{(n-1)^k}.
For example,
:\operatorname{adj}(\operatorname{adj}(\mathbf{A})) = \det(\mathbf{A})^{n-2}\, \mathbf{A},
:\det(\operatorname{adj}(\operatorname{adj}(\mathbf{A}))) = \det(\mathbf{A})^{(n-1)^2}.
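A numerical check of the iterated-adjugate formulas for n = 3, using adj(A) = det(A) A^{−1} for invertible matrices:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3
A = rng.standard_normal((n, n))
detA = np.linalg.det(A)

adj = lambda M: np.linalg.det(M) * np.linalg.inv(M)

# k = 2: adj(adj(A)) = det(A)^(n-2) A
assert np.allclose(adj(adj(A)), detA ** (n - 2) * A)
# k = 3: the exponent ((n-1)^3 - (-1)^3) / n is an integer (here 3)
k_exp = ((n - 1) ** 3 - (-1) ** 3) // n
assert np.allclose(adj(adj(adj(A))), detA ** k_exp * np.linalg.inv(A))
```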
See also
* Cayley–Hamilton theorem
* Cramer's rule
* Trace diagram
* Jacobi's formula
* Faddeev–LeVerrier algorithm
References
Bibliography
* Roger A. Horn and Charles R. Johnson (2013), ''Matrix Analysis'', Second Edition. Cambridge University Press.
* Roger A. Horn and Charles R. Johnson (1991), ''Topics in Matrix Analysis''. Cambridge University Press.
External links
* Matrix Reference Manual
* Online matrix calculator (determinant, trace, inverse, adjoint, transpose)
* Compute Adjugate matrix up to order 8