In linear algebra, a ''minor'' of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (''first minors'') are required for calculating matrix ''cofactors'', which in turn are useful for computing both the determinant and the inverse of square matrices.


Definition and illustration


First minors

If A is a square matrix, then the ''minor'' of the entry in the ''i''th row and ''j''th column (also called the (''i'', ''j'') ''minor'', or a ''first minor'') is the determinant of the submatrix formed by deleting the ''i''th row and ''j''th column. This number is often denoted ''M''''i,j''. The (''i'', ''j'') ''cofactor'' is obtained by multiplying the minor by (-1)^{i+j}.

To illustrate these definitions, consider the following 3 by 3 matrix,

:\begin{pmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{pmatrix}

To compute the minor M_{2,3} and the cofactor C_{2,3}, we find the determinant of the above matrix with row 2 and column 3 removed.

: M_{2,3} = \det \begin{pmatrix} 1 & 4 & \Box \\ \Box & \Box & \Box \\ -1 & 9 & \Box \end{pmatrix} = \det \begin{pmatrix} 1 & 4 \\ -1 & 9 \end{pmatrix} = 9 - (-4) = 13

So the cofactor of the (2, 3) entry is

:\ C_{2,3} = (-1)^{2+3}(M_{2,3}) = -13.
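As a quick numerical check of the computation above, here is a minimal sketch, assuming NumPy is available; it uses 0-based indices, so the (2, 3) entry of the text corresponds to indices (1, 2) in the code, and the helper names are purely illustrative.

```python
import numpy as np

def first_minor(A, i, j):
    """Determinant of A with row i and column j removed (0-based indices)."""
    return np.linalg.det(np.delete(np.delete(A, i, axis=0), j, axis=1))

def cofactor(A, i, j):
    """Cofactor C_ij = (-1)**(i + j) * M_ij (0-based indices)."""
    return (-1) ** (i + j) * first_minor(A, i, j)

A = np.array([[ 1, 4,  7],
              [ 3, 0,  5],
              [-1, 9, 11]], dtype=float)

print(first_minor(A, 1, 2))  # 13.0  (the minor M_{2,3} of the text)
print(cofactor(A, 1, 2))     # -13.0 (the cofactor C_{2,3} of the text)
```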


General definition

Let A be an ''m'' × ''n'' matrix and ''k'' an integer with 0 < ''k'' ≤ ''m'', and ''k'' ≤ ''n''. A ''k'' × ''k'' ''minor'' of A, also called ''minor determinant of order k'' of A or, if ''m'' = ''n'', the (''n''−''k'')th ''minor determinant'' of A (the word "determinant" is often omitted, and the word "degree" is sometimes used instead of "order"), is the determinant of a ''k'' × ''k'' matrix obtained from A by deleting ''m''−''k'' rows and ''n''−''k'' columns. Sometimes the term is used to refer to the ''k'' × ''k'' matrix obtained from A as above (by deleting ''m''−''k'' rows and ''n''−''k'' columns), but this matrix should be referred to as a ''(square) submatrix'' of A, leaving the term "minor" to refer to the determinant of this matrix. For a matrix A as above, there are a total of \binom{m}{k} \cdot \binom{n}{k} minors of size ''k'' × ''k''. The ''minor of order zero'' is often defined to be 1. For a square matrix, the ''zeroth minor'' is just the determinant of the matrix. (Elementary Matrix Algebra (Third edition), Franz E. Hohn, The Macmillan Company, 1973.)

Let 1 \le i_1 < i_2 < \cdots < i_k \le m and 1 \le j_1 < j_2 < \cdots < j_k \le n be ordered sequences (in natural order, as is always assumed when talking about minors unless otherwise stated) of indexes, call them ''I'' and ''J'', respectively. The minor \det \left( (A_{i_p, j_q})_{p,q=1,\ldots,k} \right) corresponding to these choices of indexes is denoted \det_{I,J} A or \det A_{I,J} or [\mathbf A]_{I,J} or M_{I,J} or M_{i_1, i_2, \ldots, i_k, j_1, j_2, \ldots, j_k} or M_{(i),(j)} (where the (i) denotes the sequence of indexes ''I'', etc.), depending on the source.

Also, there are two types of denotations in use in the literature: by the minor associated to ordered sequences of indexes ''I'' and ''J'', some authors mean the determinant of the matrix that is formed as above, by taking the elements of the original matrix from the rows whose indexes are in ''I'' and columns whose indexes are in ''J'', whereas some other authors mean by a minor associated to ''I'' and ''J'' the determinant of the matrix formed from the original matrix by deleting the rows in ''I'' and columns in ''J''. Which notation is used should always be checked from the source in question. In this article, we use the inclusive definition of choosing the elements from rows of ''I'' and columns of ''J''. The exceptional case is the case of the first minor or the (''i'', ''j'')-minor described above; in that case, the exclusive meaning M_{ij} = \det \left( \left( A_{p,q} \right)_{p \ne i, q \ne j} \right) is standard everywhere in the literature and is used in this article also.
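To make the inclusive convention concrete, the following sketch (assuming NumPy; index sets are 0-based and the function name is illustrative) keeps the rows in ''I'' and the columns in ''J'' and takes the determinant of the resulting ''k'' × ''k'' submatrix.

```python
import numpy as np

def minor(A, I, J):
    """Minor of A taken from the rows in I and the columns in J
    (the inclusive convention; 0-based index sequences of equal length)."""
    if len(I) != len(J):
        raise ValueError("I and J must select a square submatrix")
    return np.linalg.det(A[np.ix_(I, J)])

A = np.arange(1.0, 13.0).reshape(3, 4)  # a 3 x 4 example matrix
# One of the C(3,2) * C(4,2) = 18 minors of order 2:
print(minor(A, [0, 2], [1, 3]))         # rows 1 and 3, columns 2 and 4 (1-based): -16.0
```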


Complement

The complement, B_{ijk\ldots,pqr\ldots}, of a minor, M_{ijk\ldots,pqr\ldots}, of a square matrix, A, is the determinant of the matrix A from which all the rows (''ijk''...) and columns (''pqr''...) associated with M_{ijk\ldots,pqr\ldots} have been removed. The complement of the first minor of an element ''a''''ij'' is merely that element.
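A small sketch of the complement (assuming NumPy; the helper name is illustrative): deleting the rows and columns that the minor uses leaves, for a first minor, only the single entry ''a''''ij''.

```python
import numpy as np

def complement(A, rows, cols):
    """Determinant of A with the listed rows and columns deleted (0-based)."""
    return np.linalg.det(np.delete(np.delete(A, rows, axis=0), cols, axis=1))

A = np.array([[ 1, 4,  7],
              [ 3, 0,  5],
              [-1, 9, 11]], dtype=float)

# The first minor M_{2,3} uses rows {1, 3} and columns {1, 2} (1-based),
# so its complement deletes those, leaving only the entry a_{2,3} = 5.
print(complement(A, [0, 2], [0, 1]))   # 5.0
```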


Applications of minors and cofactors


Cofactor expansion of the determinant

The cofactors feature prominently in Laplace's formula for the expansion of determinants, which is a method of computing larger determinants in terms of smaller ones. Given an ''n'' × ''n'' matrix A = (a_{ij}), the determinant of A, denoted det(A), can be written as the sum of the cofactors of any row or column of the matrix multiplied by the entries that generated them. In other words, defining C_{ij} = (-1)^{i+j} M_{ij}, the cofactor expansion along the ''j''th column gives:

:\ \det(\mathbf A) = a_{1j}C_{1j} + a_{2j}C_{2j} + a_{3j}C_{3j} + \cdots + a_{nj}C_{nj} = \sum_{i=1}^{n} a_{ij} C_{ij} = \sum_{i=1}^{n} a_{ij}(-1)^{i+j} M_{ij}

The cofactor expansion along the ''i''th row gives:

:\ \det(\mathbf A) = a_{i1}C_{i1} + a_{i2}C_{i2} + a_{i3}C_{i3} + \cdots + a_{in}C_{in} = \sum_{j=1}^{n} a_{ij} C_{ij} = \sum_{j=1}^{n} a_{ij}(-1)^{i+j} M_{ij}
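The expansion translates directly into a (deliberately naive, O(''n''!)) recursive procedure; the sketch below assumes NumPy and expands along the first row.

```python
import numpy as np

def det_by_cofactors(A):
    """Laplace expansion along the first row; illustrative only, O(n!) time."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor_matrix = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += A[0, j] * (-1) ** j * det_by_cofactors(minor_matrix)
    return total

A = np.array([[ 1, 4,  7],
              [ 3, 0,  5],
              [-1, 9, 11]], dtype=float)

print(det_by_cofactors(A))   # -8.0
print(np.linalg.det(A))      # -8.0 (up to floating-point rounding)
```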


Inverse of a matrix

One can write down the inverse of an invertible matrix by computing its cofactors by using Cramer's rule, as follows. The matrix formed by all of the cofactors of a square matrix A is called the cofactor matrix (also called the matrix of cofactors or, sometimes, ''comatrix''):

:\mathbf C=\begin{pmatrix} C_{11} & C_{12} & \cdots & C_{1n} \\ C_{21} & C_{22} & \cdots & C_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ C_{n1} & C_{n2} & \cdots & C_{nn} \end{pmatrix}

Then the inverse of A is the transpose of the cofactor matrix times the reciprocal of the determinant of A:

:\mathbf A^{-1} = \frac{1}{\det(\mathbf A)} \mathbf C^\mathsf{T}.

The transpose of the cofactor matrix is called the adjugate matrix (also called the ''classical adjoint'') of A.

The above formula can be generalized as follows: Let 1 \le i_1 < i_2 < \ldots < i_k \le n and 1 \le j_1 < j_2 < \ldots < j_k \le n be ordered sequences (in natural order) of indexes (here A is an ''n'' × ''n'' matrix). Then

:[\mathbf A^{-1}]_{I,J} = \pm\frac{[\mathbf A]_{J',I'}}{\det \mathbf A},

where ''I′'', ''J′'' denote the ordered sequences of indices (the indices are in natural order of magnitude, as above) complementary to ''I'', ''J'', so that every index 1, ..., ''n'' appears exactly once in either ''I'' or ''I′'', but not in both (similarly for the ''J'' and ''J′''), and [\mathbf A]_{I,J} denotes the determinant of the submatrix of A formed by choosing the rows of the index set ''I'' and columns of index set ''J''. Also, [\mathbf A]_{I,J} = \det \left( (A_{i_p, j_q})_{p,q=1,\ldots,k} \right). A simple proof can be given using the wedge product. Indeed,

:[\mathbf A^{-1}]_{I,J}(e_1\wedge\ldots \wedge e_n) = \pm(\mathbf A^{-1}e_{j_1})\wedge \ldots \wedge(\mathbf A^{-1}e_{j_k})\wedge e_{i'_1}\wedge\ldots \wedge e_{i'_{n-k}},

where e_1, \ldots, e_n are the basis vectors. Acting by A on both sides, one gets

:[\mathbf A^{-1}]_{I,J}\det \mathbf A (e_1\wedge\ldots \wedge e_n) = \pm (e_{j_1})\wedge \ldots \wedge(e_{j_k})\wedge (\mathbf A e_{i'_1})\wedge\ldots \wedge (\mathbf A e_{i'_{n-k}}) = \pm[\mathbf A]_{J',I'}(e_1\wedge\ldots \wedge e_n).

The sign can be worked out to be (-1)^{\sum_{s=1}^{k} i_s + \sum_{s=1}^{k} j_s}, so the sign is determined by the sums of elements in ''I'' and ''J''.
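As a sketch of the cofactor-matrix route to the inverse (assuming NumPy; the function name is illustrative), one can build C entry by entry and compare the result with a library inverse:

```python
import numpy as np

def inverse_via_adjugate(A):
    """Inverse as the transpose of the cofactor matrix divided by det(A)."""
    n = A.shape[0]
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            sub = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C.T / np.linalg.det(A)

A = np.array([[ 1, 4,  7],
              [ 3, 0,  5],
              [-1, 9, 11]], dtype=float)

print(np.allclose(inverse_via_adjugate(A), np.linalg.inv(A)))  # True
```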


Other applications

Given an ''m'' × ''n'' matrix with real entries (or entries from any other field) and rank ''r'', then there exists at least one non-zero ''r'' × ''r'' minor, while all larger minors are zero.

We will use the following notation for minors: if A is an ''m'' × ''n'' matrix, ''I'' is a subset of {1, ..., ''m''} with ''k'' elements, and ''J'' is a subset of {1, ..., ''n''} with ''k'' elements, then we write [\mathbf A]_{I,J} for the ''k'' × ''k'' minor of A that corresponds to the rows with index in ''I'' and the columns with index in ''J''.

* If ''I'' = ''J'', then [\mathbf A]_{I,J} is called a ''principal minor''.
* If the matrix that corresponds to a principal minor is a square upper-left submatrix of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to ''k'', also known as a leading principal submatrix), then the principal minor is called a ''leading principal minor (of order k)'' or ''corner (principal) minor (of order k)''. For an ''n'' × ''n'' square matrix, there are ''n'' leading principal minors.
* A ''basic minor'' of a matrix is the determinant of a square submatrix that is of maximal size with nonzero determinant.
* For Hermitian matrices, the leading principal minors can be used to test for positive definiteness and the principal minors can be used to test for positive semidefiniteness. See Sylvester's criterion for more details.

Both the formula for ordinary matrix multiplication and the Cauchy–Binet formula for the determinant of the product of two matrices are special cases of the following general statement about the minors of a product of two matrices. Suppose that A is an ''m'' × ''n'' matrix, B is an ''n'' × ''p'' matrix, ''I'' is a subset of {1, ..., ''m''} with ''k'' elements and ''J'' is a subset of {1, ..., ''p''} with ''k'' elements. Then

:[\mathbf{AB}]_{I,J} = \sum_{K} [\mathbf{A}]_{I,K} [\mathbf{B}]_{K,J}\,

where the sum extends over all subsets ''K'' of {1, ..., ''n''} with ''k'' elements. This formula is a straightforward extension of the Cauchy–Binet formula.
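The identity for the minors of a product is easy to spot-check numerically; the following sketch (assuming NumPy, with 0-based index sets and an illustrative `minor` helper) compares the two sides for one choice of ''I'' and ''J''.

```python
import numpy as np
from itertools import combinations

def minor(M, I, J):
    """Minor [M]_{I,J}: determinant of the rows in I and columns in J (0-based)."""
    return np.linalg.det(M[np.ix_(I, J)])

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))          # an m x n matrix
B = rng.standard_normal((4, 2))          # an n x p matrix
I, J, k = [0, 2], [0, 1], 2

lhs = minor(A @ B, I, J)
rhs = sum(minor(A, I, list(K)) * minor(B, list(K), J)
          for K in combinations(range(4), k))   # all k-subsets K of {1, ..., n}
print(np.isclose(lhs, rhs))              # True
```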


Multilinear algebra approach

A more systematic, algebraic treatment of minors is given in multilinear algebra, using the wedge product: the ''k''-minors of a matrix are the entries in the ''k''th exterior power map. If the columns of a matrix are wedged together ''k'' at a time, the ''k'' × ''k'' minors appear as the components of the resulting ''k''-vectors. For example, the 2 × 2 minors of the matrix

:\begin{pmatrix} 1 & 4 \\ 3 & \!\!-1 \\ 2 & 1 \end{pmatrix}

are −13 (from the first two rows), −7 (from the first and last row), and 5 (from the last two rows). Now consider the wedge product

:(\mathbf e_1 + 3\mathbf e_2 + 2\mathbf e_3)\wedge(4\mathbf e_1 - \mathbf e_2 + \mathbf e_3)

where the two expressions correspond to the two columns of our matrix. Using the properties of the wedge product, namely that it is bilinear and alternating,

:\mathbf e_i\wedge \mathbf e_i = 0,

and antisymmetric,

:\mathbf e_i\wedge \mathbf e_j = - \mathbf e_j\wedge \mathbf e_i,

we can simplify this expression to

: -13\, \mathbf e_1\wedge \mathbf e_2 - 7\, \mathbf e_1\wedge \mathbf e_3 + 5\, \mathbf e_2\wedge \mathbf e_3

where the coefficients agree with the minors computed earlier.
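The agreement between wedge-product coefficients and 2 × 2 minors can be verified directly; the sketch below (assuming NumPy) computes each coefficient u_i v_j − u_j v_i from the two columns and compares it with the corresponding row-pair minor.

```python
import numpy as np
from itertools import combinations

A = np.array([[1,  4],
              [3, -1],
              [2,  1]], dtype=float)

u, v = A[:, 0], A[:, 1]                    # the two columns being wedged
for i, j in combinations(range(3), 2):     # row pairs, 0-based
    wedge_coeff = u[i] * v[j] - u[j] * v[i]          # coefficient of e_{i+1} ^ e_{j+1}
    row_pair_minor = np.linalg.det(A[[i, j], :])     # 2 x 2 minor from rows i, j
    print((i + 1, j + 1), wedge_coeff, row_pair_minor)   # -13, -7, 5
```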


A remark about different notation

In some books, instead of ''cofactor'' the term ''adjunct'' is used (Felix Gantmacher, ''Theory of Matrices'' (1st ed., original language is Russian), Moscow: State Publishing House of technical and theoretical literature, 1953, p. 491). Moreover, it is denoted as A_{ij} and defined in the same way as the cofactor:

::\mathbf{A}_{ij} = (-1)^{i+j} \mathbf{M}_{ij}

Using this notation the inverse matrix is written this way:

:\mathbf{A}^{-1} = \frac{1}{\det(\mathbf A)}\begin{pmatrix} A_{11} & A_{21} & \cdots & A_{n1} \\ A_{12} & A_{22} & \cdots & A_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ A_{1n} & A_{2n} & \cdots & A_{nn} \end{pmatrix}

Keep in mind that ''adjunct'' is not adjugate or adjoint. In modern terminology, the "adjoint" of a matrix most often refers to the corresponding adjoint operator.


See also

* Submatrix


References


External links


* MIT Linear Algebra Lecture on Cofactors, at Google Video, from MIT OpenCourseWare


* Springer Encyclopedia of Mathematics entry for ''Minor''