Leibniz formula for determinants
In algebra, the Leibniz formula, named in honor of Gottfried Leibniz, expresses the determinant of a square matrix in terms of permutations of the matrix elements. If A is an n \times n matrix, where a_{i,j} is the entry in the i-th row and j-th column of A, the formula is

:\det(A) = \sum_{\tau \in S_n} \sgn(\tau) \prod_{i=1}^n a_{\tau(i), i} = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n a_{i, \sigma(i)},

where \sgn is the sign function of permutations in the permutation group S_n, which returns +1 and -1 for even and odd permutations, respectively. For example, for a 2 \times 2 matrix the formula yields the familiar expression \det(A) = a_{1,1} a_{2,2} - a_{1,2} a_{2,1}.

Another common notation for the formula uses the Levi-Civita symbol together with the Einstein summation convention, where it becomes

:\det(A) = \epsilon_{i_1 \cdots i_n} a_{1, i_1} \cdots a_{n, i_n},

which may be more familiar to physicists.

Directly evaluating the Leibniz formula from the definition requires \Omega(n! \cdot n) operations in general, that is, a number of operations asymptotically proportional to n factorial, because n! is the number of order-n permutations. This is impractically difficult for even relatively small n. Instead, the determinant can be evaluated in O(n^3) operations by forming the LU decomposition A = LU (typically via Gaussian elimination or similar methods), in which case \det A = \det L \cdot \det U and the determinants of the triangular matrices L and U are simply the products of their diagonal entries. (In practical applications of numerical linear algebra, however, explicit computation of the determinant is rarely required. See, for example, Trefethen and Bau 1997.) The determinant can also be evaluated in fewer than O(n^3) operations by reducing the problem to matrix multiplication, but most such algorithms are not practical.
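The cost comparison above can be made concrete with a short sketch. The following Python fragment (function names are illustrative, not from any standard library) computes a determinant both ways: directly from the Leibniz formula, summing over all n! permutations, and by Gaussian elimination in O(n^3) operations, taking the product of the diagonal after reduction to upper triangular form. Exact rational arithmetic is used so the two results can be compared without rounding error.

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def sgn(p):
    """Sign of a permutation given as a tuple of 0-based indices:
    (-1) raised to the number of inversions."""
    inversions = sum(p[i] > p[j]
                     for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inversions % 2 else 1

def det_leibniz(a):
    """Determinant via the Leibniz formula: Omega(n! * n) operations."""
    n = len(a)
    return sum(sgn(p) * prod(a[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def det_gauss(a):
    """Determinant via Gaussian elimination: O(n^3) operations."""
    m = [[Fraction(x) for x in row] for row in a]
    n = len(m)
    sign = 1
    for k in range(n):
        # Find a nonzero pivot in column k; if none exists, det = 0.
        pivot = next((r for r in range(k, n) if m[r][k] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != k:
            m[k], m[pivot] = m[pivot], m[k]  # a row swap flips the sign
            sign = -sign
        for r in range(k + 1, n):
            f = m[r][k] / m[k][k]
            for c in range(k, n):
                m[r][c] -= f * m[k][c]
    # m is now upper triangular: det = sign * product of the diagonal.
    return sign * prod(m[k][k] for k in range(n))

A = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 0]]
print(det_leibniz(A), det_gauss(A))  # both give -6
```

For n = 3 the Leibniz sum has only 3! = 6 terms, but at n = 20 it already has roughly 2.4 \times 10^{18}, while elimination needs only a few thousand arithmetic operations.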


Formal statement and proof

Theorem. There exists exactly one function F : M_n (\mathbb K) \rightarrow \mathbb K which is alternating multilinear with respect to the columns and such that F(I) = 1.

Proof.

Uniqueness: Let F be such a function, and let A = (a_i^j)_{i,j=1}^{n} be an n \times n matrix, where the upper index j labels the column. Call A^j the j-th column of A, i.e. A^j = (a_i^j)_{i=1,\dots,n}, so that A = \left(A^1, \dots, A^n\right). Also, let E^k denote the k-th column vector of the identity matrix. Now one writes each of the A^j's in terms of the E^k, i.e.

:A^j = \sum_{k=1}^n a_k^j E^k.

As F is multilinear, one has

:F(A) = F\left(\sum_{k_1=1}^n a_{k_1}^1 E^{k_1}, \dots, \sum_{k_n=1}^n a_{k_n}^n E^{k_n}\right) = \sum_{k_1, \dots, k_n = 1}^n \left(\prod_{i=1}^n a_{k_i}^i\right) F\left(E^{k_1}, \dots, E^{k_n}\right).

From alternation it follows that any term with repeated indices is zero. The sum can therefore be restricted to tuples with non-repeating indices, i.e. permutations:

:F(A) = \sum_{\sigma \in S_n} \left(\prod_{i=1}^n a_{\sigma(i)}^i\right) F\left(E^{\sigma(1)}, \dots, E^{\sigma(n)}\right).

Because F is alternating, the columns E^{\sigma(1)}, \dots, E^{\sigma(n)} can be swapped pairwise until they form the identity matrix; the sign function \sgn(\sigma) records the parity of the number of swaps needed and accounts for the resulting sign changes. One finally gets

:F(A) = \sum_{\sigma \in S_n} \sgn(\sigma) \left(\prod_{i=1}^n a_{\sigma(i)}^i\right) F(I) = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n a_{\sigma(i)}^i,

as F(I) is required to equal 1. Therefore no function besides the function defined by the Leibniz formula can be a multilinear alternating function with F\left(I\right) = 1.

Existence: We now show that F, where F is the function defined by the Leibniz formula, has these three properties.

Multilinear:

:F(A^1, \dots, cA^j, \dots) = \sum_{\sigma \in S_n} \sgn(\sigma) \, c a_{\sigma(j)}^j \prod_{i=1, i \neq j}^n a_{\sigma(i)}^i = c \sum_{\sigma \in S_n} \sgn(\sigma) \, a_{\sigma(j)}^j \prod_{i=1, i \neq j}^n a_{\sigma(i)}^i = c \, F(A^1, \dots, A^j, \dots),

:F(A^1, \dots, b + A^j, \dots) = \sum_{\sigma \in S_n} \sgn(\sigma) \left(b_{\sigma(j)} + a_{\sigma(j)}^j\right) \prod_{i=1, i \neq j}^n a_{\sigma(i)}^i = \sum_{\sigma \in S_n} \sgn(\sigma) \, b_{\sigma(j)} \prod_{i=1, i \neq j}^n a_{\sigma(i)}^i + \sum_{\sigma \in S_n} \sgn(\sigma) \, a_{\sigma(j)}^j \prod_{i=1, i \neq j}^n a_{\sigma(i)}^i = F(A^1, \dots, b, \dots) + F(A^1, \dots, A^j, \dots).

Alternating: Consider two column positions j_1 < j_2 and write

:F(\dots, A^{j_1}, \dots, A^{j_2}, \dots) = \sum_{\sigma \in S_n} \sgn(\sigma) \left(\prod_{i=1, i \neq j_1, j_2}^n a_{\sigma(i)}^i\right) a_{\sigma(j_1)}^{j_1} a_{\sigma(j_2)}^{j_2}.

For any \sigma \in S_n let \sigma' be the permutation equal to \sigma with the values at j_1 and j_2 switched, so that \sgn(\sigma') = -\sgn(\sigma). Pairing each \sigma with \sigma' (and summing over the permutations with \sigma(j_1) < \sigma(j_2)) gives

:F(A) = \sum_{\sigma(j_1) < \sigma(j_2)} \sgn(\sigma) \left(\prod_{i=1, i \neq j_1, j_2}^n a_{\sigma(i)}^i\right) \left(a_{\sigma(j_1)}^{j_1} a_{\sigma(j_2)}^{j_2} - a_{\sigma(j_2)}^{j_1} a_{\sigma(j_1)}^{j_2}\right).

If A^{j_1} = A^{j_2}, the last factor vanishes term by term, so F(\dots, A^{j_1}, \dots, A^{j_2}, \dots) = 0.

Finally, F(I) = 1:

:F(I) = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n I_{\sigma(i)}^i = \sum_{\sigma \in S_n} \sgn(\sigma) \prod_{i=1}^n \delta_{i, \sigma(i)} = \sgn(\operatorname{id}) = 1,

since the product is nonzero only for the identity permutation.

Thus the only alternating multilinear function with F(I) = 1 is the function defined by the Leibniz formula, which does in fact have these three properties.

Hence the determinant can be defined as the only function \det : M_n (\mathbb K) \rightarrow \mathbb K with these three properties.


See also

* Matrix (mathematics)
* Laplace expansion
* Cramer's rule


References

* Trefethen, Lloyd N.; Bau, David (1997). ''Numerical Linear Algebra''. SIAM. ISBN 978-0898713619.