Gerschgorin Circle Theorem




Gerschgorin Circle Theorem
In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn.

Statement and proof

Let A be a complex n\times n matrix, with entries a_{ij}. For i \in \{1,\dots,n\} let R_i be the sum of the absolute values of the non-diagonal entries in the i-th row:

: R_i = \sum_{j\neq i} \left|a_{ij}\right|.

Let D(a_{ii}, R_i) \subseteq \Complex be a closed disc centered at a_{ii} with radius R_i. Such a disc is called a Gershgorin disc.

:Theorem. Every eigenvalue of A lies within at least one of the Gershgorin discs D(a_{ii}, R_i).

''Proof.'' Let \lambda be an eigenvalue of A with corresponding eigenvector x = (x_j). Find ''i'' such that the element of ''x'' with the largest absolute ...
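The theorem is easy to check numerically. The sketch below (Python with NumPy; the 3×3 matrix is a hypothetical example, not from the text) builds each disc's center and radius and confirms that every eigenvalue falls inside at least one disc.

```python
import numpy as np

# Hypothetical matrix chosen only for illustration.
A = np.array([[10.0, 1.0, 0.5],
              [0.2, 8.0, 1.0],
              [1.0, 0.5, 2.0]])

# Disc i is centered at a_ii with radius R_i = sum of |a_ij| over j != i.
centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

# The theorem: every eigenvalue lies in at least one Gershgorin disc.
in_some_disc = [
    any(abs(lam - c) <= r + 1e-12 for c, r in zip(centers, radii))
    for lam in np.linalg.eigvals(A)
]
```

Because the discs of this matrix are small relative to the gaps between its diagonal entries, the bound here also separates the eigenvalues from one another.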



Mathematics
Mathematics is a field of study that discovers and organizes methods, theories, and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics). Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a ''proof'' consisting of a succession of applications of in ...



Cambridge University Press
Cambridge University Press was the university press of the University of Cambridge. Granted letters patent by King Henry VIII in 1534, it was the oldest university press in the world. Cambridge University Press merged with Cambridge Assessment to form Cambridge University Press and Assessment, under Queen Elizabeth II's approval, in August 2021. With a global sales presence, publishing hubs, and offices in more than 40 countries, it published over 50,000 titles by authors from over 100 countries. Its publications included more than 420 academic journals, monographs, reference works, school and university textbooks, and English language teaching and learning publications. It also published Bibles, ran a bookshop in Cambridge, sold through Amazon, and had a conference venues business in Cambridge at the Pitt Building and the Sir Geoffrey Cass Sports and Social Centre. It also served as the King's Printer. Cambridge University Press, as part of the University of Cambridge, was a ...




Muirhead's Inequality
In mathematics, Muirhead's inequality, named after Robert Franklin Muirhead and also known as the "bunching" method, generalizes the inequality of arithmetic and geometric means.

Preliminary definitions

''a''-mean

For any real vector

:a = (a_1, \dots, a_n)

define the "''a''-mean" [a] of positive real numbers x_1, \dots, x_n by

:[a] = \frac{1}{n!} \sum_\sigma x_{\sigma_1}^{a_1} \cdots x_{\sigma_n}^{a_n},

where the sum extends over all permutations σ of \{1, \dots, n\}. When the elements of ''a'' are nonnegative integers, the ''a''-mean can be equivalently defined via the monomial symmetric polynomial m_a(x_1,\dots,x_n) as

:[a] = \frac{k_1! \cdots k_\ell!}{n!} m_a(x_1,\dots,x_n),

where ℓ is the number of distinct elements in ''a'', and k_1, \dots, k_\ell are their multiplicities. Notice that the ''a''-mean as defined above only has the usual properties of a mean (e.g., the mean of equal numbers is equal to them) if a_1+\cdots+a_n=1. In the general case, one can consider instead [a]^{1/(a_1+\cdots+a_n)}, which is called a Muirhead mean. (Bullen, P. S. Handbook of means an ...)
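The ''a''-mean can be computed directly from its definition by summing over permutations. The sketch below (Python; the sample values are hypothetical) shows that a = (1, 0, 0) recovers the arithmetic mean and a = (1/3, 1/3, 1/3) recovers the geometric mean, the two classical means related by AM–GM.

```python
import itertools
import math

def a_mean(a, xs):
    """[a]-mean: (1/n!) * sum over permutations sigma of prod_i xs[sigma_i]^a_i."""
    n = len(xs)
    total = sum(
        math.prod(xs[perm[i]] ** a[i] for i in range(n))
        for perm in itertools.permutations(range(n))
    )
    return total / math.factorial(n)

xs = [1.0, 2.0, 4.0]
arith = a_mean([1, 0, 0], xs)                  # arithmetic mean: (1 + 2 + 4) / 3
geom = a_mean([1 / 3, 1 / 3, 1 / 3], xs)       # geometric mean: (1 * 2 * 4) ** (1/3)
```

Since (1, 0, 0) majorizes (1/3, 1/3, 1/3), Muirhead's inequality gives `arith >= geom`, which is exactly the AM–GM inequality for three variables.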


Metzler Matrix
In mathematics, a Metzler matrix is a matrix in which all the off-diagonal components are nonnegative (equal to or greater than zero):

:\forall i \neq j,\quad x_{ij} \geq 0.

It is named after the American economist Lloyd Metzler. Metzler matrices appear in stability analysis of time delayed differential equations and positive linear dynamical systems. Their properties can be derived by applying the properties of nonnegative matrices to matrices of the form ''M'' + ''aI'', where ''M'' is a Metzler matrix.

Definition and terminology

In mathematics, especially linear algebra, a matrix is called Metzler, quasipositive (or quasi-positive) or essentially nonnegative if all of its elements are non-negative except for those on the main diagonal, which are unconstrained. That is, a Metzler matrix is any matrix ''A'' which satisfies

:A = (a_{ij}); \quad a_{ij} \geq 0, \quad i \neq j.

Metzler matrices are also sometimes referred to as Z^{(-)}-matrices, as a ''Z''-matrix is equivalent to a negated quasip ...
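Checking the Metzler property is a direct scan of the off-diagonal entries. A minimal sketch in Python with NumPy (the test matrix is hypothetical):

```python
import numpy as np

def is_metzler(A):
    """True if every off-diagonal entry of A is nonnegative; the diagonal is unconstrained."""
    off_diagonal = A - np.diag(np.diag(A))
    return bool(np.all(off_diagonal >= 0))

# Hypothetical example: negative diagonal entries are allowed,
# only the off-diagonal entries must be >= 0.
M = np.array([[-3.0, 1.0],
              [2.0, -5.0]])
```

Matrices like `M`, with nonnegative off-diagonal and negative diagonal, are typical of the positive linear systems mentioned above.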


Joel Lee Brenner
Joel Lee Brenner ( – ) was an American mathematician who specialized in matrix theory, linear algebra, and group theory. He is known as the translator of several popular Russian texts. He was a teaching professor at some dozen colleges and universities and was a Senior Mathematician at Stanford Research Institute from 1956 to 1968. He published over one hundred scholarly papers, 35 with coauthors, and wrote book reviews. (LeRoy B. Beasley (1987) "The Mathematical Work of Joel Lee Brenner", Linear Algebra and its Applications 90:1–13)

Academic career

In 1930 Brenner earned a B.A. degree with a major in chemistry from Harvard University. In graduate study there he was influenced by Hans Brinkmann, Garrett Birkhoff, and Marshall Stone. He was granted the Ph.D. in February 1936. Brenner later described some of his reminiscences of his student days at Harvard and of the state of American mathematics in the 1930s in an article for the American Mathematical Monthly. In 1951 Brenner publishe ...


Hurwitz-stable Matrix
In mathematics, a Hurwitz-stable matrix, or more commonly simply Hurwitz matrix, is a square matrix whose eigenvalues all have strictly negative real part. Some authors also use the term stability matrix. Such matrices play an important role in control theory.

Definition

A square matrix A is called a Hurwitz matrix if every eigenvalue of A has strictly negative real part, that is,

:\operatorname{Re}(\lambda_i) < 0

for each eigenvalue \lambda_i. A is also called a stable matrix, because then the differential equation

:\dot x = A x

is asymptotically stable, that is, x(t)\to 0 as t\to\infty. If G(s) is a (matrix-valued) ...
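The definition translates directly into a numerical test: compute the eigenvalues and check their real parts. A sketch in Python with NumPy (the example matrix is hypothetical, chosen as the companion matrix of s^2 + 3s + 2, whose roots are -1 and -2):

```python
import numpy as np

def is_hurwitz(A):
    """True if every eigenvalue of A has strictly negative real part."""
    return bool(np.all(np.real(np.linalg.eigvals(A)) < 0))

# Companion matrix of s^2 + 3s + 2 = (s + 1)(s + 2): eigenvalues -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
```

For a Hurwitz A, solutions of \dot x = Ax decay to zero; negating A flips the eigenvalue signs and destroys stability, which the second assertion below checks.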



Doubly Stochastic Matrix
In mathematics, especially in probability and combinatorics, a doubly stochastic matrix (also called a bistochastic matrix) is a square matrix X = (x_{ij}) of nonnegative real numbers, each of whose rows and columns sums to 1, i.e.,

:\sum_i x_{ij} = \sum_j x_{ij} = 1.

Thus, a doubly stochastic matrix is both left stochastic and right stochastic. Indeed, any matrix that is both left and right stochastic must be square: if every row sums to 1 then the sum of all entries in the matrix must be equal to the number of rows, and since the same holds for columns, the number of rows and columns must be equal.

Birkhoff polytope

The class of n\times n doubly stochastic matrices is a convex polytope known as the Birkhoff polytope B_n. Using the matrix entries as Cartesian coordinates, it lies in an (n-1)^2-dimensional affine subspace of n^2-dimensional Euclidean space defined by 2n-1 independent linear constraints specifying that the row and column sums all equal 1. (There are 2n-1 constraints rather than 2 ...
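A convex combination of permutation matrices is always doubly stochastic (the extreme points of the Birkhoff polytope are exactly the permutation matrices). The sketch below (Python with NumPy; the weights 0.4 and 0.6 are arbitrary) verifies the row and column sums directly.

```python
import numpy as np

def is_doubly_stochastic(X, tol=1e-12):
    """Nonnegative entries, and every row and every column sums to 1."""
    return bool(
        np.all(X >= -tol)
        and np.allclose(X.sum(axis=0), 1.0)
        and np.allclose(X.sum(axis=1), 1.0)
    )

I3 = np.eye(3)                     # identity permutation matrix
P = np.roll(np.eye(3), 1, axis=1)  # cyclic permutation matrix
X = 0.4 * I3 + 0.6 * P             # convex combination: doubly stochastic
```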


Perron–Frobenius Theorem
In matrix theory, the Perron–Frobenius theorem, proved by Oskar Perron (1907) and Georg Frobenius (1912), asserts that a real square matrix with positive entries has a unique eigenvalue of largest magnitude, and that this eigenvalue is real. The corresponding eigenvector can be chosen to have strictly positive components. The theorem also asserts a similar statement for certain classes of nonnegative matrices. It has important applications to probability theory (ergodicity of Markov chains); to the theory of dynamical systems (subshifts of finite type); to economics (Okishio's theorem, Hawkins–Simon condition); to demography (Leslie population age distribution model); to social networks (DeGroot learning process); to Internet search engines (PageRank); and even to the ranking of American football teams. The first to discuss the ordering of players within tournaments using Perron–Frobenius eigenvectors was Edmund Landau.

Statement

Let positive and non-negative respectively describe matrices with exclusively positi ...
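For a positive matrix, the Perron eigenvalue and its strictly positive eigenvector can be approximated by power iteration, which is also the idea behind PageRank. A sketch in Python with NumPy on a hypothetical 2×2 positive matrix whose largest eigenvalue is (5 + √5)/2:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # all entries strictly positive

# Power iteration: repeated multiplication converges to the Perron eigenvector.
v = np.ones(2)
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)

perron_value = v @ A @ v       # Rayleigh quotient of the unit vector v
```

As the theorem predicts, the limiting eigenvector has strictly positive components, and the Rayleigh quotient converges to the eigenvalue of largest magnitude.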


Diagonally Dominant Matrix
In mathematics, a square matrix is said to be diagonally dominant if, for every row of the matrix, the magnitude of the diagonal entry in that row is greater than or equal to the sum of the magnitudes of all the other (off-diagonal) entries in that row. More precisely, the matrix A is diagonally dominant if

:|a_{ii}| \geq \sum_{j \neq i} |a_{ij}| \quad \text{for all } i,

where a_{ij} denotes the entry in the ith row and jth column. This definition uses a weak inequality, and is therefore sometimes called ''weak diagonal dominance''. If a strict inequality (>) is used, this is called ''strict diagonal dominance''. The unqualified term ''diagonal dominance'' can mean both strict and weak diagonal dominance, depending on the context.

Variations

The definition in the first paragraph sums entries across each row. It is therefore sometimes called ''row diagonal dominance''. If one changes the definition to sum down each column, this is called ''column diagonal dominance''. Any strictly diagonally dominant matr ...
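Row diagonal dominance, weak or strict, is a one-line check per row. A sketch in Python with NumPy (the example matrix is hypothetical and strictly dominant in every row):

```python
import numpy as np

def is_diagonally_dominant(A, strict=False):
    """Row diagonal dominance: |a_ii| >= (or >) sum of |a_ij| over j != i, in every row."""
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    return bool(np.all(diag > off)) if strict else bool(np.all(diag >= off))

A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [0.0, 1.0, 3.0]])
```

Strict dominance connects back to the Gershgorin circle theorem: every Gershgorin disc of a strictly dominant matrix excludes zero, so such a matrix is invertible (the Levy–Desplanques theorem).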


Gershgorin Disk Theorem Example
In mathematics, the Gershgorin circle theorem may be used to bound the spectrum of a square matrix. It was first published by the Soviet mathematician Semyon Aronovich Gershgorin in 1931. Gershgorin's name has been transliterated in several different ways, including Geršgorin, Gerschgorin, Gershgorin, Hershhorn, and Hirschhorn.

Statement and proof

Let A be a complex n\times n matrix, with entries a_{ij}. For i \in \{1,\dots,n\} let R_i be the sum of the absolute values of the non-diagonal entries in the i-th row:

: R_i = \sum_{j\neq i} \left|a_{ij}\right|.

Let D(a_{ii}, R_i) \subseteq \Complex be a closed disc centered at a_{ii} with radius R_i. Such a disc is called a Gershgorin disc.

:Theorem. Every eigenvalue of A lies within at least one of the Gershgorin discs D(a_{ii}, R_i).

''Proof.'' Let \lambda be an eigenvalue of A with corresponding eigenvector x = (x_j). Find ''i'' such that the element of ''x'' with the largest absolute value is x_i. Since Ax=\lambda x, in particular we take the ''i''th component of that eq ...




Matrix Inverse
In linear algebra, an invertible matrix (also ''non-singular'', ''non-degenerate'' or ''regular'') is a square matrix that has an inverse. In other words, if some other matrix is multiplied by the invertible matrix, the result can be multiplied by the inverse to undo the operation. An invertible matrix multiplied by its inverse yields the identity matrix. Invertible matrices are the same size as their inverse.

Definition

An n-by-n square matrix \mathbf{A} is called invertible if there exists an n-by-n square matrix \mathbf{B} such that

:\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n,

where \mathbf{I}_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix \mathbf{B} is uniquely determined by \mathbf{A}, and is called the (multiplicative) ''inverse'' of \mathbf{A}, denoted by \mathbf{A}^{-1}. Matrix inversion is the process of finding the matrix which, when multiplied by the original matrix, gives the identity matrix. Over a field, a square matrix that is ''not'' invertible is called singular or degener ...
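The defining identity AB = BA = I_n is easy to confirm numerically. A sketch in Python with NumPy on a hypothetical 2×2 matrix (determinant 1, so its inverse has integer entries):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # det(A) = 1, so A is invertible

A_inv = np.linalg.inv(A)
product = A @ A_inv          # should recover the 2x2 identity matrix
```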


Preconditioning
In mathematics, preconditioning is the application of a transformation, called the preconditioner, that conditions a given problem into a form more suitable for numerical solution methods. Preconditioning is typically related to reducing the condition number of the problem. The preconditioned problem is then usually solved by an iterative method.

Preconditioning for linear systems

In linear algebra and numerical analysis, a preconditioner P of a matrix A is a matrix such that P^{-1}A has a smaller condition number than A. It is also common to call T = P^{-1} the preconditioner, rather than P, since P itself is rarely explicitly available. In modern preconditioning, the application of T = P^{-1}, i.e., multiplication of a column vector, or a block of column vectors, by T = P^{-1}, is commonly performed in a matrix-free fashion, i.e., where neither P, nor T = P^{-1} (and often not even A) are explicitly available in matrix form. Preconditioners are useful in iterative methods to solve ...
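A minimal illustration is the Jacobi (diagonal) preconditioner P = diag(A): for a badly scaled matrix, applying P^{-1} can shrink the condition number dramatically. A sketch in Python with NumPy (the matrix below is a hypothetical ill-scaled example):

```python
import numpy as np

# Hypothetical matrix with diagonal entries spanning four orders of magnitude.
A = np.diag([1.0, 100.0, 10000.0]) + 0.1 * np.ones((3, 3))

# Jacobi preconditioner: P = diag(A), applied through its inverse.
P_inv = np.diag(1.0 / np.diag(A))

cond_before = np.linalg.cond(A)
cond_after = np.linalg.cond(P_inv @ A)   # P^{-1}A is close to the identity here
```

In practice P^{-1} is applied matrix-free inside an iterative solver rather than formed explicitly as here; the explicit product is only for the demonstration.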