Matrix Pencil
In linear algebra, a matrix pencil is a matrix-valued polynomial function defined on a field K, usually the real or complex numbers. Definition Let K be a field (typically, K \in \{\R, \Complex\}; the definition can be generalized to rngs), let \ell \ge 0 be a non-negative integer, let n > 0 be a positive integer, and let A_0, A_1, \dots, A_\ell be n\times n matrices (i.e. A_i \in \mathrm{M}(K, n \times n) for all i = 0, \dots, \ell). Then the matrix pencil defined by A_0, \dots, A_\ell is the matrix-valued function L \colon K \to \mathrm{M}(K, n \times n) defined by :L(\lambda) = \sum_{i=0}^\ell \lambda^i A_i. The ''degree'' of the matrix pencil is defined as the largest integer 0 \le k \le \ell such that A_k \ne 0_n, where 0_n is the n \times n zero matrix over K. Linear matrix pencils A particular case is a linear matrix pencil L(\lambda) = A - \lambda B (where B \ne 0). We denote it briefly with the notation (A, B), and note that, in the more general notation, A_0 = A and A_1 = -B (not B). Proper ...
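As a concrete illustration (a minimal sketch with made-up matrices, not taken from the article), the following Python snippet evaluates a small quadratic pencil at a point and computes the generalized eigenvalues of a linear pencil (A, B) with SciPy:

# Minimal sketch: evaluating a matrix pencil and solving a linear pencil (A, B).
# The matrices here are arbitrary examples.
import numpy as np
from scipy.linalg import eigvals

# Quadratic pencil L(lambda) = A0 + lambda*A1 + lambda^2*A2
A0 = np.array([[2.0, 0.0], [0.0, 1.0]])
A1 = np.array([[0.0, 1.0], [1.0, 0.0]])
A2 = np.array([[1.0, 0.0], [0.0, 1.0]])

def L(lam, coeffs):
    """Evaluate L(lambda) = sum_i lambda^i * A_i."""
    return sum((lam ** i) * Ai for i, Ai in enumerate(coeffs))

print(L(0.5, [A0, A1, A2]))

# Linear pencil (A, B): generalized eigenvalues lambda with det(A - lambda*B) = 0
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[1.0, 0.0], [0.0, 2.0]])
print(eigvals(A, B))   # SciPy solves A x = lambda B x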


Linear Algebra
Linear algebra is the branch of mathematics concerning linear equations such as :a_1x_1+\cdots +a_nx_n=b, linear maps such as :(x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n, and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to function spaces. Linear algebra is also used in most sciences and fields of engineering because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order a ...
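As a small worked example (arbitrary numbers, for illustration only), a system of two such linear equations can be written as A x = b and solved with NumPy:

# Minimal sketch: two linear equations 2x1 + x2 = 5 and x1 + 3x2 = 10 in matrix form.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # coefficient matrix
b = np.array([5.0, 10.0])      # right-hand sides

x = np.linalg.solve(A, b)      # solves A x = b, giving x = [1, 3]
print(x, A @ x)                # A @ x reproduces b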


Condition Number
In numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given f(x) = y, one is solving for ''x,'' and thus the condition number of the (local) inverse must be used. The condition number is derived from the theory of propagation of uncertainty, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of t ...
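A minimal sketch of this sensitivity in the linear-algebra setting (the matrix below is an arbitrary, nearly singular example): a large condition number means a small perturbation of the data b produces a large change in the solution x of A x = b.

# Minimal sketch: condition number of a matrix and its effect on solving Ax = b.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])     # nearly singular, hence ill-conditioned
b = np.array([2.0, 2.0001])

print(np.linalg.cond(A))          # large condition number (~4e4)

x = np.linalg.solve(A, b)         # exact-data solution: [1, 1]
x_perturbed = np.linalg.solve(A, b + np.array([0.0, 1e-4]))
print(x, x_perturbed)             # a tiny change in b shifts x substantially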


Peter Lancaster
Peter Lancaster (born 14 November 1929) is a British–Canadian mathematician. He is professor emeritus at the University of Calgary, where he has worked since 1962. His research focuses on matrix analysis and related fields, motivated by problems from vibration theory, numerical analysis, systems theory, and signal processing. Biography Lancaster was born in Appleby, England, and attended Sir John Deane's Grammar School and the Liverpool Collegiate School. After an unsuccessful year in the University of Liverpool School of Architecture, he joined the mathematics program in the same university, graduating with an honours degree in 1952. Lancaster thereupon worked as an aerodynamicist with the English Electric Company until 1957, completing a Master's degree at the same time under the supervision of Louis Rosenhead. He took a teaching post at the University of Malaya, from which he was granted a PhD in 1964, and moved to Canada in November 1962 to work at the University o ...


Johns Hopkins University Press
Johns Hopkins University Press (also referred to as JHU Press or JHUP) is the publishing division of Johns Hopkins University. It was founded in 1878 and is the oldest continuously running university press in the United States. The press publishes books and journals, and operates other divisions including fulfillment and electronic databases. Its headquarters are in the Charles Village section of Baltimore, Maryland. In 2017, after the retirement of Kathleen Keane, who is credited with modernizing JHU Press for the digital age, the university appointed new director Barbara Pope. Overview Daniel Coit Gilman, the first president of Johns Hopkins University, inaugurated the press in 1878. The press began as the university's Public ...




Rayleigh Quotient
In mathematics, the Rayleigh quotient for a given complex Hermitian matrix M and nonzero vector ''x'' is defined as :R(M,x) = \frac{x^* M x}{x^* x}. For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose x^* to the usual transpose x'. Note that R(M, c x) = R(M,x) for any non-zero scalar ''c''. Recall that a Hermitian (or real symmetric) matrix is diagonalizable with only real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value \lambda_\min (the smallest eigenvalue of ''M'') when ''x'' is v_\min (the corresponding eigenvector). Similarly, R(M, x) \leq \lambda_\max and R(M, v_\max) = \lambda_\max. The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms (such as Rayleigh quotient iteration) to obtain an eigenvalue approximation from an eigenvector approximation. The range of the Rayleigh quotient (fo ...
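A minimal numerical check of these bounds (the matrix is an arbitrary symmetric example):

# Minimal sketch: Rayleigh quotient of a real symmetric matrix and the bounds
# lambda_min <= R(M, x) <= lambda_max for any nonzero x.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # symmetric (Hermitian in the real case)

def rayleigh(M, x):
    return (x @ M @ x) / (x @ x)  # x* M x / (x* x) for real x

eigvals, eigvecs = np.linalg.eigh(M)
print(eigvals)                               # lambda_min, lambda_max
print(rayleigh(M, eigvecs[:, 0]))            # equals lambda_min
print(rayleigh(M, eigvecs[:, -1]))           # equals lambda_max
print(rayleigh(M, np.array([1.0, 1.0])))     # lies between the two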


Quadratic Eigenvalue Problem
In mathematics, the quadratic eigenvalue problem (QEP) (F. Tisseur and K. Meerbergen, "The quadratic eigenvalue problem", SIAM Rev., 43 (2001), pp. 235–286) is to find scalar eigenvalues \lambda, left eigenvectors y and right eigenvectors x such that :Q(\lambda)x = 0 \text{ and } y^\ast Q(\lambda) = 0, where Q(\lambda)=\lambda^2 M + \lambda C + K, with matrix coefficients M, C, K \in \mathbb{C}^{n \times n}, and we require that M \neq 0 (so that we have a nonzero leading coefficient). There are 2n eigenvalues that may be ''infinite'' or finite, and possibly zero. This is a special case of a nonlinear eigenproblem. Q(\lambda) is also known as a quadratic polynomial matrix. Spectral theory A QEP is said to be regular if \det(Q(\lambda)) \not \equiv 0 identically. The coefficient of the \lambda^{2n} term in \det(Q(\lambda)) is \det(M), implying that the QEP is regular if M is nonsingular. Eigenvalues at infinity and eigenvalues at 0 may be exchanged by considering the reversed polynomial ...
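One common way to solve a QEP numerically is to linearize it into a 2n×2n linear pencil; the sketch below (illustrative matrices, not taken from the cited survey) uses the first companion linearization with SciPy:

# Minimal sketch: solving Q(lambda) x = (lambda^2 M + lambda C + K) x = 0
# via a companion linearization to a 2n x 2n linear pencil (A, B).
import numpy as np
from scipy.linalg import eigvals

n = 2
M = np.eye(n)
C = np.array([[0.1, 0.0], [0.0, 0.2]])
K = np.array([[4.0, 1.0], [1.0, 9.0]])

I = np.eye(n)
Z = np.zeros((n, n))
# First companion form: A z = lambda B z with z = [x; lambda x]
A = np.block([[Z, I], [-K, -C]])
B = np.block([[I, Z], [Z, M]])

w = eigvals(A, B)
print(np.sort_complex(w))          # the 2n quadratic eigenvalues

# Check: det(Q(lambda)) should vanish at each computed eigenvalue
lam = w[0]
print(np.linalg.det(lam**2 * M + lam * C + K))  # ~ 0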


Nonlinear Eigenproblem
In mathematics, a nonlinear eigenproblem, sometimes nonlinear eigenvalue problem, is a generalization of the (ordinary) eigenvalue problem to equations that depend nonlinearly on the eigenvalue. Specifically, it refers to equations of the form : M (\lambda) x = 0 , where x\neq0 is a vector, and ''M'' is a matrix-valued function of the number \lambda. The number \lambda is known as the (nonlinear) eigenvalue, the vector x as the (nonlinear) eigenvector, and (\lambda,x) as the eigenpair. The matrix M (\lambda) is singular at an eigenvalue \lambda. Definition In the discipline of numerical linear algebra the following definition is typically used. Let \Omega \subseteq \Complex, and let M : \Omega \rightarrow \Complex^{n \times n} be a function that maps scalars to matrices. A scalar \lambda \in \Complex is called an ''eigenvalue'', and a nonzero vector x \in \Complex^n is called a ''right eigenvector'' if M (\lambda) x = 0. Moreover, a nonzero vector y \in \Complex^n is called a ''left ...
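As a minimal illustration (the example problem and its numbers are assumed here, not taken from the article), real eigenvalues of a small nonlinear eigenproblem can be located by scanning for sign changes of \det M(\lambda) along the real axis and refining with a scalar root finder; practical solvers use more sophisticated methods:

# Minimal sketch (illustrative problem): locating real eigenvalues of a
# nonlinear eigenproblem M(lambda) x = 0 via sign changes of det(M(lambda)).
import numpy as np
from scipy.optimize import brentq

A = np.array([[1.0, 0.2],
              [0.2, 3.0]])

def M(lam):
    # Example nonlinear dependence on lambda (a delay-type term exp(-lambda))
    return A - lam * np.eye(2) + 0.5 * np.exp(-lam) * np.eye(2)

def d(lam):
    return np.linalg.det(M(lam))

# Scan a real interval for sign changes of det(M(lambda)), then refine
grid = np.linspace(0.0, 4.0, 81)
for a, b in zip(grid[:-1], grid[1:]):
    if d(a) * d(b) < 0:
        lam = brentq(d, a, b)
        # Corresponding eigenvector: null vector of M(lam) from the SVD
        _, _, vt = np.linalg.svd(M(lam))
        x = vt[-1]
        print(lam, np.linalg.norm(M(lam) @ x))   # residual ~ 0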


Generalized Pencil-of-function Method
Generalized pencil-of-function method (GPOF), also known as the matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to the Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency. The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems from their transient responses, building on Sarkar's past work on the original pencil-of-function method. The method has a plethora of applications in electrical engineering, particularly related to problems in computational electromagnetics, microwave engineering and antenna theory. Method Mathematical basis A transient electromagnetic signal can be represented as: :y(t)=x(t)+n(t) \approx \sum_{i=1}^{M} R_i e^{s_i t} + n(t); \quad 0 \leq t \leq T, where : y(t) is the observed time-domain signal, : n(t) is the signal noise, : x(t) is the actual s ...
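The sketch below shows only the core matrix-pencil step on an assumed noiseless test signal; the full GPOF method additionally uses an SVD-based filtering step for noise robustness, which is omitted here:

# Minimal sketch (not the full GPOF algorithm): estimating the poles of a sum
# of complex exponentials from uniform samples with a matrix pencil.
import numpy as np
from scipy.linalg import hankel, pinv

dt = 0.01
t = np.arange(0, 1, dt)
# Noiseless test signal: a conjugate pair of damped exponentials (made-up values)
s_true = np.array([-1.0 + 20j, -1.0 - 20j])
R_true = np.array([0.5 - 0.3j, 0.5 + 0.3j])
y = (R_true[None, :] * np.exp(t[:, None] * s_true[None, :])).sum(axis=1).real

N = len(y)
L = 30                                   # pencil parameter
Y = hankel(y[: N - L], y[N - L - 1:])    # (N-L) x (L+1) Hankel data matrix
Y1, Y2 = Y[:, :-1], Y[:, 1:]             # shifted sub-matrices

# Eigenvalues of pinv(Y1) @ Y2 contain z_i = exp(s_i * dt); with noiseless
# data the spurious eigenvalues cluster near zero.
z = np.linalg.eigvals(pinv(Y1) @ Y2)
z = z[np.abs(z) > 0.5]                   # crude rejection of spurious values
print(np.sort_complex(np.log(z) / dt))   # recovered poles s_i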


Generalized Eigenvalue Problem
In linear algebra, eigendecomposition is the factorization of a matrix into a canonical form, whereby the matrix is represented in terms of its eigenvalues and eigenvectors. Only diagonalizable matrices can be factorized in this way. When the matrix being factorized is a normal or real symmetric matrix, the decomposition is called "spectral decomposition", derived from the spectral theorem. Fundamental theory of matrix eigenvectors and eigenvalues A (nonzero) vector \mathbf{v} of dimension N is an eigenvector of a square N \times N matrix \mathbf{A} if it satisfies a linear equation of the form \mathbf{A} \mathbf{v} = \lambda \mathbf{v} for some scalar \lambda. Then \lambda is called the eigenvalue corresponding to \mathbf{v}. Geometrically speaking, the eigenvectors of \mathbf{A} are the vectors that \mathbf{A} merely elongates or shrinks, and the amount that they elongate/shrink by is the eigenvalue. The above equation is called the eigenvalue equation or the eigenvalue problem. This yields an equation for the eigenvalues p\left(\lambda\right) = \det\lef ...
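A minimal numerical sketch of the eigenvalue equation and the resulting decomposition (arbitrary example matrix):

# Minimal sketch: eigendecomposition A = V diag(lambda) V^{-1} with NumPy,
# plus a check of the eigenvalue equation A v = lambda v.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, V = np.linalg.eig(A)      # columns of V are eigenvectors
print(eigvals)                     # eigenvalues 3 and 1 (order may vary)

# Verify A v = lambda v for the first eigenpair
v = V[:, 0]
print(A @ v, eigvals[0] * v)

# Reassemble A from its eigendecomposition
print(V @ np.diag(eigvals) @ np.linalg.inv(V))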




Diagonal Matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}, while an example of a 3×3 diagonal matrix is \begin{bmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{bmatrix}. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a ''scalar matrix''; for example, \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}. In geometry, a diagonal matrix may be used as a ''scaling matrix'', since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in uniform change in scale. Definition As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix \mathbf{D} = (d_{i,j}) with n columns and n rows is diagonal if \forall i,j \in \{1, 2, \ldots, n\}, i \ne j ...
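A short illustration with NumPy (arbitrary example values):

# Minimal sketch: building diagonal matrices and using one as a scaling matrix.
import numpy as np

D = np.diag([3.0, 2.0])            # the 2x2 diagonal matrix [[3, 0], [0, 2]]
S = 0.5 * np.eye(2)                # a scalar matrix (multiple of the identity)

v = np.array([1.0, 1.0])
print(D @ v)                       # scales the axes by 3 and 2: [3, 2]
print(S @ v)                       # uniform scaling by 0.5:     [0.5, 0.5]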


Similar Matrices
In linear algebra, two ''n''-by-''n'' matrices ''A'' and ''B'' are called similar if there exists an invertible ''n''-by-''n'' matrix ''P'' such that B = P^{-1} A P . Similar matrices represent the same linear map under two possibly different bases, with ''P'' being the change-of-basis matrix. A transformation A \mapsto P^{-1} A P is called a similarity transformation or conjugation of the matrix ''A''. In the general linear group, similarity is therefore the same as conjugacy, and similar matrices are also called conjugate; however, in a given subgroup ''H'' of the general linear group, the notion of conjugacy may be more restrictive than similarity, since it requires that ''P'' be chosen to lie in ''H''. Motivating example When defining a linear transformation, it can be the case that a change of basis can result in a simpler form of the same transformation. For example, the matrix representing a rotation in \R^3 when the axis of rotation is not aligned with the coordinate axes can be complicated to compute. If the axis of rotation we ...
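A quick numerical check that similarity preserves the spectrum (arbitrary example matrices):

# Minimal sketch: similar matrices B = P^{-1} A P share the same eigenvalues.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])         # any invertible matrix will do

B = np.linalg.inv(P) @ A @ P
print(np.linalg.eigvals(A))        # [2, 3]
print(np.linalg.eigvals(B))        # same spectrum, possibly reordered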


QR Algorithm
In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate. The practical QR algorithm Formally, let A be a real matrix of which we want to compute the eigenvalues, and let A_0 := A. At the k-th step (starting with k = 0), we compute the QR decomposition A_k = Q_k R_k, where Q_k is an orthogonal matrix (i.e., Q_k^{\mathsf T} = Q_k^{-1}) and R_k is an upper triangular matrix. We then form A_{k+1} = R_k Q_k. Note that A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^{\mathsf T} A_k Q_k, so all the A_k are similar and hence they have the same eigenvalues. The algorithm is numerically stable b ...
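A minimal sketch of the basic, unshifted iteration described above (practical implementations first reduce the matrix to Hessenberg form and use shifts; the example matrix is arbitrary):

# Minimal sketch: the basic (unshifted) QR iteration.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])    # symmetric example, real eigenvalues

Ak = A.copy()
for _ in range(100):
    Q, R = np.linalg.qr(Ak)        # A_k = Q_k R_k
    Ak = R @ Q                     # A_{k+1} = R_k Q_k, similar to A_k

print(np.diag(Ak))                 # converges toward the eigenvalues
print(np.linalg.eigvalsh(A))       # reference values for comparison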