Characteristic Mode Analysis
Characteristic modes (CM) form a set of functions which, under specific boundary conditions, diagonalize the operator relating the field and the induced sources. Under certain conditions, the set of CM is unique and complete (at least theoretically) and is thereby capable of fully describing the behavior of a studied object. This article deals with characteristic mode decomposition in electromagnetics, the domain in which CM theory was originally proposed.
Background
CM decomposition was originally introduced as a set of modes diagonalizing a scattering matrix. The theory was subsequently generalized by Harrington and Mautz for antennas. Harrington, Mautz, and their students then developed several further extensions of the theory. Even though some precursors were published back in the late 1940s, the full potential of CM remained unrecognized for a further 40 years. The capabilities of CM were revisited in 2007 and, since then, interest in CM has dramatically ...
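In numerical practice, characteristic modes of a conducting body are commonly computed from a method-of-moments impedance matrix Z = R + jX by solving the generalized eigenvalue problem X J_n = λ_n R J_n. The sketch below uses a small synthetic symmetric matrix pair standing in for R and X (not a real impedance matrix) to show the mechanics:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6

# Synthetic stand-in for a method-of-moments impedance matrix Z = R + 1j*X:
# R symmetric positive definite, X symmetric (as for a lossless PEC body).
A = rng.standard_normal((n, n))
R = A @ A.T + n * np.eye(n)
B = rng.standard_normal((n, n))
X = (B + B.T) / 2

# Characteristic modes: X J_n = lambda_n R J_n (generalized eigenproblem)
eigvals, modes = eigh(X, R)

# Each mode satisfies the defining equation
residual = np.linalg.norm(X @ modes[:, 0] - eigvals[0] * R @ modes[:, 0])

# Modes are R-orthonormal: J_m^T R J_n = delta_mn (eigh normalizes this way)
gram = modes.T @ R @ modes
```

The eigenvalues λ_n carry the usual modal interpretation (λ_n = 0 at resonance, λ_n > 0 inductive, λ_n < 0 capacitive).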



Electric Field
An electric field (sometimes called E-field) is the physical field that surrounds electrically charged particles such as electrons. In classical electromagnetism, the electric field of a single charge (or group of charges) describes their capacity to exert attractive or repulsive forces on another charged object. Charged particles attract each other when the signs of their charges are opposite, one being positive while the other is negative, and repel each other when the signs of the charges are the same. Because these forces are exerted mutually, two charges must be present for the forces to take place. These forces are described by Coulomb's law, which says that the greater the magnitude of the charges, the greater the force, and the greater the distance between them, the weaker the force. Informally, the greater the charge of an object, the stronger its electric field. Similarly, an electric field is stronger nearer charged objects and weaker f ...
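The field of a point charge follows directly from Coulomb's law, E = q/(4πε0 r²). A small sketch using `scipy.constants`, also illustrating the inverse-square weakening with distance mentioned above:

```python
from scipy.constants import epsilon_0, pi

def e_field_magnitude(q, r):
    """Magnitude (V/m) of the electric field of a point charge q (C) at distance r (m)."""
    return q / (4 * pi * epsilon_0 * r**2)

# Field of a 1 microcoulomb charge at 1 m, and at 2 m:
E1 = e_field_magnitude(1e-6, 1.0)
E2 = e_field_magnitude(1e-6, 2.0)

# Doubling the distance weakens the field by a factor of four (inverse square law)
ratio = E1 / E2
```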




Vacuum Permittivity
Vacuum permittivity, commonly denoted ''ε''0 (pronounced "epsilon nought" or "epsilon zero"), is the value of the absolute dielectric permittivity of classical vacuum. It may also be referred to as the permittivity of free space, the electric constant, or the distributed capacitance of the vacuum. It is an ideal (baseline) physical constant. Its CODATA value is: :\varepsilon_0 \approx 8.854\,187\,8128 \times 10^{-12}\ \mathrm{F\,m^{-1}}. It is a measure of how dense an electric field is "permitted" to form in response to electric charges and relates the units for electric charge to mechanical quantities such as length and force. For example, the force between two separated electric charges with spherical symmetry (in the vacuum of classical electromagnetism) is given by Coulomb's law: :F_\mathrm{C} = \frac{1}{4\pi\varepsilon_0} \frac{q_1 q_2}{r^2}. Here, ''q''1 and ''q''2 are the charges, ''r'' is the distance between their centres, and the value of the constant fraction 1/(4π''ε''0) is approximately 9 \times 10^{9}\ \mathrm{N\,m^2\,C^{-2}}. Likewise, ''ε''0 appears in Maxwell's equations, which describe the properties of electr ...
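The Coulomb constant 1/(4πε0) quoted above can be evaluated directly from the value of ε0 shipped in `scipy.constants`:

```python
import math
from scipy.constants import epsilon_0

# Coulomb constant 1/(4*pi*eps0), approximately 9e9 N·m²/C²
k = 1 / (4 * math.pi * epsilon_0)

# Coulomb force between two 1 C charges 1 m apart in vacuum: F = k * q1*q2 / r^2
q1 = q2 = 1.0
r = 1.0
F = k * q1 * q2 / r**2
```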


Numerical Range
In the mathematical field of linear algebra and convex analysis, the numerical range or field of values of a complex n \times n matrix ''A'' is the set :W(A) = \left\{ \mathbf{x}^* A \mathbf{x} \,:\, \mathbf{x} \in \mathbb{C}^n,\ \mathbf{x}^* \mathbf{x} = 1 \right\}, where \mathbf{x}^* denotes the conjugate transpose of the vector \mathbf{x}. The numerical range includes, in particular, the diagonal entries of the matrix (obtained by choosing ''x'' equal to the unit vectors along the coordinate axes) and the eigenvalues of the matrix (obtained by choosing ''x'' equal to the eigenvectors). In engineering, numerical ranges are used as a rough estimate of the eigenvalues of ''A''. Recently, generalizations of the numerical range have been used to study quantum computing. A related concept is the numerical radius, which is the largest absolute value of the numbers in the numerical range, i.e. :r(A) = \sup \left\{ |\lambda| : \lambda \in W(A) \right\} = \sup_{\|\mathbf{x}\| = 1} \left| \langle \mathbf{x}, A\mathbf{x} \rangle \right|. Properties Let the sum of two sets denote their sumset. General properties # The ...
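The membership claims above are easy to verify numerically: evaluating x*Ax at coordinate unit vectors recovers the diagonal entries, evaluating it at eigenvectors recovers the eigenvalues, and random sampling gives a crude lower estimate of the numerical radius. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def rayleigh(A, x):
    """x* A x for a unit vector x — one point of the numerical range W(A)."""
    x = x / np.linalg.norm(x)
    return np.vdot(x, A @ x)

# Diagonal entries lie in W(A): choose x = coordinate unit vectors
diag_points = np.array([rayleigh(A, e) for e in np.eye(n)])

# Eigenvalues lie in W(A): choose x = eigenvectors
w, V = np.linalg.eig(A)
eig_points = np.array([rayleigh(A, V[:, k]) for k in range(n)])

# Monte-Carlo lower estimate of the numerical radius r(A) = sup |W(A)|
samples = [abs(rayleigh(A, rng.standard_normal(n) + 1j * rng.standard_normal(n)))
           for _ in range(2000)]
r_est = max(samples)
```

Since |x*Ax| is at most the spectral norm for unit x, the estimate never exceeds ‖A‖₂.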


Rayleigh Quotient
In mathematics, the Rayleigh quotient for a given complex Hermitian matrix M and nonzero vector ''x'' is defined as :R(M, \mathbf{x}) = \frac{\mathbf{x}^* M \mathbf{x}}{\mathbf{x}^* \mathbf{x}}. For real matrices and vectors, the condition of being Hermitian reduces to that of being symmetric, and the conjugate transpose \mathbf{x}^* to the usual transpose \mathbf{x}'. Note that R(M, c\mathbf{x}) = R(M, \mathbf{x}) for any non-zero scalar ''c''. Recall that a Hermitian (or real symmetric) matrix is diagonalizable with only real eigenvalues. It can be shown that, for a given matrix, the Rayleigh quotient reaches its minimum value \lambda_\min (the smallest eigenvalue of ''M'') when ''x'' is v_\min (the corresponding eigenvector). Similarly, R(M, \mathbf{x}) \leq \lambda_\max and R(M, v_\max) = \lambda_\max. The Rayleigh quotient is used in the min-max theorem to get exact values of all eigenvalues. It is also used in eigenvalue algorithms (such as Rayleigh quotient iteration) to obtain an eigenvalue approximation from an eigenvector approximation. The range of the Rayleigh quotient (fo ...
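The two uses named above can both be sketched in a few lines: the quotient of any nonzero vector is trapped between the extreme eigenvalues, and Rayleigh quotient iteration refines a rough eigenvector guess into an eigenpair (a minimal sketch on a random symmetric matrix; production code would add a proper convergence test):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = rng.standard_normal((n, n))
M = (B + B.T) / 2                                  # real symmetric (Hermitian)

def rayleigh_quotient(M, x):
    return (x @ M @ x) / (x @ x)

# Rayleigh quotient iteration: refine an eigenpair from a rough guess
x = rng.standard_normal(n)
for _ in range(20):
    mu = rayleigh_quotient(M, x)
    try:
        y = np.linalg.solve(M - mu * np.eye(n), x)  # shifted inverse step
    except np.linalg.LinAlgError:
        break                                       # mu is (numerically) an eigenvalue
    x = y / np.linalg.norm(y)
mu = rayleigh_quotient(M, x)

eigs = np.linalg.eigvalsh(M)

# For any nonzero z, lambda_min <= R(M, z) <= lambda_max
z = rng.standard_normal(n)
rq = rayleigh_quotient(M, z)
```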


Hermitian Transpose
In mathematics, the conjugate transpose, also known as the Hermitian transpose, of an m \times n complex matrix \mathbf{A} is an n \times m matrix obtained by transposing \mathbf{A} and applying complex conjugation to each entry (the complex conjugate of a+ib being a-ib, for real numbers a and b). There are several notations, such as \mathbf{A}^\mathrm{H} or \mathbf{A}^*, \mathbf{A}', or (often in physics) \mathbf{A}^\dagger. For real matrices, the conjugate transpose is just the transpose, \mathbf{A}^\mathrm{H} = \mathbf{A}^\operatorname{T}. Definition The conjugate transpose of an m \times n matrix \mathbf{A} is formally defined by :\left(\mathbf{A}^\mathrm{H}\right)_{ij} = \overline{A_{ji}}, where the subscript ij denotes the (i,j)-th entry (matrix element), for 1 \le i \le n and 1 \le j \le m, and the overbar denotes a scalar complex conjugate. This definition can also be written as :\mathbf{A}^\mathrm{H} = \left(\overline{\mathbf{A}}\right)^\operatorname{T} = \overline{\mathbf{A}^\operatorname{T}}, where \mathbf{A}^\operatorname{T} denotes the transpose and \overline{\mathbf{A}} denotes the matrix with complex conjugated entries. Other na ...
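In NumPy the conjugate transpose is simply `A.conj().T`. The sketch below checks two of its standard algebraic properties, the reversal rule (AB)^H = B^H A^H and the adjoint relation ⟨Ax, y⟩ = ⟨x, A^H y⟩:

```python
import numpy as np

A = np.array([[1 + 2j, 3 - 1j, 0 + 0j],
              [4 + 0j, 0 - 1j, 2 + 2j]])            # 2 x 3 complex matrix

AH = A.conj().T                                     # conjugate transpose, shape 3 x 2

B = np.array([[1j, 0], [2, 1 - 1j], [0, 3j]])       # 3 x 2

# Reversal rule: (AB)^H = B^H A^H
lhs = (A @ B).conj().T
rhs = B.conj().T @ A.conj().T

# Adjoint property: <Ax, y> = <x, A^H y>
# (np.vdot conjugates its first argument, matching the physics convention)
x = np.array([1 + 1j, 2 - 1j, 0.5j])
y = np.array([2 - 3j, 1 + 0j])
inner1 = np.vdot(A @ x, y)
inner2 = np.vdot(x, AH @ y)
```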




Bilinear Form
In mathematics, a bilinear form is a bilinear map on a vector space ''V'' (the elements of which are called ''vectors'') over a field ''K'' (the elements of which are called ''scalars''). In other words, a bilinear form is a function B : V \times V \to K that is linear in each argument separately:
* B(\mathbf{u} + \mathbf{v}, \mathbf{w}) = B(\mathbf{u}, \mathbf{w}) + B(\mathbf{v}, \mathbf{w}) and B(\lambda \mathbf{u}, \mathbf{v}) = \lambda B(\mathbf{u}, \mathbf{v}); and
* B(\mathbf{u}, \mathbf{v} + \mathbf{w}) = B(\mathbf{u}, \mathbf{v}) + B(\mathbf{u}, \mathbf{w}) and B(\mathbf{u}, \lambda \mathbf{v}) = \lambda B(\mathbf{u}, \mathbf{v}).
The dot product on \R^n is an example of a bilinear form which is also an inner product. An example of a bilinear form that is not an inner product would be the four-vector product. The definition of a bilinear form can be extended to include modules over a ring, with linear maps replaced by module homomorphisms. When ''K'' is the field of complex numbers \Complex, one is often more interested in sesquilinear forms, which are similar to bilinear forms but are conjugate linear in one argument. Coordinate representation Let ''V'' be an ''n''-dimensional vector space with basis \{\mathbf{e}_1, \dots, \mathbf{e}_n\}. The matrix ''A'', defined by A_{ij} = B(\mathbf{e}_i, \mathbf{e}_j), is called the ''matrix of the bilinear form'' on the basis. If the matrix represents a ve ...
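The coordinate representation is concrete in code: on R^n a bilinear form is B(x, y) = x^T A y, its matrix is recovered by evaluating the form on basis vectors, and linearity in each slot can be checked directly. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A = rng.standard_normal((n, n))       # matrix of the form in the standard basis

def B(x, y):
    """Bilinear form on R^n represented by the matrix A: B(x, y) = x^T A y."""
    return x @ A @ y

e = np.eye(n)

# Recovering the matrix: A_ij = B(e_i, e_j)
A_rec = np.array([[B(e[i], e[j]) for j in range(n)] for i in range(n)])

# Linearity in each argument separately
u, v, w = rng.standard_normal((3, n))
lam = 2.5
left_lin = B(u + lam * v, w)
left_lin_expanded = B(u, w) + lam * B(v, w)
right_lin = B(u, v + lam * w)
right_lin_expanded = B(u, v) + lam * B(u, w)
```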



Arnoldi Iteration
In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called ''direct methods'', which must run to completion to give any useful result (see, for example, the Householder transformation). The partial result in this case is the first few vectors of the basis the algorithm is building. When applied to Hermitian matrices it reduces to the Lanczos algorithm. The Arnoldi iteration was invented by W. E. Arnoldi in 1951. Krylov subspaces and the power iteration An intuitive method for finding the largest (in absolute value) ei ...
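The iteration itself is short: each step multiplies the last basis vector by A and orthogonalizes the result against all previous basis vectors, accumulating the coefficients in an upper Hessenberg matrix H that satisfies A Q_k = Q_{k+1} H. A minimal sketch:

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: orthonormal basis Q of the Krylov subspace
    span{b, Ab, ..., A^(k-1) b}, and the (k+1) x k upper Hessenberg matrix H
    satisfying A @ Q[:, :k] = Q @ H."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1), dtype=complex)
    H = np.zeros((k + 1, k), dtype=complex)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        v = A @ Q[:, j]
        for i in range(j + 1):                      # modified Gram-Schmidt
            H[i, j] = np.vdot(Q[:, i], v)
            v = v - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:                     # invariant subspace found
            return Q[:, :j + 1], H[:j + 2, :j + 1]
        Q[:, j + 1] = v / H[j + 1, j]
    return Q, H

rng = np.random.default_rng(4)
n, k = 50, 10
A = rng.standard_normal((n, n))                     # general (non-Hermitian) matrix
b = rng.standard_normal(n)
Q, H = arnoldi(A, b, k)

orth_err = np.linalg.norm(Q.conj().T @ Q - np.eye(Q.shape[1]))
arnoldi_err = np.linalg.norm(A @ Q[:, :k] - Q @ H)
```

The eigenvalues of the leading k × k block of H (the Ritz values) approximate extremal eigenvalues of A.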


Generalized Schur Decomposition
In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily similar to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix. Statement The complex Schur decomposition reads as follows: if ''A'' is an n \times n square matrix with complex entries, then ''A'' can be expressed as :A = Q U Q^{-1} for some unitary matrix ''Q'' (so that the inverse ''Q''−1 is also the conjugate transpose ''Q''* of ''Q''), and some upper triangular matrix ''U''. This is called a Schur form of ''A''. Since ''U'' is similar to ''A'', it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of ''U''. The Schur decomposition implies that there exists a nested sequence of ''A''-invariant subspaces, and that there exists an ordere ...
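The statement can be checked numerically with `scipy.linalg.schur`, which returns the triangular factor and the unitary similarity; the diagonal of the triangular factor is then the spectrum of A:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(5)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Complex Schur form: A = Q U Q* with Q unitary and U upper triangular
U, Q = schur(A, output='complex')

reconstruction_err = np.linalg.norm(A - Q @ U @ Q.conj().T)
lower_part = np.linalg.norm(np.tril(U, -1))         # strictly lower part must vanish

# The diagonal of U carries the eigenvalues of A (same multiset)
schur_eigs = np.sort_complex(np.diag(U))
true_eigs = np.sort_complex(np.linalg.eigvals(A))
```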


Galerkin Method
In mathematics, in the area of numerical analysis, Galerkin methods are a family of methods for converting a continuous operator problem, such as a differential equation, commonly in a weak formulation, to a discrete problem by applying linear constraints determined by finite sets of basis functions. They are named after the Soviet mathematician Boris Galerkin. Often when referring to a Galerkin method, one also gives the name along with the typical assumptions and approximation methods used: * Ritz–Galerkin method (after Walther Ritz) typically assumes a symmetric and positive-definite bilinear form in the weak formulation, where the differential equation for a physical system can be formulated via minimization of a quadratic function representing the system energy, and the approximate solution is a linear combination of the given set of basis functions. (A. Ern, J. L. Guermond, ''Theory and Practice of Finite Elements'', Springer, 2004.) * Bubnov–Galerkin method (after Ivan ...
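A minimal Ritz–Galerkin sketch, under simplifying assumptions chosen here for illustration: the model problem −u″ = 1 on (0, 1) with u(0) = u(1) = 0, a sine basis φ_k(x) = sin(kπx), and the symmetric positive-definite bilinear form a(u, v) = ∫ u′v′ dx. For this basis the stiffness matrix is diagonal, so the discrete system is trivial to solve, and the result can be compared with the exact solution u(x) = x(1 − x)/2:

```python
import numpy as np

# Galerkin sketch: -u'' = 1 on (0,1), u(0) = u(1) = 0,
# basis phi_k(x) = sin(k*pi*x), k = 1..N.
N = 10
k = np.arange(1, N + 1)

# Stiffness matrix entries a(phi_j, phi_k) = int phi_j' phi_k' dx
# = (k*pi)^2 / 2 * delta_jk  (diagonal for this basis)
a_diag = (k * np.pi) ** 2 / 2

# Load vector: L(phi_k) = int_0^1 sin(k*pi*x) dx = (1 - cos(k*pi)) / (k*pi)
b = (1 - np.cos(k * np.pi)) / (k * np.pi)

c = b / a_diag                          # Galerkin coefficients

def u_h(x):
    """Approximate solution as a linear combination of the basis functions."""
    return np.sum(c * np.sin(k * np.pi * x))

# Exact solution is u(x) = x(1 - x)/2; compare at the midpoint
u_mid = u_h(0.5)
exact_mid = 0.5 * (1 - 0.5) / 2         # = 0.125
```

With only ten basis functions the midpoint value already agrees with the exact 0.125 to better than 10⁻³.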