Transform Theory
In mathematics, transform theory is the study of transforms, which relate a function in one domain to another function in a second domain. The essence of transform theory is that by a suitable choice of basis for a vector space a problem may be simplified, or ''diagonalized'', as in spectral theory. Main examples of transforms that are both well known and widely applicable include integral transforms such as the Fourier transform, the fractional Fourier transform, the Laplace transform, and linear canonical transformations (J.J. Healy, M.A. Kutay, H.M. Ozaktas and J.T. Sheridan, ''Linear Canonical Transforms: Theory and Applications'', Springer, New York, 2016). These transformations are used in signal processing, optics, and quantum mechanics.

Spectral theory
In spectral theory, the spectral theorem says that if ''A'' is an ''n''×''n'' self-adjoint matrix, there is an orthonormal basis of eigenvectors of ''A''. This implies that ''A'' is diagonalizable. Furthermore, each ...
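As a concrete illustration of this diagonalization (a standard example, not drawn from the cited text), the real symmetric matrix
: A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}
has orthonormal eigenvectors \tfrac{1}{\sqrt{2}}(1,1)^\mathsf{T} and \tfrac{1}{\sqrt{2}}(1,-1)^\mathsf{T} with eigenvalues 3 and 1. Expressed in that basis, ''A'' becomes the diagonal matrix \operatorname{diag}(3,1), so problems involving ''A'' (powers, exponentials, linear systems) reduce to independent scalar problems.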
Mathematics
Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of the empirical sciences and of mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and the spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics). Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a ''proof'' consisting of a succession of applications of ...
Orthonormal Basis
In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other. For example, the standard basis for a Euclidean space \R^n is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for \R^n arises in this fashion. An orthonormal basis can be derived from an orthogonal basis via normalization. The choice of an origin and an orthonormal basis forms a coordinate frame known as an ''orthonormal frame''. For a general inner product space V, an orthono ...
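To make the rotation example explicit (an illustration added here), rotating the standard basis of \R^2 by an angle \theta yields the vectors
: e_1' = (\cos\theta, \sin\theta), \qquad e_2' = (-\sin\theta, \cos\theta),
which satisfy e_1'\cdot e_1' = e_2'\cdot e_2' = 1 and e_1'\cdot e_2' = 0, so they again form an orthonormal basis.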
Z-transform
In mathematics and signal processing, the Z-transform converts a discrete-time signal, which is a sequence of real or complex numbers, into a complex-valued frequency-domain (the z-domain or z-plane) representation. It can be considered a discrete-time equivalent of the Laplace transform (the ''s-domain'' or ''s-plane''). This similarity is explored in the theory of time-scale calculus. While the continuous-time Fourier transform is evaluated on the s-domain's vertical axis (the imaginary axis), the discrete-time Fourier transform is evaluated along the z-domain's unit circle. The s-domain's left half-plane maps to the area inside the z-domain's unit circle, while the s-domain's right half-plane maps to the area outside of the z-domain's unit circle. In signal processing, one of the means of designing digital filters is to take analog designs, subject them to a bilinear transform which maps them from the s-domain to the z-domain, and then produce the digital filter by in ...
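The half-plane statements can be seen from the standard substitution z = e^{sT}, where T is the sampling period (a worked step added here for clarity). Writing s = \sigma + i\omega gives
: |z| = \left| e^{(\sigma + i\omega)T} \right| = e^{\sigma T},
so the left half-plane (\sigma < 0) maps inside the unit circle, the imaginary axis (\sigma = 0) maps onto the unit circle itself, and the right half-plane (\sigma > 0) maps outside it.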
Mellin Transform
In mathematics, the Mellin transform is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. This integral transform is closely connected to the theory of Dirichlet series, and is often used in number theory, mathematical statistics, and the theory of asymptotic expansions; it is closely related to the Laplace transform and the Fourier transform, and the theory of the gamma function and allied special functions. The Mellin transform of a complex-valued function f defined on \mathbf{R}^\times_+ = (0,\infty) is the function \mathcal{M}f of the complex variable s given (where it exists, see Fundamental strip below) by
: \mathcal{M}\left\{f\right\}(s) = \varphi(s) = \int_0^\infty x^{s-1} f(x) \, dx = \int_{\mathbf{R}^\times_+} f(x)\, x^s \,\frac{dx}{x}.
Notice that dx/x is a Haar measure on the multiplicative group \mathbf{R}^\times_+ and x\mapsto x^s is a (in general non-unitary) multiplicative character. The inverse transform is
: \mathcal{M}^{-1}\left\{\varphi\right\}(x) = f(x) = \frac{1}{2\pi i} \int_{c-i\infty}^{c+i\infty} x^{-s} \varphi(s)\, ds.
The notation ...
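A standard worked example (added for illustration) makes the connection to the gamma function concrete:
: \mathcal{M}\left\{e^{-x}\right\}(s) = \int_0^\infty x^{s-1} e^{-x}\, dx = \Gamma(s), \qquad \operatorname{Re}(s) > 0,
which is precisely the integral definition of the gamma function; the fundamental strip here is the half-plane \operatorname{Re}(s) > 0.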
Joukowsky Transform
In applied mathematics, the Joukowsky transform (sometimes transliterated ''Joukovsky'', ''Joukowski'' or ''Zhukovsky'') is a conformal map historically used to understand some principles of airfoil design. It is named after Nikolai Zhukovsky, who published it in 1910. The transform is
: z = \zeta + \frac{1}{\zeta},
where z = x + iy is a complex variable in the new space and \zeta = \chi + i \eta is a complex variable in the original space. In aerodynamics, the transform is used to solve for the two-dimensional potential flow around a class of airfoils known as Joukowsky airfoils. A Joukowsky airfoil is generated in the complex plane (z-plane) by applying the Joukowsky transform to a circle in the \zeta-plane. The coordinates of the centre of the circle are variables, and varying them modifies the shape of the resulting airfoil. The circle encloses the point \zeta = -1 (where the derivative is zero) and intersects the point \zeta = 1. This can be achieved for any allowable centre p ...
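A quick worked case (added as an illustration) is the image of the unit circle itself:
: \zeta = e^{i\theta} \;\Rightarrow\; z = e^{i\theta} + e^{-i\theta} = 2\cos\theta,
so the unit circle maps onto the real segment [-2, 2], a degenerate "flat-plate" airfoil. Since dz/d\zeta = 1 - 1/\zeta^2 vanishes at \zeta = \pm 1, a generating circle that passes through \zeta = 1 produces the characteristic sharp (cusped) trailing edge at z = 2.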
Hankel Transform
In mathematics, the Hankel transform expresses any given function ''f''(''r'') as the weighted sum of an infinite number of Bessel functions of the first kind J_\nu. The Bessel functions in the sum are all of the same order \nu, but differ in a scaling factor ''k'' along the ''r'' axis. The necessary coefficient of each Bessel function in the sum, as a function of the scaling factor ''k'', constitutes the transformed function. The Hankel transform is an integral transform and was first developed by the mathematician Hermann Hankel. It is also known as the Fourier–Bessel transform. Just as the Fourier transform for an infinite interval is related to the Fourier series over a finite interval, so the Hankel transform over an infinite interval is related to the Fourier–Bessel series over a finite interval.

Definition
The Hankel transform of order \nu of a function ''f''(''r'') is given by
: F_\nu(k) = \int_0^\infty f(r) J_\nu(kr) \,r\,\mathrm{d}r,
where J_\nu is the Bessel function of ...
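The transform is its own inverse in the following sense (a standard fact added here for completeness): provided ''f'' satisfies suitable decay conditions,
: f(r) = \int_0^\infty F_\nu(k) J_\nu(kr) \,k\,\mathrm{d}k,
so the same kernel J_\nu(kr) recovers f(r) from F_\nu(k), with the roles of ''r'' and ''k'' exchanged.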
Linear Canonical Transformation
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(C) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space. The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and Fresnel transforms as particular cases. The name "linear canonical transformation" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2; thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the metaplectic group. The basic properties of ...
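To make the parameter count concrete (an illustrative note, using one common convention), each LCT is labelled by a matrix
: \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad ad - bc = 1,
i.e. four parameters a, b, c, d subject to one determinant constraint, hence a three-parameter family. In this picture the ordinary Fourier transform corresponds to a 90° rotation of the time–frequency plane, and the fractional Fourier transform of angle \theta to rotation by \theta.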
Real Number
In mathematics, a real number is a number that can be used to measure a continuous one-dimensional quantity such as a duration or temperature. Here, ''continuous'' means that pairs of values can have arbitrarily small differences. Every real number can be almost uniquely represented by an infinite decimal expansion. The real numbers are fundamental in calculus (and in many other branches of mathematics), in particular by their role in the classical definitions of limits, continuity and derivatives. The set of real numbers, sometimes called "the reals", is traditionally denoted by a bold '''R''', often written in blackboard bold as \R. The adjective ''real'', used in the 17th century by René Descartes, distinguishes real numbers from imaginary numbers such as the square roots of -1. The real numbers include the rational numbers, such as the integers and fractions. The rest of the real numbers are called irrational numbers. Some irrational numbers (as well as all the rationals) a ...
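The qualifier ''almost'' uniquely refers to the familiar ambiguity of trailing nines (a clarifying example added here):
: 0.4999\ldots = 0.5000\ldots,
so nonzero numbers with terminating decimal expansions admit two infinite decimal representations, while all other real numbers admit exactly one.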
Eigenvalue
In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector \mathbf v of a linear transformation T is scaled by a constant factor \lambda when the linear transformation is applied to it: T\mathbf v=\lambda \mathbf v. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor \lambda (possibly a negative or complex number). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed. Th ...
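For a small worked example (added for illustration), the diagonal matrix
: A = \begin{pmatrix} 3 & 0 \\ 0 & -1 \end{pmatrix}
has eigenvectors (1,0)^\mathsf{T} and (0,1)^\mathsf{T}: the first is stretched by the eigenvalue 3, while the second is reversed in direction and kept at the same length by the eigenvalue -1.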
Diagonalizable Matrix
In linear algebra, a square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix. That is, if there exists an invertible matrix P and a diagonal matrix D such that P^{-1}AP = D. This is equivalent to A = PDP^{-1}. (Such D are not unique.) This property exists for any linear map: for a finite-dimensional vector space V, a linear map T:V\to V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T. These definitions are equivalent: if T has a matrix representation A = PDP^{-1} as above, then the column vectors of P form a basis consisting of eigenvectors of T, and the diagonal entries of D are the corresponding eigenvalues of T; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding the above P and D, and makes many subsequent computations easier. One can raise a diag ...
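One payoff (a standard illustration of why diagonalization makes computations easier, added here) is taking matrix powers: if A = PDP^{-1} with D = \operatorname{diag}(\lambda_1,\ldots,\lambda_n), then
: A^k = P D^k P^{-1} = P \operatorname{diag}(\lambda_1^k,\ldots,\lambda_n^k) P^{-1},
so computing A^k only requires raising the scalar eigenvalues to the k-th power.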