Fourier–Bessel Series
In mathematics, a Fourier–Bessel series is a particular kind of generalized Fourier series (an infinite series expansion on a finite interval) based on Bessel functions. Fourier–Bessel series are used in the solution of partial differential equations, particularly in cylindrical coordinate systems. Definition The Fourier–Bessel series of a function f(x) with domain [0, b], f: [0, b] \to \R, is the representation of that function as a linear combination of many orthogonal versions of the same Bessel function of the first kind J_\alpha, where the argument to each version ''n'' is differently scaled, according to (J_\alpha)_n(x) := J_\alpha\left(\frac{u_{\alpha,n}}{b} x\right), where u_{\alpha,n} is the root numbered ''n'' associated with the Bessel function J_\alpha and the c_n are the assigned coefficients: f(x) \sim \sum_{n=1}^{\infty} c_n J_\alpha\left(\frac{u_{\alpha,n}}{b} x\right). Interpretation The Fourier–Bessel series may be thought of as a Fourier expansion in the ρ coordi ...
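As a rough numerical sketch of this definition, the snippet below computes the first few coefficients c_n for a hypothetical test function f(x) = x(b - x) on [0, b] with α = 0, using the standard orthogonality of the scaled Bessel functions with weight x (a fact not quoted in the excerpt above); SciPy's jv and jn_zeros supply the Bessel values and the roots u_{α,n}.

import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

# Sketch: Fourier-Bessel coefficients of f on [0, b] for integer order alpha,
# assuming the usual weight x and the Dirichlet-type roots u_{alpha,n}.
alpha, b, N = 0, 1.0, 5
f = lambda x: x * (b - x)            # hypothetical test function
roots = jn_zeros(alpha, N)           # first N roots u_{alpha,1..N}

coeffs = []
for u in roots:
    num, _ = quad(lambda x: f(x) * jv(alpha, u * x / b) * x, 0, b)
    norm = (b**2 / 2) * jv(alpha + 1, u)**2   # squared norm of J_alpha(u x / b) with weight x
    coeffs.append(num / norm)

# Partial-sum reconstruction at a sample point (coarse with only N terms)
x0 = 0.3
approx = sum(c * jv(alpha, u * x0 / b) for c, u in zip(coeffs, roots))
print(approx, f(x0))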

Mathematics
Mathematics is a field of study that discovers and organizes methods, theories and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics). Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a ''proof'' consisting of a succession of applications of in ...

Hankel Transform
In mathematics, the Hankel transform expresses any given function ''f''(''r'') as the weighted sum of an infinite number of Bessel functions of the first kind J_\nu(kr). The Bessel functions in the sum are all of the same order ν, but differ in a scaling factor ''k'' along the ''r'' axis. The necessary coefficient of each Bessel function in the sum, as a function of the scaling factor ''k'', constitutes the transformed function. The Hankel transform is an integral transform and was first developed by the mathematician Hermann Hankel. It is also known as the Fourier–Bessel transform. Just as the Fourier transform for an infinite interval is related to the Fourier series over a finite interval, so the Hankel transform over an infinite interval is related to the Fourier–Bessel series over a finite interval. Definition The Hankel transform of order \nu of a function ''f''(''r'') is given by : F_\nu(k) = \int_0^\infty f(r) J_\nu(kr) \,r\,\mathrm{d}r, where J_\nu is the Bessel function of ...
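A minimal numerical sketch of the definition, using f(r) = e^{-r}, whose order-zero Hankel transform has the known closed form 1/(1 + k^2)^{3/2}; the truncation radius R is an illustrative choice, not part of the definition.

import numpy as np
from scipy.special import jv
from scipy.integrate import quad

# Sketch: order-0 Hankel transform F_0(k) = integral_0^inf f(r) J_0(kr) r dr,
# approximated by truncating the integral at a finite radius R.
def hankel0(f, k, R=50.0):
    val, _ = quad(lambda r: f(r) * jv(0, k * r) * r, 0, R, limit=500)
    return val

f = lambda r: np.exp(-r)
k = 1.5
print(hankel0(f, k))              # numerical estimate
print((1 + k**2) ** -1.5)         # known closed form, for comparison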

Neumann Polynomial
In mathematics, the Neumann polynomials, introduced by Carl Neumann for the special case \alpha=0, are a sequence of polynomials in 1/t used to expand functions in terms of Bessel functions. The first few polynomials are :O_0^{(\alpha)}(t)=\frac{1}{t}, :O_1^{(\alpha)}(t)=2\frac{\alpha+1}{t^2}, :O_2^{(\alpha)}(t)=\frac{2+\alpha}{t} + 4\frac{(\alpha+2)(\alpha+1)}{t^3}, :O_3^{(\alpha)}(t)=2\frac{(\alpha+3)(\alpha+1)}{t^2} + 8\frac{(\alpha+3)(\alpha+2)(\alpha+1)}{t^4}, :O_4^{(\alpha)}(t)=\frac{(\alpha+4)(\alpha+1)}{2t} + 4\frac{(\alpha+4)(\alpha+2)(\alpha+1)}{t^3} + 16\frac{(\alpha+4)(\alpha+3)(\alpha+2)(\alpha+1)}{t^5}. A general form for the polynomial is :O_n^{(\alpha)}(t)= \frac{\alpha+n}{2\,\Gamma(\alpha+1)} \sum_{k=0}^{\lfloor n/2\rfloor} \frac{\Gamma(\alpha+n-k)}{k!} \left(\frac{2}{t}\right)^{n-2k+1}, and they have the "generating function" :\frac{\left(\tfrac{z}{2}\right)^\alpha}{\Gamma(\alpha+1)}\,\frac{1}{t-z} = \sum_{n=0}^{\infty} O_n^{(\alpha)}(t)\, J_{\alpha+n}(z), where ''J'' are Bessel functions. To expand a function ''f'' in the form :f(z)=\left(\frac{2}{z}\right)^{\alpha} \sum_{n=0}^{\infty} a_n J_{\alpha+n}(z)\, for |z| < c, compute :a_n=\frac{\Gamma(\alpha+1)}{2\pi i} \oint_{|t|=c'} f(t)\, O_n^{(\alpha)}(t)\,dt, where c' < c and ''c'' is the distance of the nearest singularity of ''f''(''z'') from z=0.


Examples

An example is the extension :\left(\tfrac{z}{2}\right)^s= \Gamma(s)\cdot\sum_{k=0}^{\infty}(-1)^k\binom{-s}{k}(s+2k)\, J_{s+2k}(z), or the more gener ...
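A quick numerical check of the example expansion above (the truncation point K and the test values of s and z are arbitrary choices):

import numpy as np
from scipy.special import jv, gamma, binom

# Check (z/2)^s = Gamma(s) * sum_k (-1)^k C(-s, k) (s + 2k) J_{s+2k}(z)
# by truncating the series at K terms.
s, z, K = 1.5, 2.0, 30
series = gamma(s) * sum(
    (-1) ** k * binom(-s, k) * (s + 2 * k) * jv(s + 2 * k, z)
    for k in range(K)
)
print(series, (z / 2) ** s)   # the two values should agree closely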

Kapteyn Series
Kapteyn may refer to: * Jacobus Kapteyn - astronomer ** Parallactic instrument of Kapteyn - the instrument used by Kapteyn to analyze photographic plates ** Jacobus Kapteyn Telescope - telescope named after Jacobus Kapteyn ** Kapteyn's Star - star named after Jacobus Kapteyn ** Kapteyn (crater) - lunar crater named after Jacobus Kapteyn ** Kapteyn Astronomical Institute - Dutch astronomical institute named after Jacobus Kapteyn * Paul Joan George Kapteyn - Dutch judge ...

Generalized Fourier Series
A generalized Fourier series is the expansion of a square integrable function into a sum of square integrable orthogonal basis functions. The standard Fourier series uses an orthonormal basis of trigonometric functions, and the series expansion is applied to periodic functions. In contrast, a generalized Fourier series uses any set of orthogonal basis functions and can apply to any square integrable function. Definition Consider a set \Phi = \{\phi_n\}_{n=0}^{\infty} of square-integrable, complex-valued functions defined on the closed interval [a, b] that are pairwise orthogonal under the weighted inner product: \langle f, g \rangle_w = \int_a^b f(x) \overline{g(x)}\, w(x)\, dx, where w(x) is a weight function and \overline{g} is the complex conjugate of g. Then, the generalized Fourier series of a function f is: f(x) = \sum_{n=0}^{\infty} c_n\phi_n(x), where the coefficients are given by: c_n = \frac{\langle f, \phi_n \rangle_w}{\|\phi_n\|_w^2}. Sturm-Liouville Problems Given the space L^2(a,b) of square integrable functions defined on a given inte ...
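As a concrete sketch of this definition, the snippet below expands a hypothetical test function in the Legendre polynomial basis on [-1, 1], where the weight is w(x) = 1 and the squared norm takes the standard value 2/(2n + 1); the test function and truncation order are illustrative choices.

import numpy as np
from numpy.polynomial import legendre
from scipy.integrate import quad

# Sketch: generalized Fourier coefficients c_n = <f, P_n> / ||P_n||^2
# for the Legendre basis on [-1, 1] with weight w(x) = 1.
f = lambda x: np.exp(x)            # hypothetical test function
N = 6

coeffs = []
for n in range(N):
    Pn = legendre.Legendre.basis(n)
    num, _ = quad(lambda x: f(x) * Pn(x), -1, 1)
    coeffs.append(num / (2 / (2 * n + 1)))    # ||P_n||^2 = 2 / (2n + 1)

x0 = 0.4
approx = sum(c * legendre.Legendre.basis(n)(x0) for n, c in enumerate(coeffs))
print(approx, f(x0))   # the truncated series should be close to f(x0)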

Orthogonality
In mathematics, orthogonality is the generalization of the geometric notion of '' perpendicularity''. Although many authors use the two terms ''perpendicular'' and ''orthogonal'' interchangeably, the term ''perpendicular'' is more specifically used for lines and planes that intersect to form a right angle, whereas ''orthogonal'' is used in generalizations, such as ''orthogonal vectors'' or ''orthogonal curves''. ''Orthogonality'' is also used with various meanings that are often weakly related or not related at all with the mathematical meanings. Etymology The word comes from the Ancient Greek ''orthós'', meaning "upright", and ''gōnía'', meaning "angle". The Ancient Greek ''orthogṓnion'' and Classical Latin ''orthogonium'' originally denoted a rectangle. Later, they came to mean a right triangle. In the 12th century, the post-classical Latin word ''orthogonalis'' came to mean a right angle or something related to a right angle. In optics, polarization states are said to be ort ...

Robin Boundary Condition
In mathematics, the Robin boundary condition, or third type boundary condition, is a type of boundary condition, named after Victor Gustave Robin (1855–1897). When imposed on an ordinary or a partial differential equation, it is a specification of a linear combination of the values of a function ''and'' the values of its derivative on the boundary of the domain. Other equivalent names in use are Fourier-type condition and radiation condition. Definition Robin boundary conditions are a weighted combination of Dirichlet boundary conditions and Neumann boundary conditions. This contrasts with mixed boundary conditions, which are boundary conditions of different types specified on different subsets of the boundary. Robin boundary conditions are also called impedance boundary conditions, from their application in electromagnetic problems, or convective boundary conditions, from their application in heat transfer problems (Hahn, 2012). If Ω is the domain on which th ...
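To make the weighted combination concrete, here is a small finite-difference sketch under assumed data (none of it from the excerpt): the steady problem -u'' = 1 on [0, 1] with a Dirichlet condition u(0) = 0 and a Robin condition a*u(1) + b*u'(1) = g at the right end, the boundary derivative approximated by a one-sided difference.

import numpy as np

# Sketch: -u'' = 1 on [0, 1], u(0) = 0, Robin condition a_r*u(1) + b_r*u'(1) = g.
a_r, b_r, g = 2.0, 1.0, 0.0
n = 50                        # number of grid intervals
h = 1.0 / n
A = np.zeros((n + 1, n + 1))
rhs = np.ones(n + 1)          # right-hand side of -u'' = 1 at interior nodes

A[0, 0] = 1.0                 # Dirichlet row: u(0) = 0
rhs[0] = 0.0
for i in range(1, n):         # interior rows: -(u_{i-1} - 2 u_i + u_{i+1}) / h^2 = 1
    A[i, i - 1] = -1.0 / h**2
    A[i, i + 1] = -1.0 / h**2
    A[i, i] = 2.0 / h**2
# Robin row at x = 1, using u'(1) ~ (u_n - u_{n-1}) / h
A[n, n - 1] = -b_r / h
A[n, n] = a_r + b_r / h
rhs[n] = g

u = np.linalg.solve(A, rhs)
print(u[n // 2], u[-1])       # midpoint and boundary values of the discrete solution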

Newton-Raphson Method
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function ''f'', its derivative ''f′'', and an initial guess x_0 for a root of ''f''. If ''f'' satisfies certain assumptions and the initial guess is close, then x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} is a better approximation of the root than x_0. Geometrically, (x_1, 0) is the x-intercept of the tangent of the graph of ''f'' at (x_0, f(x_0)): that is, the improved guess, x_1, is the unique root of the linear approximation of ''f'' at the initial guess, x_0. The process is repeated as x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex f ...
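A minimal sketch of the iteration; the stopping rule and the example function x^2 - 2 are illustrative choices.

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: root of f(x) = x^2 - 2 starting from x0 = 1 (converges to sqrt(2))
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))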

Dirac Delta Function
In mathematical analysis, the Dirac delta function (or distribution), also known as the unit impulse, is a generalized function on the real numbers, whose value is zero everywhere except at zero, and whose integral over the entire real line is equal to one. Thus it can be represented heuristically as \delta(x) = \begin{cases} 0, & x \neq 0 \\ \infty, & x = 0 \end{cases} such that \int_{-\infty}^{\infty} \delta(x)\, dx = 1. Since there is no function having this property, modelling the delta "function" rigorously involves the use of limits or, as is common in mathematics, measure theory and the theory of distributions. The delta function was introduced by physicist Paul Dirac, and has since been applied routinely in physics and engineering to model point masses and instantaneous impulses. It is called the delta function because it is a continuous analogue of the Kronecker delta function, which is usually defined on a discrete domain and takes values ...
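To illustrate the limit-based modelling mentioned above, a common sketch replaces δ by a narrow normalized Gaussian and checks the sifting property ∫ δ(x - x0) f(x) dx ≈ f(x0) numerically; the Gaussian family, the test function, and the widths are illustrative choices.

import numpy as np
from scipy.integrate import quad

# Narrow normalized Gaussian as an approximation to the delta function.
def delta_eps(x, eps):
    return np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))

f = np.cos            # test function
x0 = 0.7
for eps in (0.5, 0.1, 0.01):
    # tell quad where the sharp peak sits so it is not missed
    val, _ = quad(lambda x: delta_eps(x - x0, eps) * f(x), -10, 10, points=[x0])
    print(eps, val)   # approaches f(x0) = cos(0.7) as eps shrinks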

Vector Projection
The vector projection (also known as the vector component or vector resolution) of a vector '''a''' on (or onto) a nonzero vector '''b''' is the orthogonal projection of '''a''' onto a straight line parallel to '''b'''. The projection of '''a''' onto '''b''' is often written as \operatorname{proj}_{\mathbf{b}} \mathbf{a} or \mathbf{a}_{\parallel\mathbf{b}}. The vector component or vector resolute of '''a''' perpendicular to '''b''', sometimes also called the vector rejection of '''a''' ''from'' '''b''' (denoted \operatorname{oproj}_{\mathbf{b}} \mathbf{a} or \mathbf{a}_{\perp\mathbf{b}}), is the orthogonal projection of '''a''' onto the plane (or, in general, hyperplane) that is orthogonal to '''b'''. Since both \operatorname{proj}_{\mathbf{b}} \mathbf{a} and \operatorname{oproj}_{\mathbf{b}} \mathbf{a} are vectors, and their sum is equal to '''a''', the rejection of '''a''' from '''b''' is given by: \operatorname{oproj}_{\mathbf{b}} \mathbf{a} = \mathbf{a} - \operatorname{proj}_{\mathbf{b}} \mathbf{a}. To simplify notation, this article defines \mathbf{a}_1 := \operatorname{proj}_{\mathbf{b}} \mathbf{a} and \mathbf{a}_2 := \operatorname{oproj}_{\mathbf{b}} \mathbf{a}. Thus, the vector \mathbf{a}_1 is parallel to \mathbf{b}, the vector \mathbf{a}_2 is orthogonal to \mathbf{b}, and \mathbf{a} = \mathbf{a}_1 + \mathbf{a}_2. T ...
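A small NumPy sketch of these definitions (the example vectors are arbitrary):

import numpy as np

def proj(a, b):
    """Orthogonal projection of a onto the line spanned by the nonzero vector b."""
    b = np.asarray(b, dtype=float)
    return (np.dot(a, b) / np.dot(b, b)) * b

a = np.array([3.0, 4.0, 0.0])
b = np.array([1.0, 1.0, 1.0])
a1 = proj(a, b)                 # component of a parallel to b
a2 = a - a1                     # rejection: component of a orthogonal to b
print(a1, a2, np.dot(a2, b))    # last value is ~0, confirming orthogonality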