Regularization (physics)
In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use. It is distinct from renormalization, another technique for controlling infinities without assuming new physics, which adjusts for self-interaction feedback. Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims in the same equations.
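As a minimal illustration of a regulator (a generic example, not a formula taken from the text above), a hard momentum cutoff renders a divergent integral finite:

```latex
% A schematically divergent momentum integral, as arises in loop diagrams:
%   I = \int_0^\infty k \, dk \quad \text{(diverges)}
% Introducing the regulator \Lambda (the "cutoff") gives a finite result:
I(\Lambda) = \int_0^\Lambda k \, dk = \frac{\Lambda^2}{2}.
% The divergence reappears only in the limit \Lambda \to \infty, where it
% must be absorbed by renormalization rather than by the regulator itself.
```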
Physics
Physics is the scientific study of matter, its fundamental constituents (elementary particles), its motion and behavior through space and time, and the related entities of energy and force. "Physical science is that department of knowledge which relates to the order of nature, or, in other words, to the regular succession of events." It is one of the most fundamental scientific disciplines. "Physics is one of the most fundamental of the sciences. Scientists of all disciplines use the ideas of physics, including chemists who study the structure of molecules, paleontologists who try to reconstruct how dinosaurs walked, and climatologists who study how human activities affect the atmosphere and oceans. Physics is also the foundation of all engineering and technology. No engineer could design a flat-screen TV, an interplanetary spacecraft, or even a better mousetrap without first understanding the basic laws of physics."
Electromagnetic Mass
Electromagnetic mass was initially a concept of classical mechanics, denoting how much the electromagnetic field, or the self-energy, contributes to the mass of charged particles. It was first derived by J. J. Thomson in 1881 and was for some time also considered a dynamical explanation of inertial mass ''per se''. Today, the relation of mass, momentum, velocity, and all forms of energy – including electromagnetic energy – is analyzed on the basis of Albert Einstein's special relativity and mass–energy equivalence. As to the cause of the mass of elementary particles, the Higgs mechanism in the framework of the relativistic Standard Model is currently used. However, some problems concerning the electromagnetic mass and self-energy of charged particles are still studied.
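A hedged sketch of the classical computation behind this idea (the spherical-shell model and SI-unit formulas below are standard textbook results, not quoted from the excerpt): for a shell of charge ''e'' and radius ''a'',

```latex
% Electrostatic self-energy of a charged spherical shell (SI units):
U_{\mathrm{em}} = \frac{e^2}{8\pi\varepsilon_0 a},
% while the momentum carried by the moving field yields an effective inertia
m_{\mathrm{em}} = \frac{4}{3}\,\frac{U_{\mathrm{em}}}{c^2}
                = \frac{e^2}{6\pi\varepsilon_0\, a\, c^2}.
% The notorious factor 4/3 (mass inferred from momentum vs. mass inferred
% from energy) is among the problems of electromagnetic self-energy that
% are still studied.
```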
Perturbative
In quantum mechanics, perturbation theory is a set of approximation schemes directly related to mathematical perturbation for describing a complicated quantum system in terms of a simpler one. The idea is to start with a simple system for which a mathematical solution is known, and add an additional "perturbing" Hamiltonian representing a weak disturbance to the system. If the disturbance is not too large, the various physical quantities associated with the perturbed system (e.g. its energy levels and eigenstates) can be expressed as "corrections" to those of the simple system. These corrections, being small compared to the size of the quantities themselves, can be calculated using approximate methods such as asymptotic series. The complicated system can therefore be studied based on knowledge of the simpler one. In effect, it describes a complicated unsolved system using a simple, solvable system.
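The scheme above can be sketched numerically for a two-level system (the matrices below are illustrative assumptions, not taken from the text): diagonalizable H0 plus a weak off-diagonal perturbation V, with the perturbative energy compared against the exact eigenvalue.

```python
import numpy as np

# Unperturbed Hamiltonian H0 (diagonal, exactly solvable) and a weak
# perturbation V -- illustrative 2x2 matrices.
H0 = np.diag([1.0, 2.0])
V = np.array([[0.0, 0.1],
              [0.1, 0.0]])

# First-order correction to the ground-state energy: E1 = <psi0|V|psi0>.
psi0 = np.array([1.0, 0.0])   # ground state of H0
E0 = psi0 @ H0 @ psi0         # unperturbed energy, = 1.0
E1 = psi0 @ V @ psi0          # = 0.0 here, since V is purely off-diagonal

# Second-order correction: sum over excited states of |<n|V|0>|^2 / (E0 - En).
E2 = (V[1, 0] ** 2) / (H0[0, 0] - H0[1, 1])   # = -0.01

exact = np.linalg.eigvalsh(H0 + V)[0]  # exact ground-state energy
print(E0 + E1 + E2, exact)
```

The perturbative estimate 0.99 agrees with the exact value to about one part in 10^4, as expected for a disturbance this weak.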
Hadamard Regularization
In mathematics, Hadamard regularization (also called Hadamard finite part or Hadamard's partie finie) is a method of regularizing divergent integrals by dropping some divergent terms and keeping the finite part, introduced by Jacques Hadamard. Marcel Riesz showed that this can be interpreted as taking the meromorphic continuation of a convergent integral. If the Cauchy principal value integral

: \mathcal{C}\int_a^b \frac{f(t)}{t-x} \, dt \quad (a < x < b)

exists, then the Hadamard finite part of the (divergent) integral

: \mathcal{H}\int_a^b \frac{f(t)}{(t-x)^2} \, dt \quad (a < x < b)

may be defined as its derivative with respect to ''x''.
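The meromorphic-continuation viewpoint can be checked on a simple example (chosen here for illustration, not from the text): the finite part of the divergent integral of t^(-3/2) over (0, 1) agrees with the analytic continuation of the convergent family of integrals of t^s.

```python
import sympy as sp

s, t, eps = sp.symbols('s t epsilon', positive=True)

# For Re(s) > -1 the integral converges to 1/(s+1); that closed form is
# meromorphic in s, so it can be continued to s = -3/2.
F = sp.integrate(t**s, (t, 0, 1))          # -> 1/(s + 1)
continued = F.subs(s, sp.Rational(-3, 2))  # -> -2

# Hadamard's prescription applied directly: integrate from epsilon, then
# drop the divergent term in the epsilon -> 0 expansion.
I_eps = sp.integrate(t**sp.Rational(-3, 2), (t, eps, 1))  # -2 + 2/sqrt(eps)
finite_part = I_eps - 2 / sp.sqrt(eps)

print(continued, sp.simplify(finite_part))  # both equal -2
```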
Causal Perturbation Theory
Causal perturbation theory is a mathematically rigorous approach to renormalization theory, which makes it possible to put the theoretical setup of perturbative quantum field theory on a sound mathematical basis. It goes back to a 1973 work by Henri Epstein and Vladimir Jurko Glaser. Overview When developing quantum electrodynamics in the 1940s, Shin'ichiro Tomonaga, Julian Schwinger, Richard Feynman, and Freeman Dyson discovered that, in perturbative calculations, problems with divergent integrals abounded. The divergences appeared in calculations involving Feynman diagrams with closed loops of virtual particles. It is an important observation that in perturbative quantum field theory, time-ordered products of distributions arise in a natural way and may lead to ultraviolet divergences in the corresponding calculations. From the generalized functions point of view, the problem of divergences is rooted in the fact that the theory of distributions is a purely linear theory.
Zeta Function Regularization
In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory. Definition There are several different summation methods called zeta function regularization for defining the sum of a possibly divergent series ''a''1 + ''a''2 + ⋯. One method is to define its zeta regularized sum to be ζ''A''(−1) if this is defined, where the zeta function is defined for large Re(''s'') by

: \zeta_A(s) = \frac{1}{a_1^s} + \frac{1}{a_2^s} + \cdots

if this sum converges, and by analytic continuation elsewhere. In the case when ''a''''n'' = ''n'', the zeta function is the ordinary Riemann zeta function.
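For the case ''a''''n'' = ''n'' mentioned above, the zeta regularized sum of 1 + 2 + 3 + ⋯ is ζ(−1), which a computer algebra system evaluates directly via the analytic continuation (the use of SymPy here is an illustrative choice):

```python
import sympy as sp

# Zeta-regularized sum of 1 + 2 + 3 + ... : with a_n = n, zeta_A(s) is the
# Riemann zeta function, and the regularized sum is zeta(-1), obtained by
# analytic continuation beyond the region Re(s) > 1 where the series converges.
s_reg = sp.zeta(-1)
print(s_reg)  # -1/12

# By design this disagrees with every partial sum, which grows without bound:
partial = sum(range(1, 101))  # 5050
```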
Lattice Regularization
In physics, lattice field theory is the study of lattice models of quantum field theory. This involves studying field theory on a space or spacetime that has been discretised onto a lattice. Details Although most lattice field theories are not exactly solvable, they are immensely appealing due to their feasibility for computer simulation, often using Markov chain Monte Carlo methods. One hopes that, by performing simulations on larger and larger lattices, while making the lattice spacing smaller and smaller, one will be able to recover the behavior of the continuum theory as the continuum limit is approached. Just as in all lattice models, numerical simulation provides access to field configurations that are not accessible to perturbation theory, such as solitons. Similarly, non-trivial vacuum states can be identified and examined. The method is particularly appealing for the quantization of a gauge theory using the Wilson action.
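The Markov chain Monte Carlo approach mentioned above can be sketched with a toy 1D Ising chain updated by the Metropolis rule (the model, couplings, and parameters here are illustrative assumptions, far simpler than a real lattice gauge simulation):

```python
import math
import random

# Minimal Metropolis Monte Carlo for a 1D Ising chain with periodic
# boundaries -- a toy stand-in for lattice field theory simulations.
random.seed(0)
N, beta, sweeps = 64, 0.5, 2000
spins = [1] * N

def local_field(i):
    """Sum of the two neighboring spins of site i (periodic lattice)."""
    return spins[i - 1] + spins[(i + 1) % N]

for _ in range(sweeps):
    for i in range(N):
        dE = 2 * spins[i] * local_field(i)  # energy cost of flipping spin i
        if dE <= 0 or random.random() < math.exp(-beta * dE):
            spins[i] = -spins[i]

magnetization = abs(sum(spins)) / N  # order-parameter estimate, in [0, 1]
print(f"|m| = {magnetization:.3f}")
```

Decreasing the lattice spacing while enlarging the lattice is what "approaching the continuum limit" amounts to in practice; here only the lattice size N is adjustable.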
Pauli–Villars Regularization
__NOTOC__ In theoretical physics, Pauli–Villars regularization (P–V) is a procedure that isolates divergent terms from finite parts in loop calculations in field theory in order to renormalize the theory. Wolfgang Pauli and Felix Villars published the method in 1949, based on earlier work by Richard Feynman, Ernst Stueckelberg and Dominique Rivier. In this treatment, a divergence arising from a loop integral (such as vacuum polarization or electron self-energy) is modulated by a spectrum of auxiliary particles added to the Lagrangian or propagator. When the masses of the fictitious particles are taken to infinity (i.e., once the regulator is removed) one expects to recover the original theory. This regulator is gauge invariant in an abelian theory due to the auxiliary particles being minimally coupled to the photon field through the gauge covariant derivative. It is not gauge covariant in a non-abelian theory, though.
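A minimal sketch of the mechanism (a generic scalar propagator, not a formula quoted from the text above): subtracting the propagator of one auxiliary particle of large mass ''M'' from that of the physical particle of mass ''m'' softens the large-momentum behavior:

```latex
\frac{1}{k^2 - m^2} \;\longrightarrow\;
\frac{1}{k^2 - m^2} - \frac{1}{k^2 - M^2}
= \frac{m^2 - M^2}{(k^2 - m^2)(k^2 - M^2)}
\;\sim\; \mathcal{O}\!\left(\frac{1}{k^4}\right),
% so loop integrals that diverged logarithmically in k become finite;
% the limit M -> infinity is taken only after renormalization.
```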
Dimensional Regularization
__NOTOC__ In theoretical physics, dimensional regularization is a method introduced by Carlos Guido Bollini and Juan José Giambiagi as well as – independently and more comprehensively – by Gerard 't Hooft and Martinus J. G. Veltman for regularizing integrals in the evaluation of Feynman diagrams; in other words, assigning values to them that are meromorphic functions of a complex parameter ''d'', the analytic continuation of the number of spacetime dimensions. Dimensional regularization writes a Feynman integral as an integral depending on the spacetime dimension ''d'' and the squared distances (''x''''i''−''x''''j'')² of the spacetime points ''x''''i'', ... appearing in it. In Euclidean space, the integral often converges for −Re(''d'') sufficiently large, and can be analytically continued from this region to a meromorphic function defined for all complex ''d''. In general, there will be a pole at the physical value (usually 4) of ''d'', which needs to be canceled by renormalization.
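For illustration, the standard Euclidean one-loop master formula (a textbook result, stated here as a sketch rather than quoted from the article) makes the meromorphic dependence on ''d'' explicit:

```latex
\int \frac{d^d k}{(2\pi)^d}\, \frac{1}{(k^2 + \Delta)^n}
= \frac{1}{(4\pi)^{d/2}}\,
  \frac{\Gamma\!\left(n - \tfrac{d}{2}\right)}{\Gamma(n)}\,
  \Delta^{\,d/2 - n}.
% For n = 2 the Gamma function has a pole at d = 4:
%   \Gamma(2 - d/2) = \frac{2}{4 - d} - \gamma_E + \dots,
% which is the pole at the physical value of d that renormalization must cancel.
```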
Multiple Time Dimensions
The possibility that there might be more than one dimension of time has occasionally been discussed in physics and philosophy. Similar ideas appear in folklore and fantasy literature. Physics Speculative theories with more than one time dimension have been explored in physics. The additional dimensions may be similar to conventional time, compactified like the additional spatial dimensions in string theory, or components of a complex time (sometimes referred to as kime). Itzhak Bars has proposed models of a two-time physics, noting in 2001 that "The 2T-physics approach in ''d'' + 2 dimensions offers a highly symmetric and unified version of the phenomena described by 1T-physics in ''d'' dimensions." F-theory, a branch of modern string theory, describes a 12-dimensional spacetime having two dimensions of time, giving it the metric signature (10,2). The existence of a well-posed initial value problem for the ultrahyperbolic equation (a wave equation in more than one time dimension) has also been investigated.
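The ultrahyperbolic equation referred to here, written for ''n'' space and ''m'' time coordinates (metric signature (''n'', ''m'')), takes the form:

```latex
\sum_{i=1}^{n} \frac{\partial^2 u}{\partial x_i^2}
= \sum_{j=1}^{m} \frac{\partial^2 u}{\partial t_j^2}.
% With m = 1 this reduces to the ordinary wave equation; m >= 2 gives the
% genuinely ultrahyperbolic case with more than one time direction.
```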
String Theory
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. String theory describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string acts like a particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force. Thus, string theory is a theory of quantum gravity. String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. String theory has contributed a number of advances to mathematical physics, which have been applied to a variety of problems in black hole physics, early universe cosmology, nuclear physics, and condensed matter physics.
Compton Wavelength
The Compton wavelength is a quantum mechanical property of a particle, defined as the wavelength of a photon whose energy is the same as the rest energy of that particle (see mass–energy equivalence). It was introduced by Arthur Compton in 1923 in his explanation of the scattering of photons by electrons (a process known as Compton scattering). The standard Compton wavelength of a particle of mass ''m'' is given by

: \lambda = \frac{h}{mc},

where ''h'' is the Planck constant and ''c'' is the speed of light. The corresponding frequency is given by f = \frac{mc^2}{h}, and the angular frequency is given by \omega = \frac{mc^2}{\hbar}. Other particles have different Compton wavelengths. Reduced Compton wavelength The reduced Compton wavelength \bar\lambda (barred lambda) of a particle is defined as its Compton wavelength divided by 2π:

: \bar\lambda = \frac{\lambda}{2\pi} = \frac{\hbar}{mc},

where \hbar is the reduced Planck constant.
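The definitions above can be evaluated numerically for the electron (the constants below are SI/CODATA values supplied here, not quoted from the text):

```python
import math

# SI defining constants and the CODATA electron mass.
h = 6.62607015e-34        # Planck constant, J s (exact in the SI)
c = 299792458.0           # speed of light, m/s (exact in the SI)
m_e = 9.1093837015e-31    # electron mass, kg (CODATA)

lam = h / (m_e * c)             # Compton wavelength: lambda = h / (m c)
lam_bar = lam / (2 * math.pi)   # reduced Compton wavelength: hbar / (m c)

print(f"lambda     = {lam:.6e} m")      # ~2.426e-12 m
print(f"lambda-bar = {lam_bar:.6e} m")  # ~3.862e-13 m
```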