Regularization (physics)

In physics, especially quantum field theory, regularization is a method of modifying observables which have singularities in order to make them finite by the introduction of a suitable parameter called the regulator. The regulator, also known as a "cutoff", models our lack of knowledge about physics at unobserved scales (e.g. scales of small size or large energy levels). It compensates for (and requires) the possibility that "new physics" may be discovered at those scales which the present theory is unable to model, while enabling the current theory to give accurate predictions as an "effective theory" within its intended scale of use. It is distinct from renormalization, another technique to control infinities without assuming new physics, by adjusting for self-interaction feedback. Regularization was for many decades controversial even amongst its inventors, as it combines physical and epistemological claims into the same equations. However, it is now well understood and has proven to yield useful, accurate predictions.


Overview

Regularization procedures deal with infinite, divergent, and nonsensical expressions by introducing an auxiliary concept of a regulator (for example, the minimal distance \epsilon in space, which is useful if the divergences arise from short-distance physical effects). The correct physical result is obtained in the limit in which the regulator goes away (in our example, \epsilon \to 0), but the virtue of the regulator is that for any finite value of it, the result is finite. However, the result usually includes terms proportional to expressions like 1/\epsilon, which are not well-defined in the limit \epsilon \to 0. Regularization is the first step towards obtaining a completely finite and meaningful result; in quantum field theory it must usually be followed by a related but independent technique called renormalization. Renormalization is based on the requirement that some physical quantities, expressed by seemingly divergent expressions such as 1/\epsilon, are equal to the observed values. Such a constraint allows one to calculate a finite value for many other quantities that looked divergent.

The existence of a limit as \epsilon goes to zero and the independence of the final result from the regulator are nontrivial facts. The underlying reason for them lies in universality, as shown by Kenneth Wilson and Leo Kadanoff, and in the existence of a second-order phase transition. Sometimes, taking the limit as \epsilon goes to zero is not possible. This is the case when we have a Landau pole and for nonrenormalizable couplings like the Fermi interaction. However, even for these two examples, if the regulator only gives reasonable results for \epsilon \gg 1/\Lambda and we are working with scales of the order of 1/\Lambda', regulators with 1/\Lambda \ll \epsilon \ll 1/\Lambda' still give fairly accurate approximations. The physical reason why we cannot take the limit of \epsilon going to zero is the existence of new physics below the scale \Lambda.

It is not always possible to define a regularization such that the limit of \epsilon going to zero is independent of the regularization. In this case, one says that the theory contains an anomaly. Anomalous theories have been studied in great detail and are often founded on the celebrated Atiyah–Singer index theorem or variations thereof (see, for example, the chiral anomaly).
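
As a minimal worked illustration of these statements (a toy example, not tied to any particular theory), regulate a short-distance divergence at x = 0 with the minimal distance \epsilon:

:I(\epsilon) = \int_\epsilon^1 \frac{dx}{x^2} = \frac{1}{\epsilon} - 1 .

For every finite \epsilon > 0 the result is finite, but it contains the characteristic 1/\epsilon term with no limit as \epsilon \to 0; renormalization would then fix the 1/\epsilon-dependent combinations to observed values before the regulator is removed.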


Classical physics example

The problem of infinities first arose in the classical electrodynamics of point particles in the 19th and early 20th century. The mass of a charged particle should include the mass–energy in its electrostatic field (electromagnetic mass). Assume that the particle is a charged spherical shell of radius r_e. The mass–energy in the field is

:m_\mathrm{em} = \int \frac{1}{2} E^2 \, dV = \int_{r_e}^\infty \frac{1}{2} \left( \frac{q}{4\pi r^2} \right)^2 4\pi r^2 \, dr = \frac{q^2}{8\pi r_e},

which becomes infinite as r_e \to 0. This implies that the point particle would have infinite inertia, making it unable to be accelerated. Incidentally, the value of r_e that makes m_\mathrm{em} equal to the electron mass is called the classical electron radius, which (setting q = e, restoring factors of c and \varepsilon_0, and ignoring a conventional numerical factor that depends on the assumed charge distribution) turns out to be

:r_e = \frac{e^2}{4\pi\varepsilon_0 m_\mathrm{e} c^2} = \alpha \frac{\hbar}{m_\mathrm{e} c} \approx 2.8 \times 10^{-15}\ \mathrm{m},

where \alpha \approx 1/137.036 is the fine-structure constant, and \hbar/m_\mathrm{e} c is the reduced Compton wavelength of the electron.

Regularization: This process shows that the physical theory originally used breaks down at small scales. It shows that the electron cannot in fact be a point particle, and that some kind of additional new physics (in this case, a finite radius) is needed to explain systems below a certain scale. This same argument will appear in other renormalization problems: a theory holds in some domain but can be seen to break down and require new physics at other scales in order to avoid infinities. (Another way to avoid the infinity while retaining the point nature of the particle would be to postulate a small additional dimension over which the particle could 'spread out' rather than over 3D space; this is a motivation for string theory.) (See also renormalization for an alternative way to remove infinities from this classical problem, assuming self-interactions rather than the existence of unknown new physics.)
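
As a quick numerical check (an illustrative sketch, not part of the original article; it assumes SciPy's bundled CODATA constants), the classical electron radius can be evaluated directly:

    # Sketch: evaluate r_e = e^2 / (4*pi*eps0 * m_e * c^2) from CODATA constants.
    from scipy.constants import c, e, epsilon_0, m_e, pi

    r_e = e**2 / (4 * pi * epsilon_0 * m_e * c**2)
    print(f"classical electron radius: {r_e:.4e} m")  # ~2.8179e-15 m, as quoted above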


Specific types

Specific types of regularization procedures include:
* Dimensional regularization
* Pauli–Villars regularization
* Lattice regularization
* Zeta function regularization
* Causal regularization
* Hadamard regularization
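
As an illustration of the first entry (a standard textbook formula, not taken from this article), dimensional regularization evaluates a Euclidean one-loop integral in d dimensions, with an arbitrary mass scale \mu inserted to keep dimensions consistent; the four-dimensional divergence reappears as a pole at d = 4:

:\mu^\epsilon \int \frac{d^d k}{(2\pi)^d} \, \frac{1}{(k^2 + m^2)^2} = \frac{\mu^\epsilon \, \Gamma(2 - d/2)}{(4\pi)^{d/2}} (m^2)^{d/2 - 2} = \frac{1}{(4\pi)^2} \left( \frac{2}{\epsilon} - \gamma_E + \ln \frac{4\pi \mu^2}{m^2} \right) + O(\epsilon), \qquad d = 4 - \epsilon .

Here the role of the regulator is played by \epsilon = 4 - d, and the 1/\epsilon pole is disposed of by renormalization, just as described in the overview above.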


Realistic regularization


Conceptual problem

Perturbative predictions by quantum field theory about quantum scattering of elementary particles, implied by a corresponding Lagrangian density, are computed using the Feynman rules, a regularization method to circumvent ultraviolet divergences so as to obtain finite results for Feynman diagrams containing loops, and a renormalization scheme. The regularization method results in regularized n-point Green's functions (propagators), and a suitable limiting procedure (a renormalization scheme) then leads to perturbative S-matrix elements. These are independent of the particular regularization method used, and enable one to model perturbatively the measurable physical processes (cross sections, probability amplitudes, decay widths, and lifetimes of excited states). However, so far no known regularized n-point Green's functions can be regarded as being based on a physically realistic theory of quantum scattering, since the derivation of each disregards some of the basic tenets of conventional physics (e.g., by not being Lorentz-invariant, by introducing either unphysical particles with a negative metric or wrong statistics, or discrete space-time, or lowering the dimensionality of space-time, or some combination thereof). So the available regularization methods are understood as formalistic technical devices, devoid of any direct physical meaning. In addition, there are qualms about renormalization. For a history and comments on this more than half-a-century-old open conceptual problem, see e.g.


Pauli's conjecture

As it seems that the vertices of non-regularized Feynman series adequately describe interactions in quantum scattering, it is taken that their ultraviolet divergences are due to the asymptotic, high-energy behavior of the Feynman propagators. So it is a prudent, conservative approach to retain the vertices in Feynman series, and modify only the Feynman propagators to create a regularized Feynman series. This is the reasoning behind the formal Pauli–Villars covariant regularization by modification of Feynman propagators through auxiliary unphysical particles, cf. the representation of physical reality by Feynman diagrams. In 1949, Pauli conjectured that there is a realistic regularization, implied by a theory that respects all the established principles of contemporary physics. Its propagators (i) do not need to be regularized, and (ii) can be regarded as such a regularization of the propagators used in quantum field theories that might reflect the underlying physics. The additional parameters of such a theory do not need to be removed (i.e. the theory needs no renormalization) and may provide some new information about the physics of quantum scattering, though they may turn out experimentally to be negligible. By contrast, any present regularization method introduces formal coefficients that must eventually be disposed of by renormalization.
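
A standard textbook form of this propagator modification (shown here as an illustration; the article itself does not display it): a scalar Feynman propagator is combined with an auxiliary heavy-mass term of opposite sign,

:\frac{1}{k^2 - m^2 + i\varepsilon} \;\to\; \frac{1}{k^2 - m^2 + i\varepsilon} - \frac{1}{k^2 - \Lambda^2 + i\varepsilon} = \frac{m^2 - \Lambda^2}{(k^2 - m^2 + i\varepsilon)(k^2 - \Lambda^2 + i\varepsilon)},

which falls off as 1/k^4 rather than 1/k^2 at large momenta, rendering typical loop integrals finite. The relative minus sign is what makes the auxiliary particle unphysical (negative metric), and the original propagator is recovered in the limit \Lambda \to \infty.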


Opinions

Paul Dirac was persistently, extremely critical about procedures of renormalization. In 1963, he wrote: "… in the renormalization theory we have a theory that has defied all the attempts of the mathematician to make it sound. I am inclined to suspect that the renormalization theory is something that will not survive in the future, …" He further observed that "One can distinguish between two main procedures for a theoretical physicist. One of them is to work from the experimental basis ... The other procedure is to work from the mathematical basis. One examines and criticizes the existing theory. One tries to pin-point the faults in it and then tries to remove them. The difficulty here is to remove the faults without destroying the very great successes of the existing theory." Abdus Salam remarked in 1972: "Field-theoretic infinities first encountered in Lorentz's computation of electron self-mass have persisted in classical electrodynamics for seventy and in quantum electrodynamics for some thirty-five years. These long years of frustration have left in the subject a curious affection for the infinities and a passionate belief that they are an inevitable part of nature; so much so that even the suggestion of a hope that they may after all be circumvented - and finite values for the renormalization constants computed - is considered irrational." However, in Gerard 't Hooft's opinion, "History tells us that if we hit upon some obstacle, even if it looks like a pure formality or just a technical complication, it should be carefully scrutinized. Nature might be telling us something, and we should find out what it is." The difficulty with a realistic regularization is that so far there is none, although nothing could be destroyed by its bottom-up approach; and there is no experimental basis for it.


Minimal realistic regularization

Considering distinct theoretical problems, Dirac in 1963 suggested: "I believe separate ideas will be needed to solve these distinct problems and that they will be solved one at a time through successive stages in the future evolution of physics. At this point I find myself in disagreement with most physicists. They are inclined to think one master idea will be discovered that will solve all these problems together. I think it is asking too much to hope that anyone will be able to solve all these problems together. One should separate them one from another as much as possible and try to tackle them separately. And I believe the future development of physics will consist of solving them one at a time, and that after any one of them has been solved there will still be a great mystery about how to attack further ones." According to Dirac, "Quantum electrodynamics is the domain of physics that we know most about, and presumably it will have to be put in order before we can hope to make any fundamental progress with other field theories, although these will continue to develop on the experimental basis." Dirac's two preceding remarks suggest that we should start searching for a realistic regularization in the case of quantum electrodynamics (QED) in the four-dimensional Minkowski spacetime, starting with the original QED Lagrangian density. The path-integral formulation provides the most direct way from the Lagrangian density to the corresponding Feynman series in its Lorentz-invariant form. The free-field part of the Lagrangian density determines the Feynman propagators, whereas the rest determines the vertices. As the QED vertices are considered to adequately describe interactions in QED scattering, it makes sense to modify only the free-field part of the Lagrangian density so as to obtain such regularized Feynman series that the Lehmann–Symanzik–Zimmermann reduction formula provides a perturbative S-matrix that: (i) is Lorentz-invariant and unitary; (ii) involves only the QED particles; (iii) depends solely on QED parameters and those introduced by the modification of the Feynman propagators—for particular values of these parameters it is equal to the QED perturbative S-matrix; and (iv) exhibits the same symmetries as the QED perturbative S-matrix. Let us refer to such a regularization as ''the minimal realistic regularization'', and start searching for the corresponding, modified free-field parts of the QED Lagrangian density.


Transport theoretic approach

According to Bjorken and Drell, it would make physical sense to sidestep ultraviolet divergences by using a more detailed description than can be provided by differential field equations. As Feynman noted about the use of differential equations: "... for neutron diffusion it is only an approximation that is good when the distance over which we are looking is large compared with the mean free path. If we looked more closely, we would see individual neutrons running around." And then he wondered, "Could it be that the real world consists of little X-ons which can be seen only at very tiny distances? And that in our measurements we are always observing on such a large scale that we can't see these little X-ons, and that is why we get the differential equations? ... Are they therefore also correct only as a smoothed-out imitation of a really much more complicated microscopic world?" Already in 1938, Heisenberg proposed that a quantum field theory can provide only an idealized, large-scale description of quantum dynamics, valid for distances larger than some ''fundamental length'', as also expected by Bjorken and Drell in 1965. Feynman's preceding remark provides a possible physical reason for its existence; either that, or it is just another way of saying the same thing (that there is a fundamental unit of distance) while carrying no new information.


Hints at new physics

The need for regularization terms in any quantum field theory of quantum gravity is a major motivation for physics beyond the standard model. Infinities of the non-gravitational forces in QFT can be controlled via renormalization alone, but additional regularization, and hence new physics, is required uniquely for gravity. The regularizers model, and work around, the breakdown of QFT at small scales, and thus show clearly the need for some other theory to come into play beyond QFT at these scales. A. Zee (''Quantum Field Theory in a Nutshell'', 2003) considers this to be a benefit of the regularization framework: theories can work well in their intended domains but also contain information about their own limitations and point clearly to where new physics is needed.

