Limits of computation
The limits of computation are governed by a number of different factors. In particular, there are several physical and practical limits to the amount of computation or data storage that can be performed with a given amount of mass, volume, or energy.
Hardware limits or physical limits
Processing and memory density
* The Bekenstein bound limits the amount of information that can be stored within a spherical volume to the entropy of a black hole with the same surface area (a numerical sketch follows this list).
* Thermodynamics limits the data storage of a system based on its energy, number of particles, and particle modes. In practice, it is a stronger bound than the Bekenstein bound.
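A minimal numerical sketch of the Bekenstein bound, assuming the standard form I ≤ 2πRE/(ħc ln 2) with E = mc²; the 1 kg, 1 m example is illustrative and not taken from this article:

    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J*s
    c = 2.99792458e8        # speed of light, m/s

    def bekenstein_bits(radius_m, mass_kg):
        """Upper bound on information (bits) storable in a sphere of given radius and mass."""
        energy = mass_kg * c**2  # rest mass-energy E = m c^2
        return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

    # Illustrative example (not from the source): 1 kg confined within a 1 m radius.
    print(f"{bekenstein_bits(1.0, 1.0):.3e} bits")  # ~2.577e43 bits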
Processing speed
* Bremermann's limit is the maximum computational speed of a self-contained system in the material universe, and is based on mass–energy versus quantum uncertainty constraints.
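The value of Bremermann's limit is commonly quoted as c²/h ≈ 1.36 × 10⁵⁰ bits per second per kilogram; a minimal sketch recovering that figure from the standard constants:

    h = 6.62607015e-34  # Planck constant, J*s
    c = 2.99792458e8    # speed of light, m/s

    # Mass-energy E = m c^2 divided by h gives the maximum rate at which
    # a kilogram of matter can process distinguishable bits.
    ops_per_s_per_kg = c**2 / h
    print(f"{ops_per_s_per_kg:.2e} bits/s per kg")  # ~1.36e50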
Communication delays
* The Margolus–Levitin theorem sets a bound on the maximum computational speed per unit of energy: 6 × 10³³ operations per second per joule. This bound, however, can be avoided if there is access to quantum memory. Computational algorithms can then be designed that require arbitrarily small amounts of energy/time per elementary computation step.
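A minimal sketch recovering the 6 × 10³³ figure, assuming the usual statement of the Margolus–Levitin bound, ν ≤ 2E/(πħ), evaluated per joule of energy:

    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J*s

    def max_ops_per_second(energy_j):
        """Margolus-Levitin bound on elementary operations per second for a given energy."""
        return 2 * energy_j / (math.pi * hbar)

    print(f"{max_ops_per_second(1.0):.2e} ops/s per joule")  # ~6.04e33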
Energy supply
* Landauer's principle defines a lower theoretical limit for energy consumption: ''kT'' ln 2 joules consumed per irreversible state change, where ''k'' is the Boltzmann constant and ''T'' is the operating temperature of the computer. Reversible computing is not subject to this lower bound. ''T'' cannot, even in theory, be made lower than 3 kelvins, the approximate temperature of the cosmic microwave background radiation, without spending more energy on cooling than is saved in computation. However, on a timescale of 10⁹–10¹⁰ years, the cosmic microwave background radiation will be decreasing exponentially, which has been argued to eventually enable 10³⁰ times as many computations per unit of energy. Important parts of this argument have been disputed.
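A minimal sketch of Landauer's limit E = kT ln 2, evaluated at room temperature and at the ~3 K cosmic microwave background floor discussed above:

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K

    def landauer_limit(temperature_k):
        """Minimum energy (J) dissipated per irreversible bit operation at temperature T."""
        return k_B * temperature_k * math.log(2)

    print(f"{landauer_limit(300.0):.3e} J/bit at 300 K")  # ~2.87e-21 J
    print(f"{landauer_limit(3.0):.3e} J/bit at 3 K")      # ~2.87e-23 J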
Building devices that approach physical limits
Several methods have been proposed for producing computing devices or data storage devices that approach physical and practical limits:
* A cold degenerate star could conceivably be used as a giant data storage device, by carefully perturbing it to various excited states, in the same manner as an atom or quantum well used for these purposes. Such a star would have to be artificially constructed, as no natural degenerate stars will cool to this temperature for an extremely long time. It is also possible that nucleons on the surface of neutron stars could form complex "molecules", which some have suggested might be used for computing purposes, creating a type of computronium based on femtotechnology, which would be faster and denser than computronium based on nanotechnology.
* It may be possible to use a black hole as a data storage or computing device, if a practical mechanism for extraction of contained information can be found. Such extraction may in principle be possible (Stephen Hawking's proposed resolution to the black hole information paradox). This would achieve storage density exactly equal to the Bekenstein bound. Seth Lloyd calculated the computational abilities of an "ultimate laptop" formed by compressing a kilogram of matter into a black hole of radius 1.485 × 10⁻²⁷ meters, concluding that it would only last about 10⁻¹⁹ seconds before evaporating due to Hawking radiation, but that during this brief time it could compute at a rate of about 5 × 10⁵⁰ operations per second, ultimately performing about 10³² operations on 10¹⁶ bits (~1 PB; see the arithmetic sketch after this list). Lloyd notes that "Interestingly, although this hypothetical computation is performed at ultra-high densities and speeds, the total number of bits available to be processed is not far from the number available to current computers operating in more familiar surroundings."
* In ''The Singularity Is Near'', Ray Kurzweil cites the calculations of Seth Lloyd that a universal-scale computer is capable of 10⁹⁰ operations per second. The mass of the universe can be estimated at 3 × 10⁵² kilograms. If all matter in the universe were turned into a black hole, it would have a lifetime of 2.8 × 10¹³⁹ seconds before evaporating due to Hawking radiation. During that lifetime such a universal-scale black hole computer would perform 2.8 × 10²²⁹ operations.
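A minimal sketch checking the order-of-magnitude arithmetic quoted in the last two items, assuming Lloyd's rate bound ν = 2E/(πħ) with E = mc² for the 1 kg "ultimate laptop":

    import math

    hbar = 1.054571817e-34  # reduced Planck constant, J*s
    c = 2.99792458e8        # speed of light, m/s

    energy = 1.0 * c**2                   # rest energy of 1 kg of matter, J
    rate = 2 * energy / (math.pi * hbar)  # ~5.4e50 ops/s, matching "about 5 x 10^50"
    lifetime = 1e-19                      # quoted evaporation time, s
    print(f"{rate:.1e} ops/s; {rate * lifetime:.1e} total ops")  # ~5e31, i.e. about 10^32

    # Universal-scale figure: 10^90 ops/s sustained over 2.8e139 s.
    print(f"{1e90 * 2.8e139:.1e} total ops")  # 2.8e229, as quoted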
Abstract limits in computer science
In the field of theoretical computer science, the computability and complexity of computational problems are often sought after. Computability theory describes the degree to which problems are computable, whereas complexity theory describes the asymptotic degree of resource consumption. Computational problems are therefore confined to complexity classes. The arithmetical hierarchy and polynomial hierarchy classify the degree to which problems are respectively computable and computable in polynomial time. For instance, the level Σ⁰₁ of the arithmetical hierarchy classifies partial computable functions. Moreover, this hierarchy is strict: any other class in the arithmetical hierarchy classifies strictly uncomputable functions.
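To make the notion of strict uncomputability concrete, here is a minimal sketch of the classic diagonalization argument against a halting decider; "halts" is a hypothetical function, not a real API, and the point is only the contradiction it produces:

    def paradox(halts):
        """Given a purported total halting decider, build a self-defeating program."""
        def d():
            if halts(d):     # if the decider claims d halts...
                while True:  # ...then d loops forever;
                    pass
            # ...and if it claims d loops, d returns immediately.
        return d

    # Whichever answer a total "halts" gives about paradox(halts), the program
    # does the opposite, so no computable, always-correct decider can exist.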
Loose and tight limits
Many limits derived in terms of physical constants and abstract models of computation in computer science are loose.
Very few known limits directly obstruct leading-edge technologies, but many engineering obstacles currently cannot be explained by closed-form limits.
See also
* Transcomputational problem
* Programmable matter
* Hypercomputation
* Supertask
* Digital physics
* Quantum computing
* Physics of computation
* Matrioshka brain
* Bremermann's limit
External links
* Sandberg, Anders (December 22, 1999). "The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains". ''Journal of Evolution and Technology''. 5 (1): 1–34. https://www.jetpress.org/volume5/Brains2.pdf