Parallel Computation Thesis
In
computational complexity theory
, the parallel computation thesis is a
hypothesis
which states that the ''time'' used by a (reasonable) parallel machine is polynomially related to the ''space'' used by a sequential machine. The parallel computation thesis was set forth by
Chandra
and Stockmeyer in 1976. In other words, for a
computational model
which allows computations to branch and run in parallel without bound, a
formal language
which is decidable under the model using no more than t(n) steps for inputs of length ''n'' is decidable by a non-branching machine using no more than t(n)^k units of storage for some constant ''k''. Similarly, if a machine in the unbranching model decides a language using no more than s(n) storage, a machine in the parallel model can decide the language in no more than s(n)^k steps for some constant ''k''.

The parallel computation thesis is not a rigorous formal statement, as it does not clearly define what constitutes an acceptable parallel model. A parallel machine must be sufficiently powerful to emulate the sequential machine in time polynomially related to the sequential space; compare
Turing machine
,
non-deterministic Turing machine
, and
alternating Turing machine
. N. Blum (1983) introduced a model for which the thesis does not hold. However, the model allows 2^{2^{O(T(n))}} parallel threads of computation after T(n) steps. (See
Big O notation
.) Parberry (1986) suggested that a more "reasonable" bound would be 2^{O(T(n))} or 2^{T(n)^{O(1)}}, in defense of the thesis. Goldschlager (1982) proposed a model which is sufficiently universal to emulate all "reasonable" parallel models; in this model, the thesis is provably true. Chandra and Stockmeyer originally formalized and proved results related to the thesis for deterministic and alternating Turing machines, which is where the thesis originated.


Definition

Given two models of computation, such as Turing machines and PRAMs, each has its own notion of computational resource usage. For Turing machines, the resources can be tape space, sequential time, the number of times the read/write head changes direction, and so on. For a PRAM, the resources can be parallel time, total number of processors, and so on.

Given a function T(n), saying that the use of one resource R in one model is polynomially related to the use of another resource R' in another model means the following: for any problem that can be solved by some computation in the first model consuming only T(n)^k of resource R for some k > 0, there exists a computation in the second model consuming only T(n)^{k'} of resource R' for some k' > 0, and ''vice versa''.

The parallel computation thesis states that, for any T(n) \ge \log n, the use of tape space by Turing machines is polynomially related to the use of parallel time by PRAMs whose total number of processors is at most exponential in the parallel time. The restriction to "at most exponential" is important, since with slightly more than exponentially many processors there is a collapse: any language in NP can be recognized in constant time by a shared-memory machine with O\left(2^{T(n)^2}\right) processors and word size O\left(T(n)^2\right). If the parallel computation thesis is true, then one implication is that "fast" parallel computers (i.e. those that run in polylogarithmic time) recognize exactly the languages in polyL.
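The polylogarithmic-time case can be illustrated with s–t reachability computed by repeated squaring of the boolean adjacency matrix. The sketch below is illustrative and simulated sequentially in Python (the function name and representation are our own, not from the literature): on a PRAM, each squaring round has O(\log n) depth and \lceil \log_2 n \rceil rounds suffice, giving polylogarithmic parallel time, while the same doubling idea underlies Savitch-style O(\log^2 n)-space sequential algorithms.

```python
def reachable(adj):
    """Reflexive-transitive closure of a boolean adjacency matrix.

    Sequential simulation of a doubling scheme: after r squaring
    rounds, r[i][j] is True iff j is reachable from i by a path of
    at most 2^r edges.  On a PRAM each round is a parallel matrix
    square; ceil(log2 n) rounds cover all paths.
    """
    n = len(adj)
    # Start from the reflexive closure so squaring also keeps shorter paths.
    r = [[adj[i][j] or i == j for j in range(n)] for i in range(n)]
    rounds = 0
    while (1 << rounds) < n:          # ceil(log2 n) squaring rounds
        rounds += 1
        r = [[any(r[i][k] and r[k][j] for k in range(n))
              for j in range(n)] for i in range(n)]
    return r
```

For a directed path 0 → 1 → 2 → 3, `reachable` reports that vertex 3 is reachable from vertex 0 but not conversely.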


Evidence

It was proven in 1978 that for any T(n) \ge \log n, and with the restriction that the number of processors of the PRAM is at most exponential in the parallel running time,

\bigcup_{k=1}^{\infty} T(n)^k\text{-time PRAM} = \bigcup_{k=1}^{\infty} T(n)^k\text{-space TM}.

In particular, \bigcup_{k} \log^k n\text{-time PRAM} = \bigcup_{k} \log^k n\text{-space TM}, and polynomial-time PRAM = PSPACE.

Note that the exponential number of processors is likely required: if only a polynomial number of processors sufficed for some PSPACE-complete problem, it would follow that PSPACE = P, a major unresolved conjecture that is expected to be false.

For nondeterministic versions, \bigcup_{c > 0} cT(n)\text{-time NPRAM} = \bigcup_{c > 0} 2^{cT(n)}\text{-time NTM}. In particular, nondeterministic O(\log n)-time PRAM = NP, and nondeterministic polynomial-time PRAM = nondeterministic exponential time.
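One direction of the equivalence, that PSPACE problems can be solved in polynomial parallel time given exponentially many processors, can be sketched on QBF, the canonical PSPACE-complete problem. The encoding below (a quantifier list plus a Python predicate for the matrix) is an illustrative assumption, not a standard interface: the recursion tree has up to 2^n branches, which a PRAM would explore in parallel, so the parallel time is linear in the number of variables, while a sequential evaluation needs only O(n) stack space.

```python
def eval_qbf(quantifiers, matrix, env=()):
    """Evaluate a quantified boolean formula.

    quantifiers: list of ('A', var) or ('E', var) pairs, outermost first.
    matrix: callable taking a dict {var: bool} for the quantifier-free part.

    Each recursive call spawns two branches (var = False, var = True);
    on a PRAM these branches run in parallel, so depth = #variables.
    """
    if not quantifiers:
        return matrix(dict(env))
    q, var = quantifiers[0]
    branches = (eval_qbf(quantifiers[1:], matrix, env + ((var, b),))
                for b in (False, True))
    return all(branches) if q == 'A' else any(branches)
```

For example, the formula "for all x there exists y with x ≠ y" evaluates to true, while "there exists x such that for all y, x ≠ y" evaluates to false.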


Other theses


Extended parallel computation thesis

The extended parallel computation thesis states that both of the following hold:
* Turing machine resources (head reversals, tape space) and PRAM resources (parallel time, processor count) are simultaneously polynomially related.
* PRAM parallel time and PRAM processor count are polynomially related.

One implication would be that "small and fast" parallel computers (i.e. those that run in both polylogarithmic time and with polynomially many processors) recognize exactly the languages in NC.
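A standard member of NC illustrates "small and fast": prefix sums by the classic doubling scheme. In the sketch below (sequentially simulated; the naming is our own), each of the \lceil \log_2 n \rceil rounds could be executed by n processors in constant time, so the depth is O(\log n) with polynomially many processors.

```python
def prefix_sums(xs):
    """Inclusive prefix sums via doubling.

    Round r adds, to each position i, the value that was at i - 2^r.
    Each round is a single data-parallel step (one processor per
    element on a PRAM); ceil(log2 n) rounds suffice.
    """
    out = list(xs)
    shift = 1
    while shift < len(out):
        out = [out[i] + (out[i - shift] if i >= shift else 0)
               for i in range(len(out))]
        shift *= 2
    return out
```

For instance, `prefix_sums([1, 2, 3, 4])` yields `[1, 3, 6, 10]` after two rounds.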


Sequential computation thesis

Related to this is the sequential computation thesis. It states that given any two reasonable definitions, A and B, of what it means to have a "sequential computer", their execution times are polynomially related. Concretely, it means that for each sequential computer C_A according to definition A, there is a sequential computer C_B according to definition B that solves the same problems, such that the execution time of C_B on any problem is upper bounded by a polynomial of the execution time of C_A on the same problem. It is stronger than the
Church–Turing thesis
, since it claims not only that the computable problems are the same for all computers, but also that the feasibly computable problems are the same for all computers.


References


Further reading

* Balcázar, José Luis; Díaz, Josep; Gabarró, Joaquim (1990). "The Parallel Computation Thesis". In ''Structural Complexity II'', pp. 33–62. Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-75357-2_3. ISBN 978-3-642-75357-2. https://link.springer.com/chapter/10.1007/978-3-642-75357-2_3