In computational complexity theory, the space hierarchy theorems are separation results that show that both deterministic and nondeterministic machines can solve more problems in (asymptotically) more space, subject to certain conditions. For example, a deterministic Turing machine can solve more decision problems in space ''n'' log ''n'' than in space ''n''. The somewhat weaker analogous theorems for time are the time hierarchy theorems.
The foundation for the hierarchy theorems lies in the intuition that
with either more time or more space comes the ability to compute more
functions (or decide more languages). The hierarchy theorems are used
to demonstrate that the time and space complexity classes form a
hierarchy where classes with tighter bounds contain fewer languages
than those with more relaxed bounds. Here we define and prove the
space hierarchy theorem.
The space hierarchy theorems rely on the concept of space-constructible functions. The deterministic and nondeterministic space hierarchy theorems state that for all space-constructible functions ''f''(''n''),
:SPACE(''o''(''f''(''n''))) ⊊ SPACE(''f''(''n'')),
where SPACE stands for either DSPACE or NSPACE, and ''o'' refers to little o notation.
Statement
Formally, a function ''f'' : ℕ → ℕ is space-constructible if ''f''(''n'') ≥ log ''n'' and there exists a Turing machine which computes the function ''f''(''n'') in space ''O''(''f''(''n'')) when starting with an input 1^''n'', where 1^''n'' represents a string of ''n'' consecutive 1s. Most of the common functions that we work with are space-constructible, including polynomials, exponents, and logarithms.
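For instance, here is an illustrative (and not the only possible) sketch of why ''f''(''n'') = ''n''² is space-constructible: on input 1^''n'', a machine can count the input length in binary and then square that counter, so the space needed to produce ''f''(''n'') is
:<math>O(\log n) + O(\log n) \subseteq O(n^2) = O(f(n)).</math>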
The space hierarchy theorem states that for every space-constructible function ''f'' : ℕ → ℕ, there exists a language ''L'' that is decidable in space ''O''(''f''(''n'')) but not in space ''o''(''f''(''n'')).
Proof
The goal is to define a language ''L'' that can be decided in space ''O''(''f''(''n'')) but not in space ''o''(''f''(''n'')). The language ''L'' is defined as:
:''L'' = { (⟨''M''⟩, 10^''k'') : ''M'' uses at most ''f''(|⟨''M''⟩, 10^''k''|) space and at most 2^''f''(|⟨''M''⟩, 10^''k''|) steps, and does not accept (⟨''M''⟩, 10^''k'') }.
For any machine ''M'' that decides a language in space ''o''(''f''(''n'')), ''L'' will differ in at least one spot from the language of ''M''. Namely, for some large enough ''k'', ''M'' will use at most ''f''(|⟨''M''⟩, 10^''k''|) space on (⟨''M''⟩, 10^''k'') and will therefore differ at its value.
On the other hand, ''L'' is in SPACE(''f''(''n'')). The algorithm for deciding the language ''L'' is as follows:
# On an input ''x'', compute ''f''(|''x''|) using space-constructibility, and mark off ''f''(|''x''|) cells of tape. Whenever an attempt is made to use more than ''f''(|''x''|) cells, ''reject''.
# If ''x'' is not of the form (⟨''M''⟩, 10^''k'') for some TM ''M'', ''reject''.
# Simulate ''M'' on input ''x'' for at most 2^''f''(|''x''|) steps (using ''O''(''f''(|''x''|)) space). If the simulation tries to use more than ''f''(|''x''|) space or more than 2^''f''(|''x''|) steps, ''reject''.
# If ''M'' accepted ''x'' during this simulation, then ''reject''; otherwise, ''accept''.
Note on step 3: Execution is limited to 2^''f''(|''x''|) steps in order to avoid the case where ''M'' does not halt on the input ''x'', that is, the case where ''M'' uses only ''O''(''f''(|''x''|)) space as required but runs for infinite time.
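To make the shape of this diagonalization concrete, here is a minimal Python sketch of the decider. It assumes three hypothetical helpers that are not part of the article: `compute_f` (evaluates ''f''(|''x''|) via space-constructibility), `parse_input` (recognizes inputs of the form (⟨''M''⟩, 10^''k'') and returns ''M'', or `None`), and a `Simulation` object exposing single-step simulation of ''M'' on ''x''. The space accounting of a real Turing-machine construction is only mirrored approximately.

```python
# Hedged sketch of the diagonalizing decider for L; 'compute_f', 'parse_input' and
# 'Simulation' are hypothetical helpers standing in for the universal-simulation machinery.

def decide_L(x, compute_f, parse_input, Simulation):
    """Return True iff x belongs to the diagonal language L (sketch, not a TM construction)."""
    bound = compute_f(len(x))            # step 1: compute f(|x|) and fix the space budget
    M = parse_input(x)                   # step 2: is x of the form (<M>, 10^k)?
    if M is None:
        return False                     # reject malformed inputs
    sim = Simulation(M, x)               # simulate M on the input x itself
    for _ in range(2 ** bound):          # step 3: at most 2^f(|x|) simulated steps
        if sim.space_used() > bound:
            return False                 # M exceeds the space bound: reject
        if sim.halted():
            return not sim.accepted()    # step 4: flip M's answer (diagonalization)
        sim.step()
    return False                         # time budget exhausted (M loops): reject
```

On inputs of the form (⟨''M''⟩, 10^''k'') where ''M'' runs in ''o''(''f''(''n'')) space, the flipped answer in step 4 is what forces ''L'' to differ from the language of ''M''.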
The above proof holds for the case of PSPACE, but some changes need to be made for the case of NPSPACE. The crucial point is that while on a deterministic TM, acceptance and rejection can be inverted (crucial for step 4), this is not possible on a non-deterministic machine.
For the case of NPSPACE, ''L'' needs to be redefined first:
:''L'' = { (⟨''M''⟩, 10^''k'') : ''M'' uses at most ''f''(|⟨''M''⟩, 10^''k''|) space and accepts (⟨''M''⟩, 10^''k'') }.
Now, the algorithm needs to be changed to accept ''L'' by modifying step 4 to:
* If ''M'' accepted ''x'' during this simulation, then ''accept''; otherwise, ''reject''.
''L'' cannot be decided by a TM using ''o''(''f''(''n'')) cells. Assume that ''L'' can be decided by some TM ''M'' using ''o''(''f''(''n'')) cells. Then, by the Immerman–Szelepcsényi theorem, the complement of ''L'' can also be decided by a TM (call it ''N'') using ''o''(''f''(''n'')) cells. Here lies the contradiction, therefore the assumption must be false:
# If ''w'' = (⟨''N''⟩, 10^''k'') (for some large enough ''k'') is not in ''L'', then ''N'' accepts ''w'' (since ''N'' decides the complement of ''L''); but then, because ''N'' accepts ''w'' using at most ''f''(|''w''|) cells, ''w'' is in ''L'' by the definition of ''L'' (contradiction).
# If ''w'' = (⟨''N''⟩, 10^''k'') (for some large enough ''k'') is in ''L'', then by the definition of ''L'' the machine ''N'' accepts ''w''; since ''N'' decides the complement of ''L'', ''w'' is not in ''L'' (contradiction).
Comparison and improvements
The space hierarchy theorem is stronger than the analogous time hierarchy theorems in several ways:
* It only requires ''s''(''n'') to be at least log ''n'' instead of at least ''n''.
* It can separate classes with any asymptotic difference, whereas the time hierarchy theorem requires them to be separated by a logarithmic factor.
* It only requires the function to be space-constructible, not time-constructible.
It seems to be easier to separate classes in space than in time. Indeed, whereas the time hierarchy theorem has seen little remarkable improvement since its inception, the nondeterministic space hierarchy theorem has seen at least one important improvement by Viliam Geffert in his 2003 paper "Space hierarchy theorem revised". This paper made several generalizations of the theorem:
* It relaxes the space-constructibility requirement. Instead of merely separating the union classes SPACE(''O''(''s''(''n''))) and SPACE(''o''(''s''(''n''))), it separates SPACE(''f''(''n'')) from SPACE(''g''(''n'')), where ''f''(''n'') is an arbitrary ''O''(''s''(''n'')) function and ''g''(''n'') is a computable ''o''(''s''(''n'')) function. These functions need not be space-constructible or even monotone increasing.
* It identifies a
unary language, or tally language, which is in one class but not the other. In the original theorem, the separating language was arbitrary.
* It does not require ''s''(''n'') to be at least log ''n''; it can be any nondeterministically fully space-constructible function.
Refinement of space hierarchy
If space is measured as the number of cells used regardless of alphabet size, then SPACE(''f''(''n'')) = SPACE(''O''(''f''(''n''))), because one can achieve any linear compression by switching to a larger alphabet. However, by measuring space in bits, a much sharper separation is achievable for deterministic space. Instead of being defined up to a multiplicative constant, space is now defined up to an additive constant. However, because any constant amount of external space can be saved by storing the contents into the internal state, we still have SPACE(''f''(''n'')) = SPACE(''f''(''n'') + ''O''(1)).
Assume that f is space-constructible. SPACE is deterministic.
* For a wide variety of sequential computational models, including Turing machines, SPACE(''f''(''n'') − ''ω''(log(''f''(''n'') + ''n''))) ⊊ SPACE(''f''(''n'')). This holds even if SPACE(''f''(''n'') − ''ω''(log(''f''(''n'') + ''n''))) is defined using a different computational model than SPACE(''f''(''n'')), because the different models can simulate each other with ''O''(log(''f''(''n'') + ''n'')) space overhead.
* For certain computational models, we even have SPACE(''f''(''n'') − ''ω''(1)) ⊊ SPACE(''f''(''n'')). In particular, this holds for Turing machines if we fix the alphabet, the number of heads on the input tape, the number of heads on the worktape (using a single worktape), and add delimiters for the visited portion of the worktape (that can be checked without increasing space usage). SPACE(''f''(''n'')) does not depend on whether the worktape is infinite or semi-infinite. We can also have a fixed number of worktapes if ''f''(''n'') is either a SPACE(''f''(''n''))-constructible tuple giving the per-tape space usage, or a SPACE(''f''(''n'') − ''ω''(log(''f''(''n''))))-constructible number giving the total space usage (not counting the overhead for storing the length of each tape).
The proof is similar to the proof of the space hierarchy theorem, but with two complications: the universal Turing machine has to be space-efficient, and the reversal has to be space-efficient. One can generally construct universal Turing machines with ''O''(log(space)) space overhead, and under appropriate assumptions, with just ''O''(1) space overhead (which may depend on the machine being simulated). For the reversal, the key issue is how to detect if the simulated machine rejects by entering an infinite (space-constrained) loop. Simply counting the number of steps taken would increase space consumption by about ''f''(''n''). At the cost of a potentially exponential time increase, loops can be detected space-efficiently as follows:
Modify the machine to erase everything and go to a specific configuration ''A'' on success. Use depth-first search to determine whether ''A'' is reachable in the space bound from the starting configuration. The search starts at ''A'' and goes over configurations that lead to ''A''. Because of determinism, this can be done in place and without going into a loop.
It can also be determined whether the machine exceeds a space bound (as opposed to looping within the space bound) by iterating over all configurations about to exceed the space bound and checking (again using depth-first search) whether the initial configuration leads to any of them.
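As a rough illustration of this backward search (not the in-place, space-bounded Turing-machine construction itself), the sketch below treats the machine abstractly as a graph of configurations; `predecessors` and the configuration objects are hypothetical stand-ins introduced only for this example.

```python
# Hedged sketch: search backwards from the success configuration A over the configurations
# of a deterministic machine. Each configuration has exactly one successor and A is halting,
# so the configurations leading to A form a tree rooted at A; no 'visited' set is needed,
# because no configuration can be reached twice along reversed edges.

def reaches_A(A, start, predecessors):
    """Return True iff the configuration 'start' eventually reaches A.

    'predecessors(c)' is a hypothetical helper yielding the space-bounded configurations
    whose unique successor is c."""
    stack = [A]                 # the real construction avoids this stack and works in place
    while stack:
        config = stack.pop()
        if config == start:
            return True
        stack.extend(predecessors(config))
    return False
```

In the actual proof the traversal is carried out without an explicit stack: roughly, the parent of a configuration in this reversed tree is its unique successor, which can be recomputed by running one simulation step, so the walk stays within the space bound.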
Corollaries
Corollary 1
''For any two functions f₁, f₂ : ℕ → ℕ, where f₁(n) is o(f₂(n)) and f₂(n) is space-constructible, SPACE(f₁(n)) ⊊ SPACE(f₂(n)).''
This corollary lets us separate various space complexity classes. For any natural number ''k'', the function ''n''^''k'' is space-constructible. Therefore for any two natural numbers ''k''₁ < ''k''₂ we can prove SPACE(''n''^''k''₁) ⊊ SPACE(''n''^''k''₂). This idea can be extended to real numbers in the following corollary. This demonstrates the detailed hierarchy within the PSPACE class.
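For example, taking successive integer exponents yields a strictly increasing chain of deterministic space classes inside PSPACE (a worked instance of the corollary, written here in LaTeX-style notation):
:<math>\mathsf{SPACE}(n) \subsetneq \mathsf{SPACE}(n^2) \subsetneq \mathsf{SPACE}(n^3) \subsetneq \cdots \subseteq \mathsf{PSPACE}.</math>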
Corollary 2
''For any two nonnegative real numbers a₁ < a₂, SPACE(n^a₁) ⊊ SPACE(n^a₂).''
Corollary 3
:NL ⊊ PSPACE.
Proof
Savitch's theorem shows that NL ⊆ SPACE(log² ''n''), while the space hierarchy theorem shows that SPACE(log² ''n'') ⊊ SPACE(''n''). Thus we obtain this corollary, along with the fact that TQBF ∉ NL, since TQBF is PSPACE-complete.
This could also be proven using the non-deterministic space hierarchy theorem to show that NL ⊊ NPSPACE, and using Savitch's theorem to show that PSPACE = NPSPACE.
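The chain of inclusions behind the first argument can be written out explicitly (LaTeX-style notation, with the last inclusion following from the definition of PSPACE):
:<math>\mathsf{NL} \subseteq \mathsf{SPACE}(\log^2 n) \subsetneq \mathsf{SPACE}(n) \subseteq \mathsf{PSPACE}.</math>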
Corollary 4
:PSPACE ⊊ EXPSPACE.
This last corollary shows the existence of decidable problems that are intractable. In other words, their decision procedures must use more than polynomial space.
Corollary 5
There are problems in PSPACE requiring an arbitrarily large exponent to solve; therefore PSPACE does not collapse to DSPACE(''n''^''k'') for some constant ''k''.
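In other words (LaTeX-style notation), PSPACE is the union of the classes DSPACE(''n''^''k''), but by Corollary 1 it is not equal to any single one of them:
:<math>\mathsf{PSPACE} = \bigcup_{k \ge 1} \mathsf{DSPACE}(n^k), \qquad \mathsf{DSPACE}(n^k) \subsetneq \mathsf{DSPACE}(n^{k+1}) \subseteq \mathsf{PSPACE} \text{ for every } k.</math>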
See also
* Time hierarchy theorem
References
* Luca Trevisan. Notes on Hierarchy Theorems. Handout 7. CS172: Automata, Computability and Complexity. U.C. Berkeley. April 26, 2004.
* Viliam Geffert. Space hierarchy theorem revised. ''Theoretical Computer Science'', volume 295, number 1–3, pp. 171–187. February 24, 2003.
* Michael Sipser (1997). ''Introduction to the Theory of Computation''. PWS Publishing. Pages 306–310 of section 9.1: Hierarchy theorems.
* Christos Papadimitriou (1993). ''Computational Complexity'' (1st ed.). Addison Wesley. ISBN 0-201-53082-1. Section 7.2: The Hierarchy Theorem, pp. 143–146.