Sparse Language
In computational complexity theory, a sparse language is a formal language (a set of strings) such that the complexity function, counting the number of strings of length n in the language, is bounded by a polynomial function of n. They are used primarily in the study of the relationship of the complexity class NP with other classes. The complexity class of all sparse languages is called SPARSE. Sparse languages are called "sparse" because there are a total of 2^n strings of length n, and if a language only contains polynomially many of these, then the proportion of strings of length n that it contains rapidly goes to zero as n grows. All unary languages are sparse. An example of a nontrivial sparse language is the set of binary strings containing exactly k 1 bits for some fixed k; for each n, there are only \binom{n}{k} strings in the language, which is bounded by n^k.

Relationships to other complexity classes
* SPARSE contains TALLY, the ...
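As a quick check of the bound just mentioned, the following minimal Python sketch (the choice of k and the range of n are arbitrary, picked only for illustration) counts the strings of each length in the language "exactly k one bits" and compares the count with the polynomial bound n^k:

from itertools import product
from math import comb

def count_members(n: int, k: int) -> int:
    """Count the binary strings of length n with exactly k one bits."""
    return sum(1 for bits in product("01", repeat=n) if bits.count("1") == k)

k = 3
for n in range(k, 12):
    members = count_members(n, k)
    assert members == comb(n, k)                  # exactly n choose k strings of length n
    print(n, members, n ** k, members <= n ** k)  # always within the polynomial bound n^k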



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of logic gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). O ...


Mahaney's Theorem
Mahaney's theorem is a theorem in computational complexity theory proven by Stephen Mahaney that states that if any sparse language is NP-complete, then P = NP. Also, if any sparse language is NP-complete with respect to Turing reductions, then the polynomial-time hierarchy collapses to \Delta^P_2. Mahaney's argument does not actually require the sparse language to be in NP, so there is a sparse NP-hard set if and only if P = NP. This is because the existence of an NP-hard sparse set implies the existence of an NP-complete sparse set.
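In symbols, using SPARSE for the class of sparse languages and PH for the polynomial-time hierarchy, the two statements above read:

(\exists S \in \mathrm{SPARSE}:\ S \text{ is NP-complete}) \Rightarrow \mathrm{P} = \mathrm{NP}
(\exists S \in \mathrm{SPARSE}:\ S \text{ is NP-complete under Turing reductions}) \Rightarrow \mathrm{PH} = \Delta^P_2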


William Gasarch
William Ian Gasarch (born 1959) is an American computer scientist known for his work in computational complexity theory, computability theory, computational learning theory, and Ramsey theory. He is currently a professor at the University of Maryland Department of Computer Science with an affiliate appointment in Mathematics. Gasarch is a frequent mentor of high school student research projects; one of these, with Jacob Lurie, won the 1996 Westinghouse Science Talent Search for Lurie. He has co-blogged on computational complexity with Lance Fortnow since 2007. He was book review editor for ACM SIGACT NEWS from 1997 to 2015.

Education
Gasarch received his doctorate in computer science from Harvard in 1985, advised by Harry R. Lewis. His thesis was titled "Recursion-Theoretic Techniques in Complexity Theory and Combinatorics". He was hired into a tenure-track professorial position at the University of Maryland in the fall of 1985. He was promoted to associate professor with tenu ...


Lance Fortnow
Lance Jeremy Fortnow (born August 15, 1963) is a computer scientist known for major results in computational complexity and interactive proof systems. Since 2019, he has been at the Illinois Institute of Technology, where he is currently the Dean of the College of Computing.

Biography
Lance Fortnow received a doctorate in applied mathematics from MIT in 1989, supervised by Michael Sipser. Since graduation, he has been on the faculty of the University of Chicago (1989–1999, 2003–2007), Northwestern University (2008–2012) and the Georgia Institute of Technology (2012–2019), where he served as chair of the School of Computer Science. From 1999 to 2003 he was a Senior Research Scientist at the NEC Research Institute. Fortnow was the founding editor-in-chief of the journal "ACM Transactions on Computation Theory" in 2009. He was the chair of ACM SIGACT and was succeeded by Paul Beame. He was the chair of the ...



P (complexity)
In computational complexity theory, P, also known as PTIME or DTIME(n^{O(1)}), is a fundamental complexity class. It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time, or polynomial time. Cobham's thesis holds that P is the class of computational problems that are "efficiently solvable" or "tractable". This is inexact: in practice, some problems not known to be in P have practical solutions, and some that are in P do not, but this is a useful rule of thumb.

Definition
A language L is in P if and only if there exists a deterministic Turing machine M, such that
* M runs for polynomial time on all inputs
* For all x in L, M outputs 1
* For all x not in L, M outputs 0
P can also be viewed as a uniform family of Boolean circuits. A language L is in P if and only if there exists a polynomial-time uniform family of Boolean circuits \{C_n : n \in \mathbb{N}\}, such that
* For all n \in ...
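As a concrete illustration of the machine-based definition, the following minimal Python sketch plays the role of M for one simple example language (binary strings whose value is divisible by 3, chosen here purely for illustration): it halts on every input in time linear, and hence polynomial, in the input length, outputs 1 on members and 0 on non-members.

def M(x: str) -> int:
    """Decide L = { binary strings whose value is divisible by 3 } in one pass."""
    r = 0
    for c in x:
        if c not in "01":
            return 0                    # reject anything that is not a binary string
        r = (2 * r + int(c)) % 3        # value of the prefix read so far, modulo 3
    return 1 if r == 0 else 0           # 1 = accept (x in L), 0 = reject

# Each input symbol is read once, so M runs in time O(n); hence L is in P.
print(M("110"), M("111"))               # 110 is 6 (accept), 111 is 7 (reject)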



L (complexity)
In computational complexity theory, L (also known as LSPACE, LOGSPACE or DLOGSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be written as well as read. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of Boolean flags, and many basic logspace algorithms use the memory in this way.

Complete problems and logical characterization
Every non-trivial problem in L is complete under log-space reductions, so weaker reductions are required to identify meaningful notions of L-completeness, the most common being first-order reductions. A 2004 result by Omer Reingold shows that USTCON, the problem of whether there exists a path between two vertices in a given undirected ...
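The "constant number of pointers" picture can be made concrete with a minimal sketch (the palindrome language is chosen here purely as a familiar example of a problem in L): the only writable state is two indices into the read-only input, each of which takes O(log n) bits.

def is_palindrome(x: str) -> bool:
    """Decide the palindrome language while keeping only two pointers into the
    read-only input. The writable memory is the indices i and j, O(log n) bits
    each, mirroring how a logspace machine uses its work tape."""
    i, j = 0, len(x) - 1
    while i < j:
        if x[i] != x[j]:
            return False
        i += 1
        j -= 1
    return True

print(is_palindrome("abcba"), is_palindrome("abcd"))   # True False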




P-complete
In computational complexity theory, a decision problem is P-complete (complete for the complexity class P) if it is in P and every problem in P can be reduced to it by an appropriate reduction. The notion of P-complete decision problems is useful in the analysis of:
* which problems are difficult to parallelize effectively,
* which problems are difficult to solve in limited space,
specifically when stronger notions of reducibility than polytime-reducibility are considered. The specific type of reduction used varies and may affect the exact set of problems. Generically, reductions stronger than polynomial-time reductions are used, since all languages in P (except the empty language and the language of all strings) are P-complete under polynomial-time reductions. If we use NC reductions, that is, reductions which can operate in polylogarithmic time on a parallel computer with a polynomial number of processors, then all P-complete problems lie outside NC and so cannot be effec ...
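The remark that every language in P (other than the empty language and the language of all strings) is P-complete under polynomial-time reductions can be made concrete with a minimal sketch (the decider and the placeholder instances below are hypothetical, chosen only for illustration): because the reduction itself may run in polynomial time, it can simply decide A outright and output a fixed member or non-member of B, which is why weaker reductions such as log-space or NC reductions are used instead.

def reduce_A_to_B(x, decide_A, yes_instance_of_B, no_instance_of_B):
    """A polynomial-time many-one reduction from A to B, valid for any A in P and
    any B with at least one member and one non-member: decide A directly, then
    output a fixed instance of B with the same answer."""
    return yes_instance_of_B if decide_A(x) else no_instance_of_B

# Hypothetical toy instantiation: A = strings of even length (clearly in P).
decide_A = lambda x: len(x) % 2 == 0
print(reduce_A_to_B("ab", decide_A, "SOME-YES-INSTANCE", "SOME-NO-INSTANCE"))
print(reduce_A_to_B("abc", decide_A, "SOME-YES-INSTANCE", "SOME-NO-INSTANCE"))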


Karp Reduction
In computational complexity theory, a polynomial-time reduction is a method for solving one problem using another. One shows that if a hypothetical subroutine solving the second problem exists, then the first problem can be solved by transforming or reducing it to inputs for the second problem and calling the subroutine one or more times. If both the time required to transform the first problem to the second and the number of times the subroutine is called are polynomial, then the first problem is polynomial-time reducible to the second. A polynomial-time reduction proves that the first problem is no more difficult than the second one, because whenever an efficient algorithm exists for the second problem, one exists for the first problem as well. By contraposition, if no efficient algorithm exists for the first problem, none exists for the second either. Polynomial-time reductions are frequently used in complexity theory for defining both complexity classes and complete problems ...
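The pattern described above can be sketched as follows (the transform and the two toy problems are hypothetical, chosen only to make the skeleton runnable): solve the first problem by transforming its input into an input for the second problem and calling the subroutine for the second problem, here exactly once.

def solve_first_problem(x, transform, solve_second_problem):
    """Polynomial-time reduction skeleton: transform the input, then call the
    (hypothetical) subroutine for the second problem. If the transformation runs
    in polynomial time and the subroutine is called polynomially many times
    (here once), the first problem is polynomial-time reducible to the second."""
    return solve_second_problem(transform(x))

# Hypothetical toy problems:
#   first problem:  does the string contain an even number of 'a' characters?
#   second problem: is the given integer even?
transform = lambda s: s.count("a")
solve_second_problem = lambda m: m % 2 == 0
print(solve_first_problem("banana", transform, solve_second_problem))   # False (three 'a's)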


Turing Reduction
In computability theory, a Turing reduction from a decision problem A to a decision problem B is an oracle machine that decides problem A given an oracle for B (Rogers 1967, Soare 1987) in finitely many steps. It can be understood as an algorithm that could be used to solve A if it had access to a subroutine for solving B. The concept can be analogously applied to function problems. If a Turing reduction from A to B exists, then every algorithm for B can be used to produce an algorithm for A, by inserting the algorithm for B at each place where the oracle machine computing A queries the oracle for B. However, because the oracle machine may query the oracle a large number of times, the resulting algorithm may require more time asymptotically than either the algorithm for B or the oracle machine computing A. A Turing reduction in which the oracle machine runs in polynomial time is known as a Cook reduction. The first formal definition of relative computability, then called relative ...
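As an illustration of an oracle machine that queries its oracle many times, the following minimal sketch (the CNF encoding and the brute-force stand-in for the oracle are assumptions made here for the example) solves the function problem of finding a satisfying assignment of a CNF formula given only a yes/no oracle for satisfiability, issuing roughly one oracle query per variable.

from itertools import product

def assign(cnf, var, value):
    """Plug var=value into a CNF formula given as a list of clauses of signed
    integers, e.g. [[1, -2], [2, 3]] means (x1 or not x2) and (x2 or x3)."""
    lit = var if value else -var
    new = []
    for clause in cnf:
        if lit in clause:
            continue                          # clause satisfied: drop it
        new.append([l for l in clause if l != -lit])
    return new

def sat_oracle(cnf, n_vars):
    """Stand-in for the oracle for B: answers only whether cnf is satisfiable
    (here by brute force; a real oracle is simply assumed to exist)."""
    for bits in product([False, True], repeat=n_vars):
        f = cnf
        for v, b in enumerate(bits, start=1):
            f = assign(f, v, b)
        if not f:                             # every clause was satisfied
            return True
    return False

def find_assignment(cnf, n_vars):
    """The Turing reduction: compute a satisfying assignment (a function problem)
    using the decision oracle, one query per variable plus one initial query."""
    if not sat_oracle(cnf, n_vars):
        return None
    assignment = {}
    for v in range(1, n_vars + 1):
        trial = assign(cnf, v, True)
        if sat_oracle(trial, n_vars):         # oracle query for this partial choice
            assignment[v], cnf = True, trial
        else:
            assignment[v], cnf = False, assign(cnf, v, False)
    return assignment

print(find_assignment([[1, -2], [2, 3], [-1, -3]], 3))   # prints one satisfying assignment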


NE (complexity)
In computational complexity theory, the complexity class NE is the set of decision problems that can be solved by a non-deterministic Turing machine in time O(k^n) for some k. NE, unlike the similar class NEXPTIME, is not closed under polynomial-time many-one reductions.

Relationship to other classes
NE is contained by NEXPTIME.

See also
* E (complexity)
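In standard notation, this definition and the containment above can be written as:

\mathrm{NE} = \bigcup_{k \ge 1} \mathrm{NTIME}(k^n) = \mathrm{NTIME}(2^{O(n)}) \subseteq \mathrm{NEXPTIME} = \bigcup_{k \ge 1} \mathrm{NTIME}(2^{n^k})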




E (complexity)
In computational complexity theory, the complexity class E is the set of decision problems that can be solved by a deterministic Turing machine in time 2^{O(n)} and is therefore equal to the complexity class DTIME(2^{O(n)}). E, unlike the similar class EXPTIME, is not closed under polynomial-time many-one reductions.

Relationship to other classes
E is contained by NE.
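For comparison with EXPTIME, in the same notation:

\mathrm{E} = \mathrm{DTIME}(2^{O(n)}) = \bigcup_{k \ge 1} \mathrm{DTIME}(2^{kn}), \qquad \mathrm{EXPTIME} = \bigcup_{k \ge 1} \mathrm{DTIME}(2^{n^k})

Intuitively, the failure of closure under polynomial-time many-one reductions reflects the fact that such a reduction may stretch an input of length n to length n^c, and a running time of 2^{O(n^c)} on the image is in general no longer of the form 2^{O(n)}.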


Springer Science+Business Media
Springer Science+Business Media, commonly known as Springer, is a German multinational publishing company of books, e-books and peer-reviewed journals in science, humanities, technical and medical (STM) publishing. Originally founded in 1842 in Berlin, it expanded internationally in the 1960s, and through mergers in the 1990s and a sale to venture capitalists it fused with Wolters Kluwer and eventually became part of Springer Nature in 2015. Springer has major offices in Berlin, Heidelberg, Dordrecht, and New York City.

History
Julius Springer founded Springer-Verlag in Berlin in 1842 and his son Ferdinand Springer grew it from a small firm of 4 employees into Germany's then second-largest academic publisher with 65 staff in 1872 ("Chronology", Springer Science+Business Media). In 1964, Springer expanded its business internationally, op ...