NP-complete
In computational complexity theory, NP-complete problems are the hardest of the problems to which ''solutions'' can be verified ''quickly''. Somewhat more precisely, a problem is NP-complete when:
# It is a decision problem, meaning that for any input to the problem, the output is either "yes" or "no".
# When the answer is "yes", this can be demonstrated through the existence of a short (polynomial length) ''solution''.
# The correctness of each solution can be verified quickly (namely, in polynomial time), and a brute-force search algorithm can find a solution by trying all possible solutions.
# The problem can be used to simulate every other problem for which we can verify quickly that a solution is correct.
Hence, if we could find solutions of some NP-complete problem quickly, we could quickly find the solutions of every other problem to which a given solution can be easily verified. The name "NP-complete" is short for "nondeterministic polynomial-time complete". In this name, ...
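The verification and brute-force points above can be made concrete with Boolean satisfiability, the canonical NP-complete problem. The sketch below is illustrative rather than taken from the article: verify checks a proposed truth assignment against a CNF formula in polynomial time, while brute_force tries all 2^n assignments.

<syntaxhighlight lang="python">
from itertools import product

# A CNF formula as a list of clauses; each literal is a non-zero integer,
# where k means variable k and -k means its negation (DIMACS-style).
FORMULA = [[1, -2], [2, 3], [-1, -3]]   # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)

def verify(formula, assignment):
    """Polynomial-time check that a candidate solution (truth assignment) works."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def brute_force(formula, num_vars):
    """Exponential-time search: try all 2**num_vars assignments."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if verify(formula, assignment):
            return assignment
    return None

print(brute_force(FORMULA, 3))   # {1: False, 2: False, 3: True}
</syntaxhighlight>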
P Versus NP Problem
The P versus NP problem is a major unsolved problem in theoretical computer science. Informally, it asks whether every problem whose solution can be quickly verified can also be quickly solved. Here, "quickly" means an algorithm exists that solves the task and runs in polynomial time (as opposed to, say, exponential time), meaning the task completion time is bounded above by a polynomial function of the size of the input to the algorithm. The general class of questions that some algorithm can answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if provided with an answer, it can be verified quickly. The class of questions where an answer can be ''verified'' in polynomial time is "NP", standing for "nondeterministic polynomial time". A nondeterministic Turing machine can move to a state that is not determined by the previous state. Such a machine could solve an NP problem in polynomial time by falling into t ...
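A minimal sketch of the "easy to verify, apparently hard to solve" asymmetry, using the subset sum problem; the function names are illustrative. Verifying a proposed certificate (a set of indices) is a single pass, while the only solver shown is a brute-force search over all subsets, which is exponential in the input size.

<syntaxhighlight lang="python">
from itertools import chain, combinations

def verify(numbers, target, indices):
    # Polynomial-time verification: check that the certificate (a set of
    # indices into `numbers`) selects values summing to the target.
    return sum(numbers[i] for i in indices) == target

def solve(numbers, target):
    # No polynomial-time algorithm is known in general: this search tries
    # every subset, which is exponential in len(numbers).
    idx = range(len(numbers))
    all_subsets = chain.from_iterable(combinations(idx, r) for r in range(len(numbers) + 1))
    for subset in all_subsets:
        if verify(numbers, target, subset):
            return subset
    return None

print(solve([3, 34, 4, 12, 5, 2], 9))   # (2, 4): numbers[2] + numbers[4] = 4 + 5 = 9
</syntaxhighlight>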
Cook–Levin Theorem
In computational complexity theory, the Cook–Levin theorem, also known as Cook's theorem, states that the Boolean satisfiability problem is NP-complete. That is, it is in NP, and any problem in NP can be reduced in polynomial time by a deterministic Turing machine to the Boolean satisfiability problem. The theorem is named after Stephen Cook and Leonid Levin. The proof is due to Richard Karp, based on an earlier proof (using a different notion of reducibility) by Cook. An important consequence of this theorem is that if there exists a deterministic polynomial-time algorithm for solving Boolean satisfiability, then every NP problem can be solved by a deterministic polynomial-time algorithm. The question of whether such an algorithm for Boolean satisfiability exists is thus equivalent to the P versus NP problem, which is still widely considered the most important unsolved problem in theoretical computer sc ...
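The following is not the tableau construction used in the theorem's proof, but a toy example of the kind of polynomial-time reduction the theorem concerns: graph 3-colouring, a problem in NP, encoded as a Boolean satisfiability instance. All names are illustrative.

<syntaxhighlight lang="python">
def colouring_to_cnf(vertices, edges, k=3):
    # Variable (v, c), numbered from 1, means "vertex v receives colour c".
    var = {(v, c): i + 1
           for i, (v, c) in enumerate((v, c) for v in vertices for c in range(k))}
    clauses = []
    for v in vertices:
        clauses.append([var[(v, c)] for c in range(k)])          # at least one colour
        for c1 in range(k):
            for c2 in range(c1 + 1, k):
                clauses.append([-var[(v, c1)], -var[(v, c2)]])   # at most one colour
    for (u, w) in edges:
        for c in range(k):
            clauses.append([-var[(u, c)], -var[(w, c)]])         # adjacent vertices differ
    return clauses   # satisfiable exactly when the graph is k-colourable

print(colouring_to_cnf(["a", "b"], [("a", "b")]))
</syntaxhighlight>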
NP-hard
In computational complexity theory, a computational problem ''H'' is called NP-hard if, for every problem ''L'' which can be solved in non-deterministic polynomial time, there is a polynomial-time reduction from ''L'' to ''H''. That is, assuming a solution for ''H'' takes 1 unit time, a solution for ''H'' can be used to solve ''L'' in polynomial time. As a consequence, finding a polynomial-time algorithm to solve a single NP-hard problem would give polynomial-time algorithms for all the problems in the complexity class NP. As it is suspected, but unproven, that P ≠ NP, it is unlikely that any polynomial-time algorithms for NP-hard problems exist. A simple example of an NP-hard problem is the subset sum problem. Informally, if ''H'' is NP-hard, then it is at least as difficult to solve as the problems in NP. However, the opposite direction is not true: some problems are undecidable, and therefore even more difficult to solve than all problems in NP, but they are probably not NP- ...
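A sketch of how a fast solver for an NP-hard problem would transfer to other problems, assuming a hypothetical black-box routine for subset sum: the PARTITION problem reduces to it in polynomial time, so a polynomial-time subset sum solver would yield one for PARTITION. The brute-force stand-in below only illustrates the interface.

<syntaxhighlight lang="python">
from itertools import chain, combinations

def partition(numbers, solve_subset_sum):
    # Polynomial-time reduction: PARTITION is a "yes" instance exactly when
    # some subset sums to half of the total, so hand that question to the
    # assumed subset sum solver.
    total = sum(numbers)
    if total % 2 == 1:
        return False              # an odd total can never split into equal halves
    return solve_subset_sum(numbers, total // 2)

def brute_subset_sum(numbers, target):
    # Exponential-time stand-in for the hypothetical fast subset sum solver.
    idx = range(len(numbers))
    return any(sum(numbers[i] for i in s) == target
               for s in chain.from_iterable(combinations(idx, r) for r in range(len(idx) + 1)))

print(partition([1, 5, 11, 5], brute_subset_sum))   # True: {11} versus {1, 5, 5}
</syntaxhighlight>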
NP (complexity)
In computational complexity theory, NP (nondeterministic polynomial time) is a complexity class used to classify decision problems. NP is the set of decision problems for which the problem instances, where the answer is "yes", have proofs verifiable in polynomial time by a deterministic Turing machine, or alternatively the set of problems that can be solved in polynomial time by a nondeterministic Turing machine. ''Polynomial time'' refers to how quickly the number of operations needed by an algorithm, relative to the size of the problem, grows. It is therefore a measure of efficiency of an algorithm.
* NP is the set of decision problems ''solvable'' in polynomial time by a nondeterministic Turing machine.
* NP is the set of decision problems ''verifiable'' in polynomial time by a deterministic Turing machine.
The first definition is the basis for the abbreviation NP; "Nondeterministic alg ...
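A small sketch (illustrative, not from the article) of the verifier-based definition: COMPOSITE is in NP because every "yes" instance n has a short certificate, namely a nontrivial divisor, that a deterministic machine checks in time polynomial in the number of digits of n.

<syntaxhighlight lang="python">
def verify_composite(n, d):
    # Deterministic, polynomial-time check of a certificate: d certifies that
    # n is composite exactly when d is a nontrivial divisor of n.
    return 1 < d < n and n % d == 0

print(verify_composite(91, 7))   # True: 91 = 7 * 13, so 7 is a valid certificate
print(verify_composite(97, 7))   # False: 7 certifies nothing about 97 (97 is prime)
</syntaxhighlight>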
Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). O ...
Polynomial Time
In theoretical computer science, the time complexity is the computational complexity that describes the amount of computer time it takes to run an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm are taken to be related by a constant factor. Since an algorithm's running time may vary among different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time required for inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense because there are only a finite number of possible inputs of a given size). In both cases, the time complexity is gener ...
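A minimal sketch of counting elementary operations, here comparisons in linear search: the worst case over inputs of size n is n comparisons, and the average over all positions of a present target is about (n + 1) / 2.

<syntaxhighlight lang="python">
def linear_search(items, target):
    # Count comparisons as the "elementary operation" of the algorithm.
    comparisons = 0
    for i, x in enumerate(items):
        comparisons += 1
        if x == target:
            return i, comparisons
    return None, comparisons

data = list(range(100))
print(linear_search(data, 99))   # worst case for n = 100: (99, 100), i.e. 100 comparisons
average = sum(linear_search(data, t)[1] for t in data) / len(data)
print(average)                   # average case over all present targets: 50.5
</syntaxhighlight>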
Approximation Algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In an overwhelming majority of the cases, the guarantee of such algorithms is a multiplicative one expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there a ...
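A standard example of such a multiplicative guarantee, sketched here for illustration: the maximal-matching heuristic for minimum vertex cover. Any cover must contain at least one endpoint of every edge picked below, and the algorithm takes both, so the returned cover is at most twice the optimum, i.e., an approximation ratio of 2.

<syntaxhighlight lang="python">
def vertex_cover_2approx(edges):
    # Greedy maximal matching: whenever an edge has neither endpoint covered,
    # take both endpoints. The picked edges are disjoint, and any optimal
    # cover needs at least one endpoint per picked edge, so |cover| <= 2 * OPT.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
print(vertex_cover_2approx(edges))   # {1, 2, 3, 4}; an optimal cover is {2, 4}
</syntaxhighlight>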
Decision Problem
In computability theory and computational complexity theory, a decision problem is a computational problem that can be posed as a yes–no question on a set of input values. An example of a decision problem is deciding whether a given natural number is prime. Another example is the problem, "given two numbers ''x'' and ''y'', does ''x'' evenly divide ''y''?" A decision procedure for a decision problem is an algorithmic method that answers the yes–no question on all inputs, and a decision problem is called decidable if there is a decision procedure for it. For example, the decision problem "given two numbers ''x'' and ''y'', does ''x'' evenly divide ''y''?" is decidable since there is a decision procedure called long division that gives the steps for determining whether ''x'' evenly divides ''y'' and the correct answer, ''YES'' or ''NO'', accordingly. Some of the most important problems in mathematics are undecidable, e.g. the halting problem. The field of computational ...
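The divisibility question above as a concrete decision procedure (a sketch; Python's remainder operator stands in for long division): it halts on every input pair and answers YES or NO, which is exactly what makes the problem decidable.

<syntaxhighlight lang="python">
def divides(x, y):
    # A decision procedure: halts on every input pair and answers YES or NO.
    return "YES" if x != 0 and y % x == 0 else "NO"

print(divides(3, 12))   # YES: 3 evenly divides 12
print(divides(5, 12))   # NO: 12 = 2 * 5 + 2 leaves a remainder
</syntaxhighlight>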
List Of Open Problems In Computer Science
This article is a list of notable unsolved problems in computer science. A problem in computer science is considered unsolved when no solution is known or when experts in the field disagree about proposed solutions.
Computational complexity
* P versus NP problem – The P vs NP problem is a major unsolved question in computer science that asks whether every problem whose solution can be quickly verified by a computer (NP) can also be quickly solved by a computer (P). This question has profound implications for fields such as cryptography, algorithm design, and computational theory.
* What is the relationship between BQP and NP?
* NC = P problem
* NP = co-NP problem
* P = BPP problem
* P = PSPACE problem
* L = NL problem
* PH = PSPACE problem
* L = P problem
* L = RL problem
* Unique games conjecture
* Is the exponential time hypothesis true?
** Is the strong exponential time hypothesis (SETH) true?
* Do one-way functions exist?
** Is public-key cryptography possible?
* ...
Symposium On Theory Of Computing
The Annual ACM Symposium on Theory of Computing (STOC) is an academic conference in the field of theoretical computer science. STOC has been organized annually since 1969, typically in May or June; the conference is sponsored by the Association for Computing Machinery special interest group SIGACT. The acceptance rate of STOC, averaged from 1970 to 2012, is 31%, with a rate of 29% in 2012. STOC and its annual IEEE counterpart FOCS (the Symposium on Foundations of Computer Science) are considered the two top conferences in theoretical computer science, considered broadly: they “are forums for some of the best work throughout theory of computing that promote breadth among theory of computing researchers and help to keep the community together.” Regular attendance at STOC and FOCS has been described as one of several defining characteristics of theoretical computer scientists.
Awards
The Gödel Prize for outstanding papers in theoretical computer science is presented alternate ...
Nondeterministic Turing Machine
In theoretical computer science, a nondeterministic Turing machine (NTM) is a theoretical model of computation whose governing rules specify more than one possible action in some situations. That is, an NTM's next action is ''not'' completely determined by its current state and the tape symbol it sees, unlike a deterministic Turing machine. NTMs are sometimes used in thought experiments to examine the abilities and limits of computers. One of the most important open problems in theoretical computer science is the P versus NP problem, which (among other equivalent formulations) concerns the question of how difficult it is to simulate nondeterministic computation with a deterministic computer.
Background
In essence, a Turing machine is imagined to be a simple computer that reads and writes symbols one at a time on an endless tape by strictly following a set of rules. It determines what action it should perform next according to its internal ''state'' and ''what symbol it cu ...
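A minimal sketch (not a full Turing-machine simulator) of deterministically simulating nondeterministic choices: a machine that makes k binary guesses and then runs a fast check is mimicked by trying all 2**k guess sequences and accepting if any branch accepts. The guessing is free for the NTM, but it multiplies the deterministic simulation's work by up to 2**k.

<syntaxhighlight lang="python">
from itertools import product

def simulate_ntm(k, branch_accepts):
    # Deterministic simulation: enumerate every sequence of k binary guesses
    # and accept if any branch does, at a cost of up to 2**k branch checks.
    return any(branch_accepts(guesses) for guesses in product((0, 1), repeat=k))

# Toy acceptance check: "does some bit string of length 4 have exactly three 1s?"
print(simulate_ntm(4, lambda g: sum(g) == 3))   # True (the NTM would simply guess one)
</syntaxhighlight>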