P-complete Problems
P-complete Problems
In computational complexity theory, a decision problem is P-complete (complete for the complexity class P) if it is in P and every problem in P can be reduced to it by an appropriate reduction. The notion of P-complete decision problems is useful in the analysis of:
* which problems are difficult to parallelize effectively,
* which problems are difficult to solve in limited space,
specifically when stronger notions of reducibility than polytime-reducibility are considered. The specific type of reduction used varies and may affect the exact set of problems. Generically, reductions stronger than polynomial-time reductions are used, since all languages in P (except the empty language and the language of all strings) are P-complete under polynomial-time reductions. If we use NC reductions, that is, reductions which can operate in polylogarithmic time on a parallel computer with a polynomial number of processors, then all P-complete problems lie outside NC and so cannot be effec ...
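Stated a little more formally (a standard formulation, not quoted from the excerpt above), using logspace many-one reductions as one common choice of reduction stronger than polynomial time:

L \text{ is P-complete} \iff L \in \mathsf{P} \ \text{ and } \ \forall A \in \mathsf{P}:\; A \le_{\log} L .

Replacing \le_{\log} with NC reducibility gives the variant discussed above.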



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer. A computational problem is solvable by mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition, by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity) and the number of processors (used in parallel computing). O ...



Boolean Circuit
In computational complexity theory and circuit complexity, a Boolean circuit is a mathematical model for combinational digital logic circuits. A formal language can be decided by a family of Boolean circuits, one circuit for each possible input length. Boolean circuits are defined in terms of the logic gates they contain. For example, a circuit might contain binary AND and OR gates and unary NOT gates, or be entirely described by binary NAND gates. Each gate corresponds to some Boolean function that takes a fixed number of bits as input and outputs a single bit. Boolean circuits provide a model for many digital components used in computer engineering, including multiplexers, adders, and arithmetic logic units, but they exclude sequential logic. They are an abstraction that omits many aspects relevant to designing real digital logic circuits, such as metastability, fanout, glitches, power consumption, and propagation delay variability.

Formal definition
In givi ...
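As a rough illustration of this model (the gate list and names below are invented for the example), a circuit whose gates are given in topological order can be evaluated gate by gate:

# Minimal sketch: evaluate a Boolean circuit given as a list of gates in
# topological order. Gate names and the example circuit are illustrative only.
from typing import Dict, List, Tuple

def evaluate_circuit(inputs: Dict[str, bool],
                     gates: List[Tuple[str, str, List[str]]]) -> Dict[str, bool]:
    """Each gate is (output_name, op, input_names) with op in {AND, OR, NOT}."""
    values = dict(inputs)
    for out, op, ins in gates:
        bits = [values[i] for i in ins]
        if op == "AND":
            values[out] = all(bits)
        elif op == "OR":
            values[out] = any(bits)
        elif op == "NOT":
            values[out] = not bits[0]
        else:
            raise ValueError(f"unknown gate {op}")
    return values

# Example circuit: f(x, y) = (x AND y) OR (NOT x)
gates = [("g1", "AND", ["x", "y"]), ("g2", "NOT", ["x"]), ("f", "OR", ["g1", "g2"])]
print(evaluate_circuit({"x": True, "y": False}, gates)["f"])  # False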


Lambda Calculus
In mathematical logic, the lambda calculus (also written as λ-calculus) is a formal system for expressing computation based on function abstraction and application using variable binding and substitution. Untyped lambda calculus, the topic of this article, is a universal model of computation that can be used to simulate any Turing machine (and vice versa). It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics. In 1936, Church found a formulation which was logically consistent, and documented it in 1940. Lambda calculus consists of constructing lambda terms and performing reduction operations on them. A term is defined as any valid lambda calculus expression. In the simplest form of lambda calculus, terms are built using only the following rules: # x: A ...
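A minimal sketch of lambda terms and a single beta-reduction step, assuming for simplicity that all bound variable names are distinct so that naive substitution cannot capture variables:

# Untyped lambda terms and one step of beta reduction (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    func: object
    arg: object

def substitute(term, name, value):
    """Replace free occurrences of `name` in `term` with `value` (no capture handling)."""
    if isinstance(term, Var):
        return value if term.name == name else term
    if isinstance(term, Lam):
        if term.param == name:      # name is shadowed inside this abstraction
            return term
        return Lam(term.param, substitute(term.body, name, value))
    return App(substitute(term.func, name, value),
               substitute(term.arg, name, value))

def beta_step(term):
    """Contract the leftmost-outermost redex, if any."""
    if isinstance(term, App) and isinstance(term.func, Lam):
        return substitute(term.func.body, term.func.param, term.arg)
    if isinstance(term, App):
        return App(beta_step(term.func), beta_step(term.arg))
    if isinstance(term, Lam):
        return Lam(term.param, beta_step(term.body))
    return term

# (λx. x) y  reduces to  y
print(beta_step(App(Lam("x", Var("x")), Var("y"))))  # Var(name='y')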


Type Theory
In mathematics and theoretical computer science, a type theory is the formal presentation of a specific type system. Type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that have been proposed as foundations are:
* Typed λ-calculus of Alonzo Church
* Intuitionistic type theory of Per Martin-Löf
Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's Calculus of Inductive Constructions.

History
Type theory was created to avoid paradoxes in naive set theory and formal logic, such as Russell's paradox which demonstrates that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Bertrand Russell proposed various solutions to this problem. By 1908, Russell arrive ...


Type Inference
Type inference, sometimes called type reconstruction, refers to the automatic detection of the type of an expression in a formal language. These include programming languages and mathematical type systems, but also natural languages in some branches of computer science and linguistics.

Nontechnical explanation
In a typed language, a term's type determines the ways it can and cannot be used in that language. For example, consider the English language and terms that could fill in the blank in the phrase "sing _." The term "a song" is of singable type, so it could be placed in the blank to form a meaningful phrase: "sing a song." On the other hand, the term "a friend" does not have the singable type, so "sing a friend" is nonsense. At best it might be metaphor; bending type rules is a feature of poetic language. A term's type can also affect the interpretation of operations involving that term. For instance, "a song" is of composable type, so we interpret it as the thing created ...
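As a toy illustration (not from the article), a type can be inferred bottom-up for a tiny expression language with integer and string literals, addition, and pairs; the expression encoding below is invented for the example:

# Minimal sketch of bottom-up type inference for a toy expression language.
def infer(expr):
    if isinstance(expr, int):
        return "int"
    if isinstance(expr, str):
        return "str"
    tag = expr[0]
    if tag == "add":                      # ("add", e1, e2): both operands must be int
        t1, t2 = infer(expr[1]), infer(expr[2])
        if t1 == t2 == "int":
            return "int"
        raise TypeError(f"cannot add {t1} and {t2}")
    if tag == "pair":                     # ("pair", e1, e2): product type
        return (infer(expr[1]), infer(expr[2]))
    raise ValueError(f"unknown expression {expr!r}")

print(infer(("pair", ("add", 1, 2), "song")))   # ('int', 'str')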



Gzip
gzip is a file format and a software application used for file compression and decompression. The program was created by Jean-loup Gailly and Mark Adler as a free software replacement for the compress program used in early Unix systems, and intended for use by GNU (from which the "g" of gzip is derived). Version 0.1 was first publicly released on 31 October 1992, and version 1.0 followed in February 1993. The decompression of the gzip format can be implemented as a streaming algorithm, an important feature for Web protocols, data interchange and ETL (in standard pipes) applications.

File format
gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding. DEFLATE was intended as a replacement for LZW and other patent-encumbered data compression algorithms which, at the time, limited the usability of the compress utility and other popular archivers. "gzip" also refers to the gzip file format (described in the table below). In sho ...
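For example, Python's standard gzip module exposes the format directly; this small sketch simply round-trips some repetitive in-memory data:

# Compress and decompress gzip data with Python's standard library.
import gzip

original = b"the quick brown fox jumps over the lazy dog\n" * 100
compressed = gzip.compress(original)      # DEFLATE-compressed, gzip-framed
restored = gzip.decompress(compressed)

print(len(original), len(compressed))     # repetitive input compresses well
assert restored == original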




LZ77
LZ77 and LZ78 are the two lossless data compression algorithms published in papers by Abraham Lempel and Jacob Ziv in 1977 and 1978. They are also known as Lempel–Ziv 1 (LZ1) and Lempel–Ziv 2 (LZ2) respectively. These two algorithms form the basis for many variations including Lempel–Ziv–Welch (LZW), Lempel–Ziv–Storer–Szymanski (LZSS), the Lempel–Ziv–Markov chain algorithm (LZMA) and others. Besides their academic influence, these algorithms formed the basis of several ubiquitous compression schemes, including GIF and the DEFLATE algorithm used in PNG and ZIP. They are both theoretically dictionary coders. LZ77 maintains a sliding window during compression. This was later shown to be equivalent to the explicit dictionary constructed by LZ78; however, they are only equivalent when the entire data is intended to be decompressed. Since LZ77 encodes and decodes from a sliding window over previously seen characters, decompressio ...
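A rough LZ77-style sketch (illustrative, not the exact 1977 formulation): the encoder searches a bounded sliding window for the longest match and emits (offset, length, next_char) triples, which the decoder replays byte by byte:

# Toy LZ77-style compressor/decompressor with a quadratic-time match search.
def lz77_compress(data: bytes, window: int = 255) -> list:
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        start = max(0, i - window)
        for j in range(start, i):          # find the longest match in the window
            length = 0
            while (i + length < len(data) - 1 and
                   data[j + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, i - j
        next_char = data[i + best_len]     # literal byte following the match
        out.append((best_off, best_len, next_char))
        i += best_len + 1
    return out

def lz77_decompress(triples: list) -> bytes:
    out = bytearray()
    for off, length, ch in triples:
        for _ in range(length):            # byte-wise copy handles overlapping matches
            out.append(out[-off])
        out.append(ch)
    return bytes(out)

data = b"abracadabra abracadabra"
assert lz77_decompress(lz77_compress(data)) == data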



Conway's Game Of Life
The Game of Life, also known as Conway's Game of Life or simply Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970. It is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves. It is Turing complete and can simulate a von Neumann universal constructor or any other Turing machine.

Rules
The universe of the Game of Life is an infinite, two-dimensional orthogonal grid of square cells, each of which is in one of two possible states, live or dead (or populated and unpopulated, respectively). Every cell interacts with its eight neighbours, which are the cells that are horizontally, vertically, or diagonally adjacent. At each step in time, the following transitions occur:
# Any live cell with fewer than ...
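A small sketch of one time step over a finite set of live cells, using the standard birth/survival rules (any cell not in the set is dead):

# One Game of Life step on a sparse set of live (x, y) cells.
from collections import Counter

def step(live: set) -> set:
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has 3 live neighbours,
    # or 2 live neighbours and is currently live.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(blinker))   # {(1, -1), (1, 0), (1, 1)} (set order may vary)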



Boolean Satisfiability Problem
In logic and computer science, the Boolean satisfiability problem (sometimes called the propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) asks whether there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the formula's variables can be consistently replaced by the values TRUE or FALSE to make the formula evaluate to TRUE. If this is the case, the formula is called satisfiable, else unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proven to be NP-complete; this is the Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natu ...
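A brute-force satisfiability check for small formulas (illustrative only; practical SAT solvers use far more sophisticated techniques):

# Try every truth assignment for the given variables.
from itertools import product

def is_satisfiable(variables, formula) -> bool:
    """formula maps a dict of truth values to True/False."""
    return any(formula(dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

# "a AND NOT b" is satisfiable; "a AND NOT a" is not.
print(is_satisfiable(["a", "b"], lambda v: v["a"] and not v["b"]))  # True
print(is_satisfiable(["a"],      lambda v: v["a"] and not v["a"]))  # False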


Horn Clause
In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form that gives it useful properties for use in logic programming, formal specification, universal algebra and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951.

Definition
A Horn clause is a disjunctive clause (a disjunction of literals) with at most one positive, i.e. unnegated, literal. Conversely, a disjunction of literals with at most one negated literal is called a dual-Horn clause. A Horn clause with exactly one positive literal is a definite clause or a strict Horn clause; a definite clause with no negative literals is a unit clause, and a unit clause without variables is a fact; a Horn clause without a positive literal is a goal clause. The empty clause, consisting of no literals (which is equivalent to false), is a goal clause. These three kinds of Horn clauses are illustrated in the follo ...


Horn-satisfiability
In formal logic, Horn-satisfiability, or HORNSAT, is the problem of deciding whether a given conjunction of propositional Horn clauses is satisfiable or not. Horn-satisfiability and Horn clauses are named after Alfred Horn. A Horn clause is a clause with at most one positive literal, called the head of the clause, and any number of negative literals, forming the body of the clause. A Horn formula is a propositional formula formed by conjunction of Horn clauses. Horn satisfiability is actually one of the "hardest" or "most expressive" problems which is known to be computable in polynomial time, in the sense that it is a P-complete problem (author's 2008 draft version, see p. 213f). The extension of the problem for quantified Horn formulae can also be solved in polynomial time. The Horn satisfiability problem can also be asked for propositional many-valued logics. The algorithms are not usually linear, but some are polynomial; see Hähnle (2001 or 2003) for a survey. Algori ...
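A simple quadratic sketch of Horn satisfiability by forward chaining (a linear-time algorithm exists but needs more careful bookkeeping); each clause is represented as a head atom, or None for a goal clause, together with the set of atoms that appear negated in its body:

# Forward chaining for propositional Horn clauses.
def horn_sat(clauses) -> bool:
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= true_atoms:          # every body atom is already forced true
                if head is None:
                    return False             # a goal clause is violated: unsatisfiable
                if head not in true_atoms:
                    true_atoms.add(head)     # the head is forced true
                    changed = True
    return True                              # all remaining atoms can be set false

# (x) AND (NOT x OR y) AND (NOT y OR NOT z): satisfiable with x = y = True, z = False
print(horn_sat([("x", set()), ("y", {"x"}), (None, {"y", "z"})]))   # True
# (x) AND (NOT x OR y) AND (NOT x OR NOT y): unsatisfiable
print(horn_sat([("x", set()), ("y", {"x"}), (None, {"x", "y"})]))   # False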