PL (complexity)
PL, or probabilistic L, is the class of languages recognizable by a polynomial-time logarithmic-space randomized machine with probability > 1/2 (this is called unbounded error). Equivalently, as shown below, PL is the class of languages recognized by an unbounded-time, unbounded-error logspace randomized machine. An example of a PL-complete problem (under logspace reduction) is deciding whether the determinant of a matrix (with integral coefficients) is positive. Given a matrix M and a number n, testing whether |M| > n is also PL-complete. By contrast, testing whether the matrix permanent is positive is PP-complete. PL^PL = PL in the sense that for every f in PL, PL is unchanged if it is extended to allow f^A as a subroutine, where A is the input string. PL contains NL and BPL and is contained in NC^2.

Approximating determinant in PL
The determinant of an integral matrix can be reduced to finding the difference between the number of accepting and rejecting paths on a polynomially sized directe ...
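To make the determinant-sign problem above concrete, here is a minimal Python sketch that decides whether an integer matrix has positive determinant using exact fraction-free (Bareiss) elimination. This is an ordinary polynomial-time computation offered only as an illustration of the decision problem, not a logspace or randomized procedure, and the name det_sign is just an illustrative choice.

    def det_sign(M):
        """Return +1, 0, or -1 for the sign of det(M), where M is a square
        integer matrix. Uses fraction-free (Bareiss) elimination, so all
        intermediate values stay integral and the sign is exact."""
        A = [list(row) for row in M]
        n = len(A)
        sign = 1   # tracks row swaps
        prev = 1   # previous pivot (Bareiss divisor)
        for k in range(n - 1):
            if A[k][k] == 0:
                # find a nonzero pivot below; if none exists, det(M) = 0
                for i in range(k + 1, n):
                    if A[i][k] != 0:
                        A[k], A[i] = A[i], A[k]
                        sign = -sign
                        break
                else:
                    return 0
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    # exact integer division is guaranteed by the Bareiss identity
                    A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
            prev = A[k][k]
        det = sign * A[n - 1][n - 1]
        return (det > 0) - (det < 0)

    # Example: det([[2, 1], [5, 3]]) = 1 > 0, so this is a "yes" instance.
    assert det_sign([[2, 1], [5, 3]]) == 1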



L (complexity)
In computational complexity theory, L (also known as LSPACE, LOGSPACE or DLOGSPACE) is the complexity class containing decision problems that can be solved by a deterministic Turing machine using a logarithmic amount of writable memory space. Formally, the Turing machine has two tapes, one of which encodes the input and can only be read, whereas the other tape has logarithmic size but can be written as well as read. Logarithmic space is sufficient to hold a constant number of pointers into the input and a logarithmic number of Boolean flags, and many basic logspace algorithms use the memory in this way.

Complete problems and logical characterization
Every non-trivial problem in L is complete under log-space reductions, so weaker reductions are required to identify meaningful notions of L-completeness, the most common being first-order reductions. A 2004 result by Omer Reingold shows that USTCON, the problem of whether there exists a path between two vertices in a given u ...
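As an illustration of the "constant number of pointers into the input" picture, here is a small Python sketch (the read_bit interface and function name are hypothetical stand-ins for the read-only input tape) that decides whether an input is a palindrome while keeping only two indices, i.e. O(log n) bits of writable state:

    def is_palindrome(read_bit, n):
        """Decide whether a read-only input of length n is a palindrome.

        The input is accessed only through read_bit(i); the writable state is
        just the two indices i and j, i.e. O(log n) bits, mirroring the
        logspace picture of a few pointers into a read-only input."""
        i, j = 0, n - 1
        while i < j:
            if read_bit(i) != read_bit(j):
                return False
            i += 1
            j -= 1
        return True

    s = "racecar"
    print(is_palindrome(lambda i: s[i], len(s)))  # True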


Permanent (mathematics)
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant.

Definition
The permanent of an n×n matrix A = (a_{ij}) is defined as \operatorname{perm}(A)=\sum_{\sigma\in S_n}\prod_{i=1}^n a_{i,\sigma(i)}. The sum here extends over all elements σ of the symmetric group S_n; i.e. over all permutations of the numbers 1, 2, ..., n. For example, \operatorname{perm}\begin{pmatrix}a&b \\ c&d\end{pmatrix}=ad+bc, and \operatorname{perm}\begin{pmatrix}a&b&c \\ d&e&f \\ g&h&i\end{pmatrix}=aei + bfg + cdh + ceg + bdi + afh. The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account. The permanent of a matrix A is denoted per A, perm A, or Per A, sometimes with parentheses around the argument. Minc uses Per(A) for the permanent of rectangula ...
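A minimal Python sketch of the defining formula, summing products over all permutations with no signs attached; the brute-force O(n·n!) enumeration is purely to illustrate the definition, not a practical method:

    from itertools import permutations
    from math import prod

    def permanent(A):
        """Permanent straight from the definition: a sum over all permutations
        sigma of the products a[i][sigma(i)], with no signs attached."""
        n = len(A)
        return sum(prod(A[i][s[i]] for i in range(n))
                   for s in permutations(range(n)))

    # 2x2 check: perm([[a, b], [c, d]]) = a*d + b*c; with 1, 2, 3, 4 this is 10.
    assert permanent([[1, 2], [3, 4]]) == 1 * 4 + 2 * 3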



PP (complexity)
In complexity theory, PP, or PPT, is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of less than 1/2 for all instances. The abbreviation PP refers to probabilistic polynomial time. The complexity class was defined by Gill in 1977. If a decision problem is in PP, then there is an algorithm running in polynomial time that is allowed to make random decisions, such that it returns the correct answer with chance higher than 1/2. In more practical terms, it is the class of problems that can be solved to any fixed degree of accuracy by running a randomized, polynomial-time algorithm a sufficient (but bounded) number of times. Turing machines that are polynomially-bound and probabilistic are characterized as PPT, which stands for probabilistic polynomial-time machines. This characterization of Turing machines does not require a bounded error probability. Hence, PP is the complexity class containing all problems s ...
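The acceptance rule described above can be made concrete with a small Python sketch that brute-forces every random string and accepts exactly when strictly more than half of the runs say YES. The helper names and the toy machine are hypothetical, and enumerating all random strings is exponential and serves only to illustrate the criterion:

    from itertools import product

    def pp_accepts(machine, x, r_bits):
        """Unbounded-error acceptance: accept x iff machine(x, r) returns True
        on strictly more than half of the 2**r_bits random strings r."""
        runs = [machine(x, r) for r in product((0, 1), repeat=r_bits)]
        return sum(runs) * 2 > len(runs)

    # Toy "machine": on odd x it always accepts; on even x it accepts only on r = (1, 1).
    toy = lambda x, r: (x % 2 == 1) or (r[0] == 1 and r[1] == 1)
    print(pp_accepts(toy, 3, 2), pp_accepts(toy, 4, 2))  # True False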


NL (complexity)
In computational complexity theory, NL (Nondeterministic Logarithmic-space) is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space. NL is a generalization of L, the class for logspace problems on a deterministic Turing machine. Since any deterministic Turing machine is also a nondeterministic Turing machine, we have that L is contained in NL. NL can be formally defined in terms of the computational resource nondeterministic space (or NSPACE) as NL = NSPACE(log ''n''). Important results in complexity theory allow us to relate this complexity class with other classes, telling us about the relative power of the resources involved. Results in the field of algorithms, on the other hand, tell us which problems can be solved with this resource. Like much of complexity theory, many important questions about NL are still open (see Unsolved problems in computer science). Occasionally N ...


BPL (complexity)
In computational complexity theory, BPL (Bounded-error Probabilistic Logarithmic-space), sometimes called BPLP (Bounded-error Probabilistic Logarithmic-space Polynomial-time), is the complexity class of problems solvable in logarithmic space and polynomial time with probabilistic Turing machines with two-sided error. It is named in analogy with BPP, which is similar but has no logarithmic space restriction.

Error model
The probabilistic Turing machines in the definition of BPL may only accept or reject incorrectly less than 1/3 of the time; this is called two-sided error. The constant 1/3 is arbitrary; any x with 0 ≤ x < 1/2 would suffice. This error can be made 2^{-p(x)} times smaller for any polynomial p(x) without using more than polynomial time or logarithmic space by running the algorithm repeatedly.
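A minimal Python sketch of the repetition argument just described: run the two-sided-error procedure independently several times and take a majority vote. The toy procedure and its 0.7 success probability are made up purely for illustration:

    import random

    def amplify(decide, x, runs=101):
        """Majority vote over independent runs of a two-sided-error procedure.

        If each run of decide(x) is correct with probability >= 2/3, a Chernoff
        bound makes the majority wrong with probability exponentially small in
        the number of runs; repetition costs more time but no more space."""
        yes = sum(decide(x) for _ in range(runs))
        return yes * 2 > runs

    # Toy procedure that answers "is x > 10?" correctly with probability 0.7.
    def noisy(x):
        truth = x > 10
        return truth if random.random() < 0.7 else not truth

    print(amplify(noisy, 42), amplify(noisy, 5))  # almost surely True, False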


Related classes

Since two-sided error is more general than one-sided error, RL and its complement co-RL are contained in BPL.




NC (complexity)
In computational complexity theory, the class NC (for "Nick's Class") is the set of decision problems decidable in polylogarithmic time on a parallel computer with a polynomial number of processors. In other words, a problem with input size n is in NC if there exist constants c and k such that it can be solved in time O((log n)^c) using O(n^k) parallel processors. Stephen Cook coined the name "Nick's class" after Nick Pippenger, who had done extensive research on circuits with polylogarithmic depth and polynomial size [Arora & Barak (2009), p. 120]. As in the case of circuit complexity theory, usually the class has an extra constraint that the circuit family must be uniform (see below). Just as the class P can be thought of as the tractable problems (Cobham's thesis), so NC can be thought of as the problems that can be efficiently solved on a parallel computer [Arora & Barak (2009), p. 118]. NC is a subset of P because polylogarithmic parallel computations can be simulated by polynomi ...
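As a toy illustration of polylogarithmic depth with polynomially many processors, the following Python sketch sums n numbers in O(log n) rounds of independent pairwise additions; the rounds are simulated sequentially here, but each round could be executed by one processor per pair:

    def parallel_sum(xs):
        """Sum n numbers in O(log n) 'rounds' of pairwise additions.

        Each round combines disjoint pairs, so with one processor per pair all
        additions in a round could run simultaneously; this is the kind of
        polylogarithmic-depth, polynomial-work schedule NC has in mind."""
        level = list(xs)
        rounds = 0
        while len(level) > 1:
            # pair up neighbours; an odd leftover element is carried forward
            nxt = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
            if len(level) % 2:
                nxt.append(level[-1])
            level = nxt
            rounds += 1
        return level[0], rounds

    total, depth = parallel_sum(range(16))
    print(total, depth)  # 120 4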



Markov Chain
In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs ''now''." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, fin ...
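A minimal Python sketch of a discrete-time Markov chain with two states; the transition probabilities are made-up illustrative numbers, and the next state is drawn from the current state alone, which is exactly the Markov property described above:

    import random

    # Transition probabilities P[current][next] for a toy two-state chain.
    P = {
        "sunny": {"sunny": 0.9, "rainy": 0.1},
        "rainy": {"sunny": 0.5, "rainy": 0.5},
    }

    def step(state):
        """Draw the next state using only the current state (Markov property)."""
        return random.choices(list(P[state]), weights=list(P[state].values()))[0]

    state, history = "sunny", []
    for _ in range(10):
        state = step(state)
        history.append(state)
    print(history)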



Determinant
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted det(A), det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism. However, if the determinant is zero, the matrix is referred to as singular, meaning it does not have an inverse. The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries. The determinant of a 2×2 matrix is \begin{vmatrix} a & b \\ c & d \end{vmatrix}=ad-bc, and the determinant of a 3×3 matrix is \begin{vmatrix} a & b & c \\ d & e ...
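A quick concrete check of the two characterizing properties just stated, using a naive cofactor-expansion determinant in Python; the matrices are arbitrary small integer examples chosen only for illustration:

    def det(A):
        """Determinant by cofactor expansion along the first row (fine for tiny
        matrices; used here only to check the two quoted properties)."""
        n = len(A)
        if n == 1:
            return A[0][0]
        total = 0
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for row in A[1:]]
            total += (-1) ** j * A[0][j] * det(minor)
        return total

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    A = [[2, 1, 0], [1, 3, 4], [0, 5, 6]]
    B = [[1, 0, 2], [3, 1, 0], [0, 2, 1]]
    T = [[2, 7, 1], [0, 3, 5], [0, 0, 4]]  # upper triangular

    assert det(matmul(A, B)) == det(A) * det(B)  # multiplicativity
    assert det(T) == 2 * 3 * 4                   # product of diagonal entries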