Set Splitting
In computational complexity theory, the set splitting problem is the following decision problem: given a family ''F'' of subsets of a finite set ''S'', decide whether there exists a partition of ''S'' into two subsets ''S1'', ''S2'' such that all elements of ''F'' are split by this partition, i.e., none of the elements of ''F'' is completely contained in ''S1'' or ''S2''. Set splitting is one of Garey and Johnson's classical NP-complete problems, listed in ''Computers and Intractability: A Guide to the Theory of NP-Completeness''. The problem is sometimes called hypergraph 2-colorability (or Property B).

Variants

The optimization version of this problem is called max set splitting and requires finding the partition which maximizes the number of split elements of ''F''. It is an APX-complete problem and hence in NPO. The set ''k''-splitting problem is stated as follows: given ''S'', ''F'', and an integer ''k'', does there exist a partition of ''S'' which splits a ...
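The decision problem can be made concrete with a small brute-force sketch in Python (illustrative only; since set splitting is NP-complete, this enumeration is exponential in the size of ''S''). The function names and the encoding of the partition as a 0/1 assignment are our own choices, not part of the problem's standard presentation.

from itertools import product

def splits_all(side, family):
    # side maps each element of S to 0 or 1; a set is split when it
    # contains elements on both sides of the partition.
    return all(len({side[x] for x in subset}) == 2 for subset in family)

def set_splitting(S, family):
    # Try every 2-partition of S; return one that splits all of F, else None.
    elems = list(S)
    for bits in product((0, 1), repeat=len(elems)):
        side = dict(zip(elems, bits))
        if splits_all(side, family):
            return ({x for x in elems if side[x] == 0},
                    {x for x in elems if side[x] == 1})
    return None

# The "triangle" family cannot be split: any 2-partition of {1,2,3}
# leaves two elements on the same side, and they form a set in F.
print(set_splitting({1, 2, 3}, [{1, 2}, {2, 3}, {1, 3}]))    # None
print(set_splitting({1, 2, 3, 4}, [{1, 2}, {3, 4}, {1, 4}]))  # a valid split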



Not-all-equal 3-satisfiability
In computational complexity, not-all-equal 3-satisfiability (NAE3SAT) is an NP-complete variant of the Boolean satisfiability problem, often used in proofs of NP-completeness.

Definition

Like 3-satisfiability, an instance of the problem consists of a collection of Boolean variables and a collection of clauses, each of which combines three variables or negations of variables. However, unlike 3-satisfiability, which requires each clause to have at least one true Boolean value, NAE3SAT requires that the three values in each clause are not all equal to each other (in other words, at least one is true, and at least one is false).

Hardness

The NP-completeness of NAE3SAT can be proven by a reduction from 3-satisfiability (3SAT). First the nonsymmetric 3SAT is reduced to the symmetric NAE4SAT by adding a common dummy literal s to every clause, then NAE4SAT is reduced to NAE3SAT by splitting clauses as in the reduction of general k-satisfiability to 3SAT. In more detail, a 3SAT instance ...
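A sketch of this two-step reduction in Python, under the usual encoding of literals as nonzero integers (-v denoting the negation of variable v); the function names, the fresh splitting variables, and the brute-force checker are our own illustrative choices.

def threesat_to_nae3sat(clauses, num_vars):
    # Step 1 (3SAT -> NAE4SAT): add a common dummy literal s to every clause.
    # NAE clauses are invariant under flipping every variable, so we may
    # assume s is false; then each original clause needs a true literal.
    # Step 2 (NAE4SAT -> NAE3SAT): split (a, b, c, s) into (a, b, z) and
    # (-z, c, s) with a fresh variable z per clause, as for kSAT -> 3SAT.
    s = num_vars + 1
    next_var = s + 1
    nae = []
    for (a, b, c) in clauses:
        z = next_var
        next_var += 1
        nae.append((a, b, z))
        nae.append((-z, c, s))
    return nae, next_var - 1

def nae_satisfiable(clauses, num_vars):
    # Exponential brute force, for tiny instances only: an assignment works
    # when every clause contains both a true and a false literal.
    from itertools import product
    for bits in product((False, True), repeat=num_vars):
        value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(len({value(l) for l in cl}) == 2 for cl in clauses):
            return True
    return False

phi = [(1, 2, 3), (-1, -2, 3)]       # a satisfiable 3SAT instance
nae, n = threesat_to_nae3sat(phi, 3)
print(nae_satisfiable(nae, n))       # True, matching phi's satisfiability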



Association For Computing Machinery
The Association for Computing Machinery (ACM) is a US-based international learned society for computing. It was founded in 1947 and is the world's largest scientific and educational computing society. The ACM is a non-profit professional membership group, reporting nearly 110,000 student and professional members. Its headquarters are in New York City. The ACM is an umbrella organization for academic and scholarly interests in computer science (informatics). Its motto is "Advancing Computing as a Science & Profession".

History

In 1947, a notice was sent to various people: On January 10, 1947, at the Symposium on Large-Scale Digital Calculating Machinery at the Harvard Computation Laboratory, Professor Samuel H. Caldwell of the Massachusetts Institute of Technology spoke of the need for an association of those interested in computing machinery, and of the need for communication between them. ... After making some inquiries during May and June, we believe there is ample interest to ...




Journal Of The ACM
The ''Journal of the ACM'' (''JACM'') is a peer-reviewed scientific journal covering computer science in general, especially theoretical aspects. It is an official journal of the Association for Computing Machinery. Its current editor-in-chief is Venkatesan Guruswami. The journal was established in 1954, and "computer scientists universally hold the ''Journal of the ACM'' in high esteem".

See also

* ''Communications of the ACM'' (''CACM''), the monthly journal of the Association for Computing Machinery, established in 1958 with Saul Rosen as its first managing editor; it is sent to all ACM members. ...


Algorithmica
''Algorithmica'' is a monthly peer-reviewed scientific journal focusing on research and the application of computer science algorithms. The journal was established in 1986 and is published by Springer Science+Business Media. The editor-in-chief is Mohammad Hajiaghayi. Subject coverage includes sorting, searching, data structures, computational geometry, linear programming, VLSI, distributed computing, parallel processing, computer-aided design, robotics, graphics, database design, and software tools. ...



Springer Publishing
Springer Publishing Company is an American publishing company of academic journals and books, focusing on the fields of nursing, gerontology, psychology, social work, counseling, public health, and rehabilitation (neuropsychology). It was established in 1951 by Bernhard Springer, a great-grandson of Julius Springer, and is based in Midtown Manhattan, New York City.

History

Springer Publishing Company was founded by Bernhard Springer, the Berlin-born great-grandson of Julius Springer, who founded Springer-Verlag (now Springer Science+Business Media). Springer Publishing's first landmark publications included ''Livestock Health Encyclopedia'' by R. Seiden and the 1952 ''Handbook of Cardiology for Nurses''. The company's books soon branched into other fields, including medicine and psychology. Nursing publications grew rapidly in number, as Modell's ''Drugs in Current Use'', a small annual paperback, sold over 150,000 copies over several edi ...



Maximum Cut
In a graph, a maximum cut is a cut whose size is at least the size of any other cut. That is, it is a partition of the graph's vertices into two complementary sets ''S'' and ''T'', such that the number of edges between ''S'' and ''T'' is as large as possible. Finding such a cut is known as the max-cut problem. The problem can be stated simply as follows. One wants a subset ''S'' of the vertex set such that the number of edges between ''S'' and the complementary subset is as large as possible. Equivalently, one wants a bipartite subgraph of the graph with as many edges as possible. There is a more general version of the problem called weighted max-cut, where each edge is associated with a real number, its weight, and the objective is to maximize the total weight of the edges between ''S'' and its complement rather than the number of the edges. The weighted max-cut problem allowing both positive and negative weights can be trivially transformed into a weighted minimum cut problem by flipping the sign in all ...
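A brute-force sketch of weighted max-cut in Python (exponential, for illustration only; the problem is NP-hard). Representing each edge as a frozenset keyed to its weight is our own choice. Negating every weight turns the same search into a weighted minimum-cut search, matching the sign-flipping transformation mentioned above.

from itertools import combinations

def max_cut(vertices, weights):
    # weights: dict mapping frozenset({u, v}) -> real edge weight.
    # Returns (best cut weight, one side S of the best cut).
    vertices = list(vertices)
    best_w, best_S = float("-inf"), set()
    rest = vertices[1:]                      # fix vertices[0] on one side
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            S = {vertices[0], *combo}
            w = sum(wt for e, wt in weights.items()
                    if len(e & S) == 1)      # edge crosses the cut
            if w > best_w:
                best_w, best_S = w, S
    return best_w, best_S

# Triangle with unit weights: every nontrivial cut has value 2.
w = {frozenset(e): 1.0 for e in [("a", "b"), ("b", "c"), ("a", "c")]}
print(max_cut(["a", "b", "c"], w))           # (2.0, ...)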



Hypergraphs
In mathematics, a hypergraph is a generalization of a graph in which an edge can join any number of vertices. In contrast, in an ordinary graph, an edge connects exactly two vertices. Formally, a directed hypergraph is a pair (X,E), where X is a set of elements called ''nodes'', ''vertices'', ''points'', or ''elements'' and E is a set of pairs of subsets of X. Each of these pairs (D,C)\in E is called an ''edge'' or ''hyperedge''; the vertex subset D is known as its ''tail'' or ''domain'', and C as its ''head'' or ''codomain''. The order of a hypergraph (X,E) is the number of vertices in X. The size of the hypergraph is the number of edges in E. The order of an edge e=(D,C) in a directed hypergraph is |e| = (|D|, |C|): that is, the number of vertices in its tail followed by the number of vertices in its head. The definition above generalizes from a directed graph to a directed hypergraph by defining the head or tail of each edge as a set of vertices (C\subseteq X or D\ ...
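A minimal sketch of this definition as a Python data structure; the class and function names are our own, chosen to mirror the order/size terminology above.

from dataclasses import dataclass

@dataclass(frozen=True)
class DirectedHypergraph:
    # A directed hypergraph (X, E): each hyperedge is a (tail D, head C) pair.
    X: frozenset
    E: frozenset      # elements are (frozenset, frozenset) pairs

    def order(self):  # number of vertices in X
        return len(self.X)

    def size(self):   # number of hyperedges in E
        return len(self.E)

def edge_order(e):
    # |e| = (|D|, |C|): tail size followed by head size.
    D, C = e
    return (len(D), len(C))

H = DirectedHypergraph(
    X=frozenset({1, 2, 3, 4}),
    E=frozenset({(frozenset({1, 2}), frozenset({3})),
                 (frozenset({3}), frozenset({1, 4}))}),
)
print(H.order(), H.size())                              # 4 2
print(edge_order((frozenset({1, 2}), frozenset({3}))))  # (2, 1)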



Graph Coloring
In graph theory, graph coloring is a methodical assignment of labels traditionally called "colors" to elements of a graph. The assignment is subject to certain constraints, such as that no two adjacent elements have the same color. Graph coloring is a special case of graph labeling. In its simplest form, it is a way of coloring the vertices of a graph such that no two adjacent vertices are of the same color; this is called a vertex coloring. Similarly, an ''edge coloring'' assigns a color to each edge so that no two adjacent edges are of the same color, and a face coloring of a planar graph assigns a color to each face (or region) so that no two faces that share a boundary have the same color. Vertex coloring is often used to introduce graph coloring problems, since other coloring problems can be transformed into a vertex coloring instance. For example, an edge coloring of a graph is just ...
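A standard greedy vertex-coloring sketch in Python (our own illustration; it produces a proper coloring using at most one more color than the maximum degree, but not necessarily an optimal one).

def greedy_coloring(adj):
    # adj: dict mapping each vertex to an iterable of its neighbors.
    # Scan vertices in dict order; give each the smallest color not
    # already used by a colored neighbor.
    color = {}
    for v in adj:
        taken = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in taken:
            c += 1
        color[v] = c
    return color

# A 5-cycle has chromatic number 3; greedy finds a proper 3-coloring here.
cycle5 = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(greedy_coloring(cycle5))   # e.g. {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}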


Approximation Algorithm
In computer science and operations research, approximation algorithms are efficient algorithms that find approximate solutions to optimization problems (in particular NP-hard problems) with provable guarantees on the distance of the returned solution to the optimal one. Approximation algorithms naturally arise in the field of theoretical computer science as a consequence of the widely believed P ≠ NP conjecture. Under this conjecture, a wide class of optimization problems cannot be solved exactly in polynomial time. The field of approximation algorithms, therefore, tries to understand how closely it is possible to approximate optimal solutions to such problems in polynomial time. In the overwhelming majority of cases, the guarantee of such algorithms is a multiplicative one, expressed as an approximation ratio or approximation factor, i.e., the optimal solution is always guaranteed to be within a (predetermined) multiplicative factor of the returned solution. However, there a ...
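As a concrete instance of such a multiplicative guarantee, here is a sketch (ours, not from the excerpt) of the classic greedy 1/2-approximation for unweighted max-cut, the problem described earlier: each vertex joins the side that cuts more of its already-placed neighbors, so at least half of all edges end up crossing the cut, while the optimum can never exceed the total number of edges.

def half_approx_max_cut(vertices, edges):
    # Greedy 1/2-approximation: every vertex cuts at least half of its
    # edges to earlier vertices, so the final cut has >= |E|/2 edges;
    # since OPT <= |E|, the approximation ratio is at least 1/2.
    side = {}
    for v in vertices:
        on0 = sum(1 for (a, b) in edges
                  if (a == v and side.get(b) == 0)
                  or (b == v and side.get(a) == 0))
        on1 = sum(1 for (a, b) in edges
                  if (a == v and side.get(b) == 1)
                  or (b == v and side.get(a) == 1))
        side[v] = 0 if on1 >= on0 else 1   # cut the larger neighbor group
    cut = sum(1 for (a, b) in edges if side[a] != side[b])
    return cut, side

tri = [("a", "b"), ("b", "c"), ("a", "c")]
print(half_approx_max_cut(["a", "b", "c"], tri))  # cuts 2 of the 3 edges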



Computational Complexity Theory
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and explores the relationships between these classifications. A computational problem is a task solved by a computer, solvable by the mechanical application of mathematical steps, such as an algorithm. A problem is regarded as inherently difficult if its solution requires significant resources, whatever the algorithm used. The theory formalizes this intuition by introducing mathematical models of computation to study these problems and quantifying their computational complexity, i.e., the amount of resources needed to solve them, such as time and storage. Other measures of complexity are also used, such as the amount of communication (used in communication complexity), the number of gates in a circuit (used in circuit complexity), and the number of processors (used in parallel computing). O ...




Fixed-parameter Tractable
In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to ''multiple'' parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in . The first systematic work on parameterized complexity was done by . Under the assumption that P ≠ NP, there exist many natural problems that require super-polynomial running time when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter ''k''. Hence, if ''k'' is fixed at a small value and the growth ...
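The textbook illustration of fixed-parameter tractability (our example, not from the excerpt) is vertex cover parameterized by solution size ''k'': a simple branching algorithm runs in O(2^k · m) time on a graph with m edges, exponential only in the parameter.

def vertex_cover_at_most_k(edges, k):
    # FPT branching: if any edge remains, one of its endpoints must be in
    # the cover, so recurse on both choices with budget k - 1. The search
    # tree has at most 2^k leaves, giving O(2^k * m) time overall.
    if not edges:
        return True
    if k == 0:
        return False
    u, v = edges[0]
    without_u = [(a, b) for (a, b) in edges if u not in (a, b)]
    without_v = [(a, b) for (a, b) in edges if v not in (a, b)]
    return (vertex_cover_at_most_k(without_u, k - 1)
            or vertex_cover_at_most_k(without_v, k - 1))

# A triangle needs 2 vertices to cover all 3 edges.
tri = [("a", "b"), ("b", "c"), ("a", "c")]
print(vertex_cover_at_most_k(tri, 1))   # False
print(vertex_cover_at_most_k(tri, 2))   # True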