Fusion Tree
In computer science, a fusion tree is a type of tree data structure that implements an associative array on w-bit integers over a finite universe, where each of the input integers is non-negative and has size less than 2^w. When operating on a collection of n key–value pairs, it uses O(n) space and performs searches in O(\log_w n) time, which is asymptotically faster than a traditional self-balancing binary search tree, and also better than the van Emde Boas tree for large values of w. It achieves this speed by using certain constant-time operations that can be done on a machine word. Fusion trees were invented in 1990 by Michael Fredman and Dan Willard. Several advances have been made since Fredman and Willard's original 1990 paper. In 1999 it was shown how to implement fusion trees under a model of computation in which all of the underlying operations of the algorithm belong to AC0, a model of circuit complexity that allows addition and bitwise Boolean operations ...
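Those "constant-time operations on a machine word" can be made concrete with a small C sketch of the parallel-comparison trick that fusion trees build on; the field sizes, constants, and helper names below are this sketch's own illustrative choices, not Fredman and Willard's exact construction. Four 7-bit keys are packed into one 32-bit word, each in an 8-bit field topped by a spare "test" bit, and a single subtraction compares a query against all four keys at once:

    #include <stdint.h>
    #include <stdio.h>

    #define FIELDS 4
    #define FBITS  8                      /* 7 key bits + 1 test bit per field */
    #define TESTS  0x80808080u            /* the test bit of every field */

    /* Pack four 7-bit keys into one word, setting each field's test bit. */
    static uint32_t pack(const uint8_t k[FIELDS]) {
        uint32_t w = 0;
        for (int i = 0; i < FIELDS; i++)
            w |= ((uint32_t)(k[i] | 0x80)) << (i * FBITS);
        return w;
    }

    /* Broadcast a 7-bit query into every field, test bits left clear. */
    static uint32_t replicate(uint8_t q) {
        return (uint32_t)q * 0x01010101u;
    }

    int main(void) {
        const uint8_t keys[FIELDS] = {5, 23, 42, 99};  /* kept sorted */
        uint8_t q = 30;

        /* One subtraction: field i keeps its test bit iff keys[i] >= q,
         * and no borrow can ever cross a field boundary. */
        uint32_t diff = pack(keys) - replicate(q);
        uint32_t ge = diff & TESTS;

        /* Rank of q = number of keys below it (GCC/Clang builtin). */
        int rank = __builtin_popcount(~ge & TESTS);
        printf("rank of %d = %d\n", q, rank);   /* prints: rank of 30 = 2 */
        return 0;
    }

A real fusion tree packs many compressed key sketches into a word in roughly this way, which is how a node of high degree can be searched in constant time.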

Computer Science
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Algorithms and data structures are central to computer science. The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of re ...

Parallelism (computing)
Parallel computing is a type of computation in which many calculations or processes are carried out simultaneously. Large problems can often be divided into smaller ones, which can then be solved at the same time. There are several different forms of parallel computing: bit-level, instruction-level, data, and task parallelism. Parallelism has long been employed in high-performance computing, but has gained broader interest due to the physical constraints preventing frequency scaling. As S.V. Adve ''et al.'' put it in "Parallel Computing Research at Illinois: The UPCRC Agenda" (Parallel@Illinois, University of Illinois at Urbana-Champaign, November 2008): "The main techniques for these performance benefits—increased clock frequency and smarter but increasingly complex architectures—are now hitting the so-called power wall. The computer industry has accepted that future performance increases must largely come from increasing the number of processors (or cores) on a die, rather than mak ...
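As a minimal illustration of splitting a large problem into parts solved at the same time, here is a C sketch using POSIX threads (the struct and function names are this sketch's own): two threads sum disjoint halves of an array concurrently, and the partial sums are combined at the end.

    #include <pthread.h>
    #include <stdio.h>

    #define N 8
    static const int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

    struct chunk { int lo, hi; long sum; };   /* one thread's share */

    static void *sum_chunk(void *arg) {
        struct chunk *c = arg;
        c->sum = 0;
        for (int i = c->lo; i < c->hi; i++)
            c->sum += data[i];
        return NULL;
    }

    int main(void) {
        struct chunk halves[2] = { {0, N / 2, 0}, {N / 2, N, 0} };
        pthread_t t[2];

        /* Fork: each thread works on its own half simultaneously. */
        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, sum_chunk, &halves[i]);

        /* Join, then combine the independent partial results. */
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);
        printf("total = %ld\n", halves[0].sum + halves[1].sum);  /* 36 */
        return 0;
    }

In the taxonomy above this is data parallelism: the same operation applied to disjoint pieces of the data.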


Journal Of Computer And System Sciences
The ''Journal of Computer and System Sciences'' (JCSS) is a peer-reviewed scientific journal in the field of computer science. ''JCSS'' is published by Elsevier, and it was started in 1967. Many influential scientific articles have been published in ''JCSS''; these include five papers that have won the Gödel Prize, an annual prize for outstanding papers in the area of theoretical computer science, given jointly by the European Association for Theoretical Computer Science (EATCS) and the Association for Computing Machinery Special Interest Group on Algorithms and Computation Theory (ACM SIGACT). Its managing editor is Michael Segal.


Word RAM
In theoretical computer science, the word RAM (word random-access machine) model is a model of computation in which a random-access machine does arithmetic and bitwise operations on a word of w bits. Michael Fredman and Dan Willard created it in 1990 to simulate programming languages like C.

Model

The word RAM model is an abstract machine similar to a random-access machine, but with finite memory and word length. It works with words of size up to w bits, meaning it can store integers up to 2^w-1. Because the model assumes that the word size matches the problem size, that is, for a problem of size n, w \ge \log n, the word RAM model is a transdichotomous model. The model allows both arithmetic operations and bitwise operations, including logical shifts, to be done in constant time (the precise instruction set assumed by an algorithm or proof using the model may vary).

Algorithms and data structures

In the word RAM model, integer sorting can be done fairly efficiently. Yijie Han an ...
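A rough C illustration of the model's primitives, with w = 64 and arbitrary values, is the following; each statement corresponds to one constant-time word operation.

    #include <stdint.h>
    #include <stdio.h>

    /* The word RAM in miniature (w = 64): each operation below is a
     * single constant-time instruction on one machine word. */
    int main(void) {
        uint64_t a = 0x00FF00FF00FF00FFULL;   /* arbitrary word */
        uint64_t b = 42;

        uint64_t sum = a + b;                 /* addition           */
        uint64_t prd = b * 3;                 /* multiplication     */
        uint64_t msk = a & 0xFFFF;            /* bitwise AND        */
        uint64_t shl = b << 4;                /* logical shift left */
        uint64_t shr = a >> 8;                /* logical shift right */

        /* Transdichotomy: a 64-bit word can index any array with
         * n <= 2^64 elements, i.e. w >= log n holds for such inputs. */
        printf("%llx %llx %llx %llx %llx\n",
               (unsigned long long)sum, (unsigned long long)prd,
               (unsigned long long)msk, (unsigned long long)shl,
               (unsigned long long)shr);
        return 0;
    }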


SIAM Journal On Computing
The ''SIAM Journal on Computing'' is a scientific journal focusing on the mathematical and formal aspects of computer science. It is published by the Society for Industrial and Applied Mathematics (SIAM). Although its official ISO abbreviation is ''SIAM J. Comput.'', its publisher and contributors frequently use the shorter abbreviation ''SICOMP''. SICOMP typically hosts the special issues of the IEEE Annual Symposium on Foundations of Computer Science (FOCS) and the Annual ACM Symposium on Theory of Computing (STOC), where about 15% of the papers published in FOCS and STOC each year are invited to these special issues. For example, Volume 48 contains 11 out of 85 papers published in FOCS 2016.

Quasi-polynomial Time
In computational complexity theory and the analysis of algorithms, an algorithm is said to take quasi-polynomial time if its time complexity is quasi-polynomially bounded. That is, there should exist a constant c such that the worst-case running time of the algorithm, on inputs of size n, has an upper bound of the form 2^{O((\log n)^c)}. The decision problems with quasi-polynomial time algorithms are natural candidates for being NP-intermediate, neither having polynomial time nor likely to be NP-hard.

Complexity class

The complexity class QP consists of all problems that have quasi-polynomial time algorithms. It can be defined in terms of DTIME as follows:

\mathsf{QP} = \bigcup_{c \in \mathbb{N}} \mathsf{DTIME}\left(2^{(\log n)^c}\right)

Examples

An early example of a quasi-polynomial time algorithm was the Adleman–Pomerance–Rumely primality test. However, the problem of testing whether a number is a prime number has subsequently been shown to have a polynomial time algorithm, the AKS primality test. In some cases, quasi-polynomi ...
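A worked instance makes the bound concrete: the running time n^{\log n} is quasi-polynomial but not polynomial, since

    n^{\log n} = 2^{(\log n) \cdot (\log n)} = 2^{(\log n)^2},

which matches the definition with c = 2 yet grows faster than n^k for every fixed k.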

Hash Table
In computer science, a hash table is a data structure that implements an associative array, also called a dictionary or simply map; an associative array is an abstract data type that maps keys to values. A hash table uses a hash function to compute an ''index'', also called a ''hash code'', into an array of ''buckets'' or ''slots'', from which the desired value can be found. During lookup, the key is hashed and the resulting hash indicates where the corresponding value is stored. A map implemented by a hash table is called a hash map. Most hash table designs employ an imperfect hash function. Hash collisions, where the hash function generates the same index for more than one key, therefore typically must be accommodated in some way. In a well-dimensioned hash table, the average time complexity for each lookup is independent of the number of elements stored in the table. Many hash table designs also ...
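A minimal C sketch may help fix the vocabulary; the bucket count, hash function, and names here are illustrative, not any particular library's API. Keys hash to a bucket index, and colliding keys are accommodated by chaining them in a linked list:

    #include <stdio.h>
    #include <string.h>

    #define NBUCKETS 8

    struct entry {
        const char *key;
        int value;
        struct entry *next;   /* chain for colliding keys */
    };

    static struct entry *buckets[NBUCKETS];

    /* djb2-style string hash, reduced to a bucket index. */
    static unsigned hash(const char *s) {
        unsigned h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    static void put(struct entry *e) {
        unsigned i = hash(e->key);
        e->next = buckets[i];        /* prepend to the bucket's chain */
        buckets[i] = e;
    }

    static int get(const char *key, int *out) {
        for (struct entry *e = buckets[hash(key)]; e; e = e->next)
            if (strcmp(e->key, key) == 0) { *out = e->value; return 1; }
        return 0;                    /* key absent */
    }

    int main(void) {
        struct entry a = {"apple", 1, NULL}, b = {"banana", 2, NULL};
        put(&a); put(&b);
        int v;
        if (get("banana", &v)) printf("banana -> %d\n", v);   /* 2 */
        return 0;
    }

With a well-dimensioned bucket array the chains stay short on average, which is what makes the expected lookup time independent of the number of stored elements.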

Bitwise XOR
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands. On simple low-cost processors, bitwise operations are typically substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.

Bitwise operators

In the explanations below, any indication of a bit's position is counted from the right (least si ...
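For XOR in particular, a short C example (with arbitrary values) shows the defining behavior: each result bit is 1 exactly when the corresponding input bits differ, so XOR against a mask toggles exactly the masked bits.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t a = 0x5C;                 /* 0101 1100 */
        uint8_t b = 0x36;                 /* 0011 0110 */

        printf("a ^ b  = 0x%02X\n", (unsigned)(a ^ b));  /* 0x6A: differing bits */
        printf("a ^ a  = 0x%02X\n", (unsigned)(a ^ a));  /* 0x00: x ^ x == 0     */
        printf("a ^ 0  = 0x%02X\n", (unsigned)(a ^ 0));  /* 0x5C: 0 is identity  */

        uint8_t mask = 0x0F;              /* flip only the low nibble */
        printf("toggle = 0x%02X\n", (unsigned)(a ^ mask));  /* 0x53 */
        return 0;
    }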

Most Significant Bit
In computing, bit numbering is the convention used to identify the bit positions in a binary number.

Bit significance and indexing

In computing, the least significant bit (LSb) is the bit position in a binary integer representing the lowest-order place of the integer. Similarly, the most significant bit (MSb) represents the highest-order place of the binary integer. The LSb is sometimes referred to as the ''low-order bit''. Due to the convention in positional notation of writing less significant digits further to the right, the LSb also might be referred to as the ''right-most bit''. The MSb is similarly referred to as the ''high-order bit'' or ''left-most bit''. In both cases, the LSb and MSb correlate directly to the least significant digit and most significant digit of a decimal integer. Bit indexing correlates to the positional notation of the value in base 2. For this reason, bit in ...
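A small C sketch (the helper names are this sketch's own) locates the MSb and LSb of a word under the right-to-left numbering just described, where bit 0 is the LSb:

    #include <stdint.h>
    #include <stdio.h>

    /* Index of the highest set bit, counting from the right; -1 for 0. */
    static int msb_index(uint32_t x) {
        int i = -1;
        while (x) { x >>= 1; i++; }
        return i;
    }

    /* Index of the lowest set bit, counting from the right; -1 for 0. */
    static int lsb_index(uint32_t x) {
        if (!x) return -1;
        int i = 0;
        while (!(x & 1u)) { x >>= 1; i++; }
        return i;
    }

    int main(void) {
        uint32_t x = 0x50;                /* binary 0101 0000 */
        printf("MSb of 0x%X is bit %d\n", (unsigned)x, msb_index(x));  /* 6 */
        printf("LSb of 0x%X is bit %d\n", (unsigned)x, lsb_index(x));  /* 4 */
        return 0;
    }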

Bitwise AND
In computer programming, a bitwise operation operates on a bit string, a bit array or a binary numeral (considered as a bit string) at the level of its individual bits. It is a fast and simple action, basic to the higher-level arithmetic operations and directly supported by the processor. Most bitwise operations are presented as two-operand instructions where the result replaces one of the input operands. On simple low-cost processors, bitwise operations are typically substantially faster than division, several times faster than multiplication, and sometimes significantly faster than addition. While modern processors usually perform addition and multiplication just as fast as bitwise operations due to their longer instruction pipelines and other architectural design choices, bitwise operations do commonly use less power because of the reduced use of resources.

Bitwise operators

In the explanations below, any indication of a bit's position is counted from the right (least sig ...
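For AND in particular, a brief C example (values arbitrary) shows its use as a mask: a result bit is 1 only where both inputs have a 1, so ANDing keeps the selected bits and clears everything else.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t flags = 0xB5;             /* 1011 0101 */

        printf("low nibble  = 0x%02X\n", (unsigned)(flags & 0x0F));  /* 0x05 */
        printf("bit 2 set?  = %u\n", (unsigned)((flags >> 2) & 1));  /* 1    */
        printf("clear bit 0 = 0x%02X\n", (unsigned)(flags & ~1u));   /* 0xB4 */
        return 0;
    }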

C Programming Language
C (pronounced like the letter ''c'') is a general-purpose programming language. It was created in the 1970s by Dennis Ritchie and remains very widely used and influential. By design, C's features cleanly reflect the capabilities of the targeted CPUs. It has found lasting use in operating systems code (especially in kernels), device drivers, and protocol stacks, but its use in application software has been decreasing. C is commonly used on computer architectures that range from the largest supercomputers to the smallest microcontrollers and embedded systems. A successor to the programming language B, C was originally developed at Bell Labs by Ritchie between 1972 and 1973 to construct utilities running on Unix. It was applied to re-implementing the kernel of the Unix operating system. During the 1980s, C gradually gained popularity. It has become one of the most widely used programming langu ...