Tornado Code
In coding theory, Tornado codes are a class of erasure codes that support error correction. Tornado codes require a constant C more redundant blocks than the more data-efficient Reed–Solomon erasure codes, but are much faster to generate and can fix erasures faster. Software-based implementations of Tornado codes are about 100 times faster on small lengths and about 10,000 times faster on larger lengths than Reed–Solomon erasure codes. Since the introduction of Tornado codes, many other similar erasure codes have emerged, most notably Online codes, LT codes and Raptor codes. Tornado codes use a layered approach. All layers except the last use an LDPC error correction code, which is fast but has a chance of failure. The final layer uses a Reed–Solomon correction code, which is slower but is optimal in terms of failure recovery. A Tornado code dictates how many levels there are, how many recovery blocks are in each level, and the distribution used to generate blocks for the non-final layers.
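The layered construction can be made concrete with a minimal Python sketch, assuming a fixed two-level layout and plain XOR recovery blocks: each non-final layer produces recovery blocks by XORing random subsets of the blocks beneath it, and a small final layer stands in for the Reed–Solomon stage. The block counts, the subset-selection rule, and the helper `xor_blocks` are illustrative assumptions, not the published Tornado construction with its carefully chosen degree distributions.

```python
import os
import random

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def encode_layer(lower_blocks, n_recovery, rng):
    """Toy non-final layer: each recovery block is the XOR of a random
    subset of the blocks below it (real Tornado layers draw the subset
    sizes from a carefully designed distribution)."""
    return [
        xor_blocks(rng.sample(lower_blocks, k=max(1, len(lower_blocks) // 2)))
        for _ in range(n_recovery)
    ]

rng = random.Random(0)
data = [os.urandom(64) for _ in range(8)]   # 8 data blocks of 64 bytes each
layer1 = encode_layer(data, 4, rng)         # fast LDPC-style layer
layer2 = encode_layer(layer1, 2, rng)       # final layer (Reed–Solomon in a real Tornado code)
print(len(data), len(layer1), len(layer2))  # 8 4 2
```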
Coding Theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission, and data storage. Codes are studied by various scientific disciplines (such as information theory, electrical engineering, mathematics, linguistics, and computer science) for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding:
# Data compression (or ''source coding'')
# Error control (or ''channel coding'')
# Cryptographic coding
# Line coding
Data compression attempts to remove unwanted redundancy from the data of a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller.
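As a concrete instance of source coding, Python's standard-library `zlib` module exposes DEFLATE compression; the snippet below is only meant to show redundancy being removed and restored losslessly, not a statement about how DEFLATE works internally.

```python
import zlib

text = b"to be or not to be, that is the question. " * 20
compressed = zlib.compress(text, level=9)    # DEFLATE-based compression
restored = zlib.decompress(compressed)

assert restored == text                      # lossless: the original data comes back exactly
print(len(text), "->", len(compressed), "bytes")
```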
Erasure Code
In coding theory, an erasure code is a forward error correction (FEC) code under the assumption of bit erasures (rather than bit errors), which transforms a message of ''k'' symbols into a longer message (code word) with ''n'' symbols such that the original message can be recovered from a subset of the ''n'' symbols. The fraction ''r'' = ''k''/''n'' is called the code rate. The fraction ''k′/k'', where ''k′'' denotes the number of symbols required for recovery, is called reception efficiency. The recovery algorithm expects that it is known which of the ''n'' symbols are lost.
History
Erasure coding was invented by Irving Reed and Gustave Solomon in 1960. There are many different erasure coding schemes. The most popular erasure codes are Reed–Solomon coding, low-density parity-check (LDPC) codes, and turbo codes. As of 2023, modern data storage systems can be designed to tolerate the complete failure of a few disks without data loss, using one of 3 ...
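A minimal sketch of the erasure setting, assuming the simplest possible code, a single XOR parity symbol (so ''k'' = 2, ''n'' = 3, rate ''r'' = 2/3) rather than Reed–Solomon or LDPC: because the decoder knows which symbol was erased, it can rebuild that symbol from the survivors.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# k = 2 data symbols are expanded to n = 3 code word symbols (rate r = k/n = 2/3)
d1, d2 = b"HELLOWRD", b"ERASURES"
codeword = [d1, d2, xor_bytes(d1, d2)]       # third symbol is the XOR parity

# Simulate an erasure: the decoder knows WHICH symbol is missing, just not its value
lost = 0
received = {i: s for i, s in enumerate(codeword) if i != lost}

# Any single missing symbol is the XOR of the two surviving ones
recovered = xor_bytes(received[1], received[2])
assert recovered == d1
```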
Error Correction
In information theory and coding theory with applications in computer science and telecommunications, error detection and correction (EDAC) or error control are techniques that enable reliable delivery of digital data over unreliable communication channels. Many communication channels are subject to channel noise, and thus errors may be introduced during transmission from the source to a receiver. Error detection techniques allow detecting such errors, while error correction enables reconstruction of the original data in many cases.
Definitions
''Error detection'' is the detection of errors caused by noise or other impairments during transmission from the transmitter to the receiver. ''Error correction'' is the detection of errors and reconstruction of the original, error-free data.
History
In classical antiquity, copyists of the Hebrew Bible were paid for their work according to the number of stichs (lines of verse). As the prose books of the Bible were hardly ever written ...
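The distinction between detection and correction can be illustrated with the smallest possible scheme, a single even-parity bit: it detects any single flipped bit, but it carries too little redundancy to say which bit flipped, so it cannot correct. This is a toy sketch, not a code drawn from the text above.

```python
def add_parity(bits):
    """Append an even-parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def passes_check(word):
    """True if the received word still has an even number of 1s."""
    return sum(word) % 2 == 0

sent = add_parity([1, 0, 1, 1, 0, 0, 1])
received = sent.copy()
received[3] ^= 1                 # channel noise flips a single bit

print(passes_check(sent))        # True  -> no error detected
print(passes_check(received))    # False -> error detected, but its position is unknown
```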
Online Codes
In computer science, online codes are an example of rateless erasure codes. These codes can encode a message into a number of symbols such that knowledge of any fraction of them allows one to recover the original message (with high probability). ''Rateless'' codes produce an arbitrarily large number of symbols which can be broadcast until the receivers have enough symbols. The online encoding algorithm consists of several phases. First the message is split into ''n'' fixed-size message blocks. Then the ''outer encoding'' is an erasure code which produces auxiliary blocks that are appended to the message blocks to form a composite message. From this the inner encoding generates check blocks. Upon receiving a certain number of check blocks, some fraction of the composite message can be recovered. Once enough has been recovered, the outer decoding can be used to recover the original message.
Detailed discussion
Online codes are parameterised by the block size and two scalars, ''q'' ...
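The outer/inner structure described above can be sketched as follows, with the degree distribution and the actual parameterisation deliberately left out: auxiliary blocks are XORs of a few message blocks, and each check block is the XOR of a random subset of the composite message, shipped together with the indices it covers so a decoder would know what it combines. Block counts and subset sizes here are arbitrary illustrative choices, not the online-code parameters.

```python
import os
import random

def xor_all(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

rng = random.Random(1)
message = [os.urandom(32) for _ in range(10)]              # n fixed-size message blocks

# Outer encoding: auxiliary blocks, each the XOR of a few message blocks
aux = [xor_all(rng.sample(message, 3)) for _ in range(3)]
composite = message + aux

# Inner encoding: check blocks built from random subsets of the composite message
def make_check_block():
    degree = rng.choice([1, 2, 3])
    indices = rng.sample(range(len(composite)), degree)
    return indices, xor_all([composite[i] for i in indices])

check_blocks = [make_check_block() for _ in range(25)]
print(len(message), len(aux), len(check_blocks))           # 10 3 25
```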
LT Codes
In computer science, Luby transform codes (LT codes) are the first class of practical fountain codes that are near-optimal erasure correcting codes. They were invented by Michael Luby in 1998 and published in 2002. Like some other fountain codes, LT codes depend on sparse bipartite graphs to trade reception overhead for encoding and decoding speed. The distinguishing characteristic of LT codes is their use of a particularly simple algorithm based on the exclusive or (XOR) operation (⊕) to encode and decode the message. The XOR operation has the very useful property that ''A'' ⊕ ''A'' = 0, where ''A'' is an arbitrary string of bits. LT codes are ''rateless'' because the encoding algorithm can in principle produce an infinite number of message packets (i.e., the percentage of packets that must be received to decode the message can be arbitrarily small). They are ''erasure correcting codes'' because they can be used to ...
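The role of the ''A'' ⊕ ''A'' = 0 identity in decoding can be seen in a few lines: a degree-2 check packet, XORed with the one neighbour the decoder already knows, collapses to the other neighbour. The packet contents below are placeholders, and the LT degree distribution is not modelled.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"first packet...."
p2 = b"second packet..."
check = xor_bytes(p1, p2)            # encoder: check packet = p1 XOR p2

# Decoder already holds p1; since p1 XOR p1 = 0, the check packet reduces to p2
recovered = xor_bytes(check, p1)
assert recovered == p2
```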
Raptor Codes
In computer science, Raptor codes (''rapid tornado''; see Tornado codes) are the first known class of fountain codes with linear time encoding and decoding. They were invented by Amin Shokrollahi in 2000/2001 and were first published in 2004 as an extended abstract. Raptor codes are a significant theoretical and practical improvement over LT codes, which were the first practical class of fountain codes. Raptor codes, as with fountain codes in general, encode a given source block of data consisting of a number ''k'' of equal-size source symbols into a potentially limitless sequence of encoding symbols, such that reception of any ''k'' or more encoding symbols allows the source block to be recovered with some non-zero probability. The probability that the source block can be recovered increases with the number of encoding symbols received beyond ''k'', becoming very close to 1 once the number of received encoding symbols is only very slightly larger than ''k''. For example, with the ...
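The "slightly more than ''k'' symbols suffice" behaviour can be illustrated, under a simplifying assumption, with a random binary fountain: each encoding symbol is the XOR of a uniformly random subset of the ''k'' source symbols, and decoding succeeds when the received combinations have full rank over GF(2). This is not a Raptor code (there is no precode and no structured degree distribution, so it lacks the linear-time property); it only shows the recovery probability climbing toward 1 with a few symbols of overhead.

```python
import random

def gf2_rank(rows):
    """Rank over GF(2) of binary row vectors packed into Python ints."""
    pivots = {}
    rank = 0
    for row in rows:
        cur = row
        while cur:
            top = cur.bit_length() - 1
            if top in pivots:
                cur ^= pivots[top]       # eliminate against an existing pivot
            else:
                pivots[top] = cur
                rank += 1
                break
    return rank

def recovery_rate(k, overhead, trials=2000, seed=2):
    """Fraction of trials in which k + overhead random XOR combinations
    of k source symbols are enough to solve for all of them."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        rows = [rng.getrandbits(k) for _ in range(k + overhead)]
        if gf2_rank(rows) == k:
            wins += 1
    return wins / trials

k = 32
for extra in range(6):
    print(extra, round(recovery_rate(k, extra), 3))   # probability rises quickly toward 1
```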
Low-density Parity-check Code
Low-density parity-check (LDPC) codes are a class of error correction codes which (together with the closely related turbo codes) have gained prominence in coding theory and information theory since the late 1990s. The codes today are widely used in applications ranging from wireless communications to flash-memory storage. Together with turbo codes, they sparked a revolution in coding theory, achieving order-of-magnitude improvements in performance compared to traditional error correction codes. Central to the performance of LDPC codes is their adaptability to the iterative belief propagation decoding algorithm. Under this algorithm, they can be designed to approach theoretical limits (channel capacities) of many channels at low computation costs. Theoretical analysis of LDPC codes focuses on sequences of codes of fixed code rate and increasing block length. These sequences are typically tailored to a set of channels. For appropriately designed sequences, the ...
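A toy-scale view of the parity-check structure that iterative decoders work on, assuming a deliberately tiny matrix (too small to be genuinely "low-density") and a single round of naive bit flipping as a crude stand-in for belief propagation; the matrix and codeword below are illustrative choices, not taken from any standardised LDPC code.

```python
# 4 checks x 6 bits; every bit participates in exactly two checks and no two
# checks share more than one bit (an illustrative toy matrix).
H = [
    [1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
]

def syndrome(word):
    """One parity per check row; all zeros means every check is satisfied."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def flip_once(word):
    """Flip the bit involved in the most unsatisfied checks (one round of
    naive bit flipping, standing in for iterative belief propagation)."""
    s = syndrome(word)
    if not any(s):
        return word
    votes = [sum(s[i] for i in range(len(H)) if H[i][j]) for j in range(len(word))]
    worst = votes.index(max(votes))
    return [bit ^ 1 if j == worst else bit for j, bit in enumerate(word)]

codeword = [1, 1, 0, 1, 0, 0]            # satisfies every parity check
noisy = codeword.copy()
noisy[4] ^= 1                            # single bit error

print(syndrome(codeword))                # [0, 0, 0, 0]
print(syndrome(noisy))                   # nonzero syndrome flags the error
print(flip_once(noisy) == codeword)      # True: one flipping round repairs it
```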
Michael Luby
Michael George Luby is a mathematician and computer scientist, CEO of BitRipple, senior research scientist at the International Computer Science Institute (ICSI), former VP Technology at Qualcomm, and co-founder and former chief technology officer of Digital Fountain. In coding theory he is known for leading the invention of the Tornado codes and the LT codes. In cryptography he is known for his contributions showing that any one-way function can be used as the basis for private-key cryptography, and for his analysis, in collaboration with Charles Rackoff, of the Feistel cipher construction. His distributed algorithm to find a maximal independent set in a computer network has also been influential. Luby received his B.Sc. in mathematics from the Massachusetts Institute of Technology in 1975. In 1983 he was awarded a Ph.D. in computer science from the University of California, Berkeley. In 1996–1997, while at the ICSI, he led the team that invented Tornado codes. These were the first LDPC ...
International Computer Science Institute
The International Computer Science Institute (ICSI) is an independent, non-profit research organization located in Berkeley, California, United States. Since its founding in 1988, ICSI has maintained an affiliation agreement with the University of California, Berkeley, where several of its members hold faculty appointments.
Research areas
ICSI's research activities include Internet architecture, network security, network routing, speech and speaker recognition, spoken and text-based natural language processing, computer vision, multimedia, privacy, and biological system modeling.
Research groups and leaders
* The Institute's director is Dr. Lea Shanley.
* SIGCOMM Award winner Professor Scott Shenker, one of the most-cited authors in computer science, is the Chief Scientist and head of the New Initiatives group.
* SIGCOMM Award winner Professor Vern Paxson, who leads network security efforts and who previously chaired the Internet Research Task Force.
* Professor Jerry Feldman (computer ...