List Decoding
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm, instead of outputting a single possible message, outputs a list of possibilities, one of which is correct. This allows a greater number of errors to be handled than unique decoding permits. The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word, cannot tolerate errors beyond half the minimum distance of the code. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid-1990s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder outputs a list of codewords guaranteed to contain the correct one.
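To make the model concrete, here is a minimal brute-force sketch in Python (the toy repetition code, the radius, and the function names are illustrative, not from any standard library): given a received word, it simply returns every codeword within a chosen Hamming radius, which may be more than one.

```python
def hamming_distance(u, v):
    """Number of positions where two equal-length words differ."""
    return sum(a != b for a, b in zip(u, v))

def list_decode(received, codewords, radius):
    """Return every codeword within the given Hamming radius of `received`.
    A unique decoder must fail when more than one codeword qualifies;
    a list decoder simply returns all of them."""
    return [c for c in codewords if hamming_distance(received, c) <= radius]

# Toy example: the length-4 binary repetition code {0000, 1111} has
# minimum distance 4, so unique decoding handles at most 1 error.
code = [(0, 0, 0, 0), (1, 1, 1, 1)]
received = (1, 1, 0, 0)  # 2 errors: beyond the unique-decoding radius
print(list_decode(received, code, radius=2))  # both codewords are listed
```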
Coding Theory
Coding theory is the study of the properties of codes and their respective fitness for specific applications. Codes are used for data compression, cryptography, error detection and correction, data transmission, and data storage. Codes are studied by various scientific disciplines—such as information theory, electrical engineering, mathematics, linguistics, and computer science—for the purpose of designing efficient and reliable data transmission methods. This typically involves the removal of redundancy and the correction or detection of errors in the transmitted data. There are four types of coding:

1. Data compression (or ''source coding'')
2. Error control (or ''channel coding'')
3. Cryptographic coding
4. Line coding

Data compression attempts to remove unwanted redundancy from the data of a source in order to transmit it more efficiently. For example, DEFLATE data compression makes files smaller, as the sketch below illustrates.
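As a hedged illustration of source coding, Python's standard zlib module wraps the DEFLATE algorithm mentioned above; the input string here is an arbitrary toy example:

```python
import zlib

# Highly redundant input compresses well under DEFLATE.
data = b"abcabcabc" * 100
compressed = zlib.compress(data, 9)          # zlib wraps the DEFLATE algorithm
print(len(data), "->", len(compressed))      # 900 -> a few dozen bytes
assert zlib.decompress(compressed) == data   # lossless: input fully recovered
```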
Guruswami–Sudan List Decoding Algorithm
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes in the presence of many errors. If a code has relative distance \delta, then it is possible in principle to recover an encoded message when up to a \delta / 2 fraction of the codeword symbols are corrupted. But when the error rate is greater than \delta / 2, this is not in general possible. List decoding overcomes that issue by allowing the decoder to output a short list of messages that might have been encoded, and it can correct more than a \delta / 2 fraction of errors. There are many polynomial-time algorithms for list decoding. In this article, we first present an algorithm for Reed–Solomon (RS) codes which corrects up to a 1 - \sqrt{2R} fraction of errors and is due to Madhu Sudan. Subsequently, we describe the improved Guruswami–Sudan list decoding algorithm, which can correct up to a 1 - \sqrt{R} fraction of errors. (A plot comparing the rate R and decoding radius of these algorithms appeared here; the sketch below computes the same quantities.)
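Here is a small hedged Python sketch comparing the decoding radii mentioned above as a function of the rate R (the function names are my own; the formulas are the standard ones for unique decoding, Sudan, and Guruswami–Sudan):

```python
import math

def unique_radius(R):
    """Classical unique decoding of an RS code of rate R: (1 - R) / 2."""
    return (1 - R) / 2

def sudan_radius(R):
    """Sudan's list-decoding algorithm: 1 - sqrt(2R) (useful for R < 1/3)."""
    return max(0.0, 1 - math.sqrt(2 * R))

def guruswami_sudan_radius(R):
    """Guruswami-Sudan list decoding: 1 - sqrt(R)."""
    return 1 - math.sqrt(R)

for R in (0.1, 0.25, 0.5):
    print(f"R={R}: unique={unique_radius(R):.3f}  "
          f"Sudan={sudan_radius(R):.3f}  GS={guruswami_sudan_radius(R):.3f}")
```

At every rate the Guruswami–Sudan radius dominates the other two, and Sudan's improvement over unique decoding disappears once R exceeds 1/3.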
Luca Trevisan
Luca Trevisan (21 July 1971 – 19 June 2024) was an Italian professor of computer science at Bocconi University in Milan. His research area was theoretical computer science, focusing on randomness, cryptography, probabilistically checkable proofs, approximation, property testing, spectral graph theory, and sublinear algorithms. He also ran a blog, ''in theory'', about theoretical computer science.

Education and career

Trevisan received his PhD from La Sapienza, Rome, under the supervision of Pierluigi Crescenzi. After postdoctoral studies at the Massachusetts Institute of Technology and DIMACS, he held an assistant professor position at Columbia University before moving to the University of California, Berkeley and then, in 2010, to Stanford. In 2014 he returned to Berkeley, and in 2019 he moved to the Department of Decision Sciences at Bocconi University.

Recognition

Trevisan won the Danny Lewin Best Student Paper Award at the 1997 Symposium on Theory of Computing.
Pseudorandom Generator
In theoretical computer science and cryptography, a pseudorandom generator (PRG) for a class of statistical tests is a deterministic procedure that maps a random seed to a longer pseudorandom string such that no statistical test in the class can distinguish between the output of the generator and the uniform distribution. The random seed itself is typically a short binary string drawn from the uniform distribution. Many different classes of statistical tests have been considered in the literature, among them the class of all Boolean circuits of a given size. It is not known whether good pseudorandom generators for this class exist, but it is known that their existence is in a certain sense equivalent to (unproven) circuit lower bounds in computational complexity theory. Hence the construction of pseudorandom generators for the class of Boolean circuits of a given size rests on currently unproven hardness assumptions.

Definition

Let \mathcal{A} = \{A \mid A : \{0,1\}^\ell \to \{0,1\}\} be a class of functions; these are the statistical tests the generator must fool. A function G : \{0,1\}^s \to \{0,1\}^\ell with s \le \ell is a pseudorandom generator against \mathcal{A} with bias \epsilon if, for every A \in \mathcal{A}, |\Pr[A(G(U_s)) = 1] - \Pr[A(U_\ell) = 1]| \le \epsilon, where U_k denotes the uniform distribution on \{0,1\}^k.
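To make the definition concrete, here is a hedged toy experiment in Python (the generator, the test, and all names are invented for illustration): a deliberately weak "generator" that outputs its seed twice, and a statistical test that checks whether the two halves agree, distinguishing its output from uniform with advantage close to 1.

```python
import random

ELL = 16  # output length; the seed has length ELL // 2

def bad_prg(seed_bits):
    """A deliberately weak 'generator': just output the seed twice."""
    return seed_bits + seed_bits

def halves_agree(bits):
    """Statistical test A: accept iff the two halves of the string agree."""
    half = len(bits) // 2
    return bits[:half] == bits[half:]

def advantage(trials=10_000):
    """Estimate |Pr[A(G(U_s)) = 1] - Pr[A(U_ell) = 1]| empirically."""
    rng = random.Random(0)
    hits_prg = sum(
        halves_agree(bad_prg([rng.randrange(2) for _ in range(ELL // 2)]))
        for _ in range(trials))
    hits_uniform = sum(
        halves_agree([rng.randrange(2) for _ in range(ELL)])
        for _ in range(trials))
    return abs(hits_prg - hits_uniform) / trials

print(advantage())  # about 1 - 2**-8: this test breaks bad_prg completely
```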
Extractor (mathematics)
An (N, M, D, K, \epsilon)-extractor is a bipartite graph with N nodes on the left and M nodes on the right such that each node on the left has D neighbors (on the right), with the added property that for any subset A of the left vertices of size at least K, the distribution on right vertices obtained by choosing a random node in A and then following a random edge to get a node x on the right side is \epsilon-close to the uniform distribution in terms of total variation distance. A disperser is a related graph. An equivalent way to view an extractor is as a bivariate function E : [N] \times [D] \rightarrow [M] in the natural way. With this view it turns out that the extractor property is equivalent to: for any source of randomness X that gives n bits with min-entropy \log K, the distribution E(X, U_D) is \epsilon-close to U_M, where U_T denotes the uniform distribution on [T].
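The equivalence can be checked empirically for toy parameters; the following hedged Python sketch (the map E and all parameters are invented, and this simple arithmetic map is not a real extractor construction) estimates the total variation distance of E(X, U_D) from uniform for a flat source X:

```python
import random
from collections import Counter

# Toy parameters and a toy map E : [N] x [D] -> [M]; this simple arithmetic
# map is NOT a real extractor construction, it just illustrates the check.
N, D, M = 64, 4, 8

def E(x, s):
    return (x + 7 * s) % M

def tv_from_uniform(A, samples=100_000):
    """Estimate the total variation distance of E(X, U_D) from U_M,
    where X is uniform on the subset A of left vertices."""
    rng = random.Random(0)
    counts = Counter(E(rng.choice(A), rng.randrange(D)) for _ in range(samples))
    return 0.5 * sum(abs(counts[m] / samples - 1 / M) for m in range(M))

A = list(range(32))        # a flat source with min-entropy log2(32) = 5
print(tv_from_uniform(A))  # small value: epsilon-close to uniform on this A
```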
Permanent (mathematics)
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix. Both are special cases of a more general function of a matrix called the immanant.

Definition

The permanent of an n \times n matrix A = (a_{i,j}) is defined as

\operatorname{perm}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^n a_{i,\sigma(i)}.

The sum here extends over all elements \sigma of the symmetric group S_n, i.e. over all permutations of the numbers 1, 2, ..., n. For example,

\operatorname{perm}\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad + bc,

and

\operatorname{perm}\begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh + ceg + bdi + afh.

The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account. The permanent of a matrix A is denoted per A, perm A, or Per A, sometimes with parentheses around the argument. Minc uses Per(A) for the permanent of rectangular matrices.
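A direct Python transcription of this definition (exponential-time, for illustration only; computing the permanent is #P-hard in general) is:

```python
from itertools import permutations
from math import prod

def permanent(A):
    """Permanent straight from the definition: like the Leibniz formula for
    the determinant, but with every permutation weighted +1 (no signatures)."""
    n = len(A)
    return sum(prod(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

print(permanent([[1, 2], [3, 4]]))  # ad + bc = 1*4 + 2*3 = 10
```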
One-way Function
In computer science, a one-way function is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial-time problems. This has nothing to do with whether the function is one-to-one; finding any one input with the desired image is considered a successful inversion. The existence of such one-way functions is still an open conjecture. Their existence would prove that the complexity classes P and NP are not equal, thus resolving the foremost unsolved question of theoretical computer science (Oded Goldreich (2001), ''Foundations of Cryptography: Volume 1, Basic Tools'', Cambridge University Press). The converse is not known to be true, i.e. the existence of a proof that P ≠ NP would not directly imply the existence of one-way functions.
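A standard candidate example (not from this excerpt, and not a proven one-way function, since their existence is open) is multiplication of two large primes: computing the product is fast, while recovering the factors is believed to be hard. A hedged Python sketch:

```python
def f(p, q):
    """The 'easy' direction: multiplying two primes is polynomial time."""
    return p * q

def invert_by_trial_division(n):
    """Naive inversion: time exponential in the bit length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

y = f(1_000_003, 1_000_033)         # instant
print(invert_by_trial_division(y))  # noticeably slower already; hopeless at
                                    # cryptographic sizes (e.g. 2048-bit n)
```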
Hard-core Predicate
In cryptography, a hard-core predicate of a one-way function ''f'' is a predicate ''b'' (i.e., a function whose output is a single bit) which is easy to compute (as a function of ''x'') but is hard to compute given ''f(x)''. In formal terms, there is no probabilistic polynomial-time (PPT) algorithm that computes ''b(x)'' from ''f(x)'' with probability significantly greater than one half over random choice of ''x'' (Goldwasser, S. and Bellare, M., "Lecture Notes on Cryptography", summer course on cryptography, MIT, 1996–2001). In other words, if ''x'' is drawn uniformly at random, then given ''f(x)'', any PPT adversary can only distinguish the hard-core bit ''b(x)'' from a uniformly random bit with negligible advantage over the length of ''x''. A hard-core function can be defined similarly: if ''x'' is chosen uniformly at random, then given ''f(x)'', any PPT algorithm can only distinguish the hard-core function value ''h(x)'' from uniformly random bits of length |''h(x)''| with negligible advantage.
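The classic example, due to the Goldreich–Levin theorem (a background fact, not stated in this excerpt), is the inner-product bit: for any one-way function f, the bit b(x, r) = ⟨x, r⟩ mod 2 is a hard-core predicate of g(x, r) = (f(x), r). A minimal Python sketch of the predicate itself:

```python
import secrets

def inner_product_bit(x: int, r: int) -> int:
    """<x, r> mod 2: parity of the bits of x selected by the mask r."""
    return bin(x & r).count("1") % 2

# Easy to compute from x and r, but by Goldreich-Levin it is unpredictable
# given only f(x) and r, for any one-way function f.
x, r = secrets.randbits(128), secrets.randbits(128)
print(inner_product_bit(x, r))
```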
Cryptography
Cryptography, or cryptology (from Greek ''kryptós'', "hidden, secret", and ''graphein'', "to write", or ''-logia'', "study", respectively), is the practice and study of techniques for secure communication in the presence of adversarial behavior. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, information security, electrical engineering, digital signal processing, physics, and others. Core concepts related to information security (data confidentiality, data integrity, authentication, and non-repudiation) are also central to cryptography. Practical applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
Analysis Of Algorithms
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst-case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of the broader field of computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm that solves a given computational problem.
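A hedged toy illustration in Python (the function and data are invented): counting comparisons for linear search shows how best, worst, and average cases differ on inputs of the same size n:

```python
def linear_search_steps(items, target):
    """Return the number of comparisons a linear search performs."""
    for steps, value in enumerate(items, start=1):
        if value == target:
            return steps
    return len(items)  # target absent: every element was inspected

data = list(range(100))
print(linear_search_steps(data, 0))   # best case: 1 comparison
print(linear_search_steps(data, 99))  # worst case: n = 100 comparisons
# Average case over all possible targets: about n/2 comparisons.
print(sum(linear_search_steps(data, t) for t in data) / len(data))  # 50.5
```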
Folded Reed–Solomon Code
In coding theory, folded Reed–Solomon codes are obtained from Reed–Solomon codes by bundling groups of m consecutive codeword symbols into single symbols over a larger alphabet. Folded Reed–Solomon codes are also a special case of Parvaresh–Vardy codes. Using optimal parameters, one can list-decode at rate R with a decoding radius approaching 1 − R. The term "folded Reed–Solomon codes" was coined in a paper by V. Y. Krachkovsky, with an algorithm that presented Reed–Solomon codes with many random "phased burst" errors. The list-decoding algorithm for folded RS codes corrects beyond the 1 - \sqrt{R} bound for Reed–Solomon codes achieved by the Guruswami–Sudan algorithm for such phased burst errors.

History

One of the ongoing challenges in coding theory is to have error-correcting codes achieve an optimal trade-off between rate and error-correction radius.
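The folding operation itself is easy to state in code; this hedged Python sketch (symbols and parameters invented) shows only the bundling map, not the list-decoding algorithm:

```python
def fold(codeword, m):
    """Bundle every m consecutive symbols into one symbol of a larger
    alphabet: a length-n word over F becomes a length-n/m word over F^m."""
    assert len(codeword) % m == 0, "codeword length must be divisible by m"
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

# A length-8 codeword (symbols shown abstractly as integers), folded with
# m = 2 into a length-4 word whose symbols are pairs.
c = [5, 1, 4, 1, 2, 9, 2, 6]
print(fold(c, 2))  # [(5, 1), (4, 1), (2, 9), (2, 6)]
```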