Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. They have many applications, the most prominent of which include consumer technologies such as MiniDiscs, CDs, DVDs, Blu-ray discs, and QR codes, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6.

Reed–Solomon codes operate on a block of data treated as a set of finite-field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t = n − k check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to t erroneous symbols, or locate and correct up to ⌊t/2⌋ erroneous symbols at unknown locations. As an erasure code, it can correct up to t erasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is up to the designer of the code and may be selected within wide limits.

There are two basic types of Reed–Solomon codes, original view and BCH view, with BCH view being the most common, as BCH view decoders are faster and require less working storage than original view decoders.


History

Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon, who were then staff members of MIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields". The original encoding scheme described in the Reed & Solomon article used a variable polynomial based on the message to be encoded, where only a fixed set of values (evaluation points) is known to the encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of k (unencoded message length) out of n (encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest cases. This was initially resolved by changing the original scheme to a BCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme and ones that use the BCH encoding scheme.

Also in 1960, a practical fixed-polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book Error Correcting Codes by W. Wesley Peterson (1961). By 1963 (or possibly earlier), J. J. Stone (and others) recognized that Reed–Solomon codes could use the BCH scheme of a fixed generator polynomial, making such codes a special class of BCH codes; however, Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not even cyclic codes.

In 1969, an improved BCH scheme decoder was developed by Elwyn Berlekamp and James Massey, and has since been known as the Berlekamp–Massey decoding algorithm. In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm.

In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc, where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented in digital storage devices and digital communication standards, though they are being slowly replaced by Bose–Chaudhuri–Hocquenghem (BCH) codes. For example, Reed–Solomon codes are used in the Digital Video Broadcasting (DVB) standard DVB-S, in conjunction with a convolutional inner code, but BCH codes are used with LDPC in its successor, DVB-S2.

In 1986, an original scheme decoder known as the Berlekamp–Welch algorithm was developed. In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders (see Guruswami–Sudan list decoding algorithm). In 2002, another original scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.


Applications


Data storage

Reed–Solomon coding is very widely used in mass storage systems to correct the burst errors associated with media defects. Reed–Solomon coding is a key component of the compact disc. It was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block.

The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts. DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code.

Reed–Solomon error correction is also used in parchive files which are commonly posted accompanying multimedia files on USENET. The distributed online storage service Wuala (discontinued in 2015) also used Reed–Solomon when breaking up files.


Bar code

Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, and Aztec Code use Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure. Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology.


Data transmission

Specialized forms of Reed–Solomon codes, specifically Cauchy-RS and Vandermonde-RS, can be used to overcome the unreliable nature of data transmission over erasure channels. The encoding process assumes a code of RS(N, K) which results in N codewords of length N symbols each storing K symbols of data, being generated, that are then sent over an erasure channel. Any combination of K codewords received at the other end is enough to reconstruct all of the N codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be less. Thus, N is usually 2K, meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent.

Reed–Solomon codes are also used in xDSL systems and CCSDS's Space Communications Protocol Specifications as a form of forward error correction.


Space transmission

One significant application of Reed–Solomon coding was to encode the digital pictures sent back by the Voyager program. Voyager introduced Reed–Solomon coding concatenated with convolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications. Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes. Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on the Mars Pathfinder, Galileo, Mars Exploration Rover and Cassini missions, where they perform within about 1–1.5 dB of the ultimate limit, the Shannon capacity. These concatenated codes are now being replaced by more powerful turbo codes.


Constructions (encoding)

The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size q, a block length n, and a message length k, with k < n ≤ q. The set of alphabet symbols is interpreted as the finite field of order q, and thus, q must be a prime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate is some constant, and furthermore, the block length is equal to or one less than the alphabet size, that is, n = q or n = q − 1.


Reed & Solomon's original view: The codeword as a sequence of values

There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords. In the original view of Reed & Solomon (1960), every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than k. In order to obtain a codeword of the Reed–Solomon code, the message symbols (each within the q-sized alphabet) are treated as the coefficients of a polynomial p of degree less than k, over the finite field F with q elements. In turn, the polynomial p is evaluated at n ≤ q distinct points a_1,\dots,a_n of the field F, and the sequence of values is the corresponding codeword. Common choices for a set of evaluation points include \{0, 1, 2, \dots, n-1\}, \{0, 1, \alpha, \alpha^2, \dots, \alpha^{n-2}\}, or for n < q, \{1, \alpha, \alpha^2, \dots, \alpha^{n-1}\}, where \alpha is a primitive element of F.

Formally, the set \mathbf{C} of codewords of the Reed–Solomon code is defined as follows:
: \mathbf{C} = \Big\{\, \big(p(a_1), p(a_2), \dots, p(a_n)\big) \;\Big|\; p \text{ is a polynomial over } F \text{ of degree } < k \,\Big\}\,.
Since any two distinct polynomials of degree less than k agree in at most k-1 points, this means that any two codewords of the Reed–Solomon code disagree in at least n - (k-1) = n-k+1 positions. Furthermore, there are two polynomials that do agree in k-1 points but are not equal, and thus, the distance of the Reed–Solomon code is exactly d=n-k+1. Then the relative distance is \delta = d/n = 1-k/n + 1/n = 1-R+1/n\sim 1-R, where R=k/n is the rate. This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, every code satisfies \delta+R\leq 1+1/n. Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class of maximum distance separable codes.

While the number of different polynomials of degree less than k and the number of different messages are both equal to q^k, and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon (1960) interprets the message x as the coefficients of the polynomial p, whereas subsequent constructions interpret the message as the values of the polynomial at the first k points a_1,\dots,a_k and obtain the polynomial p by interpolating these values with a polynomial of degree less than k. The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to a systematic code, that is, the original message is always contained as a subsequence of the codeword.


Simple encoding procedure: The message as a sequence of coefficients

In the original construction of Reed & Solomon (1960), the message x=(x_1,\dots,x_k)\in F^k is mapped to the polynomial p_x with
:p_x(a) = \sum_{i=1}^k x_i a^{i-1} \,.
The codeword of x is obtained by evaluating p_x at n different points a_1,\dots,a_n of the field F. Thus the classical encoding function C:F^k \to F^n for the Reed–Solomon code is defined as follows:
:C(x) = \big(p_x(a_1),\dots,p_x(a_n)\big)\,.
This function C is a linear mapping, that is, it satisfies C(x) = x^T \cdot A for the following (k\times n)-matrix A with elements from F:
:A=\begin{pmatrix} 1 & \dots & 1 & \dots & 1 \\ a_1 & \dots & a_k & \dots & a_n \\ a_1^2 & \dots & a_k^2 & \dots & a_n^2 \\ \vdots & & \vdots & & \vdots \\ a_1^{k-1} & \dots & a_k^{k-1} & \dots & a_n^{k-1} \end{pmatrix}
This matrix is the transpose of a Vandermonde matrix over F. In other words, the Reed–Solomon code is a linear code, and in the classical encoding procedure, its generator matrix is A.
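
As a concrete sketch of this procedure, the following plain MATLAB function (our own illustration, not part of the standard presentation; it assumes a prime field such as the GF(929) used in the worked example further below, so that ordinary integer arithmetic modulo q suffices) evaluates p_x at the n evaluation points with Horner's rule:

% Classical (non-systematic) original-view encoder over a prime field GF(q):
% the message symbols are the coefficients of p_x, and the codeword is the
% sequence of values p_x(a_1), ..., p_x(a_n).
function c = rs_encode_original(msg, pts, q)
    % msg: 1-by-k message symbols (x_1..x_k), pts: 1-by-n distinct points
    n = numel(pts);
    c = zeros(1, n);
    for j = 1:n
        v = 0;
        for i = numel(msg):-1:1    % Horner's rule: p_x(a) = sum x_i a^(i-1)
            v = mod(v * pts(j) + msg(i), q);
        end
        c(j) = v;
    end
end

For example, rs_encode_original([1 2 3], [1 3 9 27 81 243 729], 929) evaluates p_x(a) = 1 + 2a + 3a^2 at the first seven powers of α = 3 in GF(929).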


Systematic encoding procedure: The message as an initial sequence of values

There is an alternative encoding procedure that also produces the Reed–Solomon code, but that does so in a systematic way. Here, the mapping from the message x to the polynomial p_x works differently: the polynomial p_x is now defined as the unique polynomial of degree less than k such that
:p_x(a_i) = x_i \text{ holds for all } i\in\{1,\dots,k\}.
To compute this polynomial p_x from x, one can use Lagrange interpolation. Once it has been found, it is evaluated at the other points a_{k+1},\dots,a_n of the field. The alternative encoding function C:F^k \to F^n for the Reed–Solomon code is then again just the sequence of values:
:C(x) = \big(p_x(a_1),\dots,p_x(a_n)\big)\,.
Since the first k entries of each codeword C(x) coincide with x, this encoding procedure is indeed systematic. Since Lagrange interpolation is a linear transformation, C is a linear mapping. In fact, we have C(x) = x \cdot G, where G is obtained by multiplying A with the inverse of its leftmost (k\times k) submatrix:
:G= \begin{pmatrix} 1 & 0 & 0 & \dots & 0 & g_{1,k+1} & \dots & g_{1,n} \\ 0 & 1 & 0 & \dots & 0 & g_{2,k+1} & \dots & g_{2,n} \\ 0 & 0 & 1 & \dots & 0 & g_{3,k+1} & \dots & g_{3,n} \\ \vdots & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 0 & \dots & 0 & \dots & 1 & g_{k,k+1} & \dots & g_{k,n} \end{pmatrix}
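
A minimal sketch of this systematic procedure over a prime field (again our own illustration over GF(929); the powmod helper defined here is reused by the later sketches) interpolates through the k message values and evaluates the result at the remaining points:

% Systematic original-view encoder over a prime field GF(q): the first k
% codeword symbols are the message itself; the remaining n-k symbols are
% values of the Lagrange interpolating polynomial at the other points.
function c = rs_encode_systematic(msg, pts, q)
    k = numel(msg); n = numel(pts);
    c = zeros(1, n);
    c(1:k) = msg;
    for j = k+1:n
        v = 0;
        for i = 1:k
            num = 1; den = 1;      % Lagrange basis L_i evaluated at pts(j)
            for t = [1:i-1, i+1:k]
                num = mod(num * (pts(j) - pts(t)), q);
                den = mod(den * (pts(i) - pts(t)), q);
            end
            li = mod(num * powmod(den, q - 2, q), q);  % divide via Fermat
            v = mod(v + msg(i) * li, q);
        end
        c(j) = v;
    end
end

function r = powmod(b, e, q)
    % square-and-multiply modular exponentiation, q prime
    r = 1; b = mod(b, q);
    while e > 0
        if mod(e, 2) == 1, r = mod(r * b, q); end
        b = mod(b * b, q); e = floor(e / 2);
    end
end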


Discrete Fourier transform and its inverse

A discrete Fourier transform is essentially the same as the encoding procedure; it uses the encoding polynomial p(x) to map a set of evaluation points into the message values as shown above:
:C(x) = \big(p_x(a_1),\dots,p_x(a_n)\big)\,.
The inverse Fourier transform could be used to convert an error-free set of n < q message values back into the encoding polynomial of k coefficients, with the constraint that in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of \alpha:
:a_i = \alpha^{i-1}
:a_1, \dots, a_n = \{1, \alpha, \alpha^2, \dots, \alpha^{n-1}\}
However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error-free set of message values and is used for systematic encoding, and in one of the steps of the Gao decoder.


The BCH view: The codeword as a sequence of coefficients

In this view, the message is interpreted as the coefficients of a polynomial p(x). The sender computes a related polynomial s(x) of degree n-1, where n \le q-1, and sends the polynomial s(x). The polynomial s(x) is constructed by multiplying the message polynomial p(x), which has degree k-1, with a generator polynomial g(x) of degree n-k that is known to both the sender and the receiver. The generator polynomial g(x) is defined as the polynomial whose roots are sequential powers of the Galois field primitive \alpha:
: g(x) = (x-\alpha^{i})(x-\alpha^{i+1})\cdots(x-\alpha^{i+n-k-1}) = g_0 + g_1 x + \cdots + g_{n-k-1} x^{n-k-1} + x^{n-k}
For a "narrow sense code", i = 1. The set of codewords is then
:\mathbf{C} = \left\{\, (s_1, s_2, \dots, s_n) \;\Big|\; s(a) = \sum_{i=1}^{n} s_i a^{i-1} \text{ is a polynomial that has at least the roots } \alpha^1, \alpha^2, \dots, \alpha^{n-k} \,\right\}\,.
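
The generator polynomial can be built by repeated polynomial multiplication; a small sketch over a prime field (our own helper names, reusing powmod from above; conv implements polynomial multiplication when coefficients are stored highest-degree first):

% Build the narrow-sense generator polynomial g(x) over a prime field GF(q):
% g(x) = (x - alpha^1)(x - alpha^2) ... (x - alpha^(n-k)).
function g = rs_genpoly(n, k, alpha, q)
    g = 1;
    for j = 1:n-k
        root = powmod(alpha, j, q);
        g = mod(conv(g, [1, q - root]), q);   % multiply by (x - alpha^j)
    end
end

For the RS(7,3) code of the worked example below, rs_genpoly(7, 3, 3, 929) should return [1 809 723 568 522], i.e. g(x) = x^4 + 809x^3 + 723x^2 + 568x + 522.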


Systematic encoding procedure

The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield a systematic encoding procedure, in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sending s(x) = p(x) g(x), the encoder constructs the transmitted polynomial s(x) such that the coefficients of the k largest monomials are equal to the corresponding coefficients of p(x), and the lower-order coefficients of s(x) are chosen exactly in such a way that s(x) becomes divisible by g(x). Then the coefficients of p(x) are a subsequence of the coefficients of s(x). To get a code that is overall systematic, we construct the message polynomial p(x) by interpreting the message as the sequence of its coefficients.

Formally, the construction is done by multiplying p(x) by x^t to make room for the t=n-k check symbols, dividing that product by g(x) to find the remainder, and then compensating for that remainder by subtracting it. The t check symbols are created by computing the remainder s_r(x):
:s_r(x) = p(x)\cdot x^t \ \bmod \ g(x).
The remainder has degree at most t-1, whereas the coefficients of x^{t-1},x^{t-2},\dots,x^1,x^0 in the polynomial p(x)\cdot x^t are zero. Therefore, the following definition of the codeword s(x) has the property that the first k coefficients are identical to the coefficients of p(x):
:s(x) = p(x)\cdot x^t - s_r(x)\,.
As a result, the codewords s(x) are indeed elements of \mathbf{C}, that is, they are divisible by the generator polynomial g(x):
:s(x) \equiv p(x)\cdot x^t - s_r(x) \equiv s_r(x) - s_r(x) \equiv 0 \mod g(x)\,.
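
A sketch of this remainder computation via polynomial long division over a prime field (again our own illustration; coefficients are stored highest-degree first):

% Systematic BCH-view encoder over a prime field GF(q): compute
% s(x) = p(x) x^t - (p(x) x^t mod g(x)), so that g(x) divides s(x).
function s = rs_encode_bch(p, g, q)
    t = numel(g) - 1;                  % number of check symbols
    r = [p, zeros(1, t)];              % coefficients of p(x) * x^t
    for j = 1:numel(p)                 % long division by the monic g(x)
        f = r(j);
        if f ~= 0
            r(j:j+t) = mod(r(j:j+t) - f * g, q);
        end
    end
    % the last t entries of r now hold the remainder s_r(x); subtract it
    s = [p, mod(-r(end-t+1:end), q)];
end

With p = [3 2 1] (the worked example's p(x) = 3x^2 + 2x + 1) and the generator above, rs_encode_bch([3 2 1], [1 809 723 568 522], 929) should return [3 2 1 382 191 487 474], matching the codeword s(x) derived in the example below.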


Properties

The Reed–Solomon code is a [n, k, n − k + 1] code; in other words, it is a linear block code of length n (over F) with dimension k and minimum Hamming distance d_{\min}=n-k+1. The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size (n, k); this is known as the Singleton bound. Such a code is also called a maximum distance separable (MDS) code.

The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, by n - k, the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to \lfloor (n-k)/2 \rfloor erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" in demodulator signal-to-noise ratios); these are called erasures. A Reed–Solomon code (like any MDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation 2E + S ≤ n − k is satisfied, where E is the number of errors and S is the number of erasures in the block.

The theoretical error bound can be described via the following formula for the AWGN channel for FSK:
: P_b \approx \frac{2^{m-1}}{2^m-1} \frac{1}{n} \sum_{\ell=t+1}^n \ell \binom{n}{\ell} P_s^\ell (1-P_s)^{n-\ell}
and for other modulation schemes:
: P_b \approx \frac{1}{m} \frac{1}{n} \sum_{\ell=t+1}^n \ell \binom{n}{\ell} P_s^\ell (1-P_s)^{n-\ell}
where t = \frac{1}{2}(d_{\min}-1), P_s = 1-(1-s)^h, h = \frac{m}{\log_2 M}, s is the symbol error rate in the uncoded AWGN case and M is the modulation order.

For practical uses of Reed–Solomon codes, it is common to use a finite field F with 2^m elements. In this case, each symbol can be represented as an m-bit value. The sender sends the data points as encoded blocks, and the number of symbols in the encoded block is n = 2^m - 1. Thus a Reed–Solomon code operating on 8-bit symbols has n = 2^8 - 1 = 255 symbols per block. (This is a very popular value because of the prevalence of byte-oriented computer systems.) The number k, with k < n, of data symbols in the block is a design parameter. A commonly used code encodes k = 223 eight-bit data symbols plus 32 eight-bit parity symbols in an n = 255-symbol block; this is denoted as a (n, k) = (255,223) code, and is capable of correcting up to 16 symbol errors per block.

The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur in bursts. This is because it does not matter to the code how many bits in a symbol are in error; if multiple bits in a symbol are corrupted, it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single-bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code.

The Reed–Solomon code, like the convolutional code, is a transparent code. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened. The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding.

Whether the Reed–Solomon code is cyclic or not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, if \alpha is a primitive root of the field F, then by definition all non-zero elements of F take the form \alpha^i for i\in\{1,\dots,q-1\}, where q=|F|. Each polynomial p over F gives rise to a codeword (p(\alpha^1),\dots,p(\alpha^{q-1})). Since the function a\mapsto p(\alpha a) is also a polynomial of the same degree, this function gives rise to a codeword (p(\alpha^2),\dots,p(\alpha^{q})); since \alpha^{q}=\alpha^1 holds, this codeword is the cyclic left-shift of the original codeword derived from p. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon code cyclic. Reed–Solomon codes in the BCH view are always cyclic because BCH codes are cyclic.


Remarks

Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes. The Delsarte–Goethals–Seidel theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known as puncturing allows omitting some of the encoded parity symbols.


BCH view decoders

The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.


Peterson–Gorenstein–Zierler decoder

Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book ''Error Correcting Codes'' by W. Wesley Peterson (1961).


Formulation

The transmitted message, (c_0, \ldots, c_i, \ldots, c_{n-1}), is viewed as the coefficients of a polynomial s(x):
:s(x) = \sum_{i=0}^{n-1} c_i x^i
As a result of the Reed–Solomon encoding procedure, s(x) is divisible by the generator polynomial g(x):
:g(x) = \prod_{j=1}^{n-k} (x - \alpha^j),
where \alpha is a primitive element. Since s(x) is a multiple of the generator g(x), it follows that it "inherits" all its roots:
:s(x) \bmod (x-\alpha^j) = g(x) \bmod (x-\alpha^j) = 0
Therefore,
:s(\alpha^j) = 0, \ j=1,2,\ldots,n-k
The transmitted polynomial is corrupted in transit by an error polynomial e(x) to produce the received polynomial r(x):
:r(x) = s(x) + e(x)
:e(x) = \sum_{i=0}^{n-1} e_i x^i
Coefficient e_i will be zero if there is no error at that power of x and nonzero if there is an error. If there are \nu errors at distinct powers i_k of x, then
:e(x) = \sum_{k=1}^{\nu} e_{i_k} x^{i_k}
The goal of the decoder is to find the number of errors (\nu), the positions of the errors (i_k), and the error values at those positions (e_{i_k}). From those, e(x) can be calculated and subtracted from r(x) to get the originally sent message s(x).


Syndrome decoding

The decoder starts by evaluating the polynomial as received at points \alpha^1, \dots, \alpha^{n-k}. We call the results of that evaluation the "syndromes" S_j. They are defined as:
: \begin{align} S_j &= r(\alpha^j) = s(\alpha^j) + e(\alpha^j) = 0 + e(\alpha^j) = e(\alpha^j), \ j=1,2,\ldots,n-k \\ &= \sum_{k=1}^{\nu} e_{i_k} \left( \alpha^j \right)^{i_k} \end{align}
Note that s(\alpha^j) = 0 because s(x) has roots at \alpha^j, as shown in the previous section. The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error, and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
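
A small sketch of the syndrome computation over a prime field (our own helper, reusing powmod from the encoding sketches; the received polynomial is stored highest-degree first, as produced by rs_encode_bch above):

% Syndromes over a prime field GF(q): evaluate the received polynomial r(x)
% at alpha^1 .. alpha^(n-k); all zeros means r(x) is a valid codeword.
function S = rs_syndromes(r, nk, alpha, q)
    S = zeros(1, nk);
    for j = 1:nk
        a = powmod(alpha, j, q);
        v = 0;
        for i = 1:numel(r)             % Horner evaluation of r(x) at a
            v = mod(v * a + r(i), q);
        end
        S(j) = v;
    end
end

Applied to the corrupted codeword of the worked example below, rs_syndromes([3 2 123 456 191 487 474], 4, 3, 929) should reproduce the syndromes (732, 637, 762, 925).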


Error locators and error values

For convenience, define the error locators X_k and error values Y_k as:
: X_k = \alpha^{i_k}, \quad Y_k = e_{i_k}
Then the syndromes can be written in terms of these error locators and error values as
: S_j = \sum_{k=1}^{\nu} Y_k X_k^{j}
This definition of the syndrome values is equivalent to the previous since (\alpha^j)^{i_k} = \alpha^{j \cdot i_k} = (\alpha^{i_k})^j = X_k^j.

The syndromes give a system of n − k ≥ 2\nu equations in 2\nu unknowns, but that system of equations is nonlinear in the X_k and does not have an obvious solution. However, if the X_k were known (see below), then the syndrome equations provide a linear system of equations that can easily be solved for the Y_k error values:
:\begin{pmatrix} X_1^1 & X_2^1 & \cdots & X_\nu^1 \\ X_1^2 & X_2^2 & \cdots & X_\nu^2 \\ \vdots & \vdots & & \vdots \\ X_1^{n-k} & X_2^{n-k} & \cdots & X_\nu^{n-k} \end{pmatrix} \begin{pmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_\nu \end{pmatrix} = \begin{pmatrix} S_1 \\ S_2 \\ \vdots \\ S_{n-k} \end{pmatrix}
Consequently, the problem is finding the X_k, because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the Y_k.

In the variant of this algorithm where the locations of the errors are already known (when it is being used as an erasure code), this is the end. The error locations (X_k) are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to n-k errors can be corrected. The rest of the algorithm serves to locate the errors, and will require syndrome values up to 2\nu, instead of just the \nu used thus far. This is why twice as many error-correcting symbols need to be added as can be corrected without knowing their locations.


Error locator polynomial

There is a linear recurrence relation that gives rise to a system of linear equations. Solving those equations identifies those error locations X_k. Define the error locator polynomial Λ(x) as
:\Lambda(x) = \prod_{k=1}^{\nu} (1 - x X_k ) = 1 + \Lambda_1 x^1 + \Lambda_2 x^2 + \cdots + \Lambda_\nu x^\nu
The zeros of Λ(x) are the reciprocals X_k^{-1}. This follows from the above product notation construction, since if x=X_k^{-1}, then one of the multiplied terms will be zero, (1 - X_k^{-1} \cdot X_k) = 1 - 1 = 0, making the whole polynomial evaluate to zero:
: \Lambda(X_k^{-1}) = 0
Let j be any integer such that 1 \leq j \leq \nu. Multiply both sides by Y_k X_k^{j+\nu} and it will still be zero:
: \begin{align} & Y_k X_k^{j+\nu} \Lambda(X_k^{-1}) = 0. \\ & Y_k X_k^{j+\nu} \left(1 + \Lambda_1 X_k^{-1} + \Lambda_2 X_k^{-2} + \cdots + \Lambda_\nu X_k^{-\nu}\right) = 0. \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu} X_k^{-1} + \Lambda_2 Y_k X_k^{j+\nu} X_k^{-2} + \cdots + \Lambda_\nu Y_k X_k^{j+\nu} X_k^{-\nu} = 0. \\ & Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_\nu Y_k X_k^{j} = 0. \end{align}
Sum for k = 1 to \nu and it will still be zero:
:\sum_{k=1}^{\nu} \left( Y_k X_k^{j+\nu} + \Lambda_1 Y_k X_k^{j+\nu-1} + \Lambda_2 Y_k X_k^{j+\nu-2} + \cdots + \Lambda_\nu Y_k X_k^{j} \right) = 0
Collect each term into its own sum:
:\left(\sum_{k=1}^{\nu} Y_k X_k^{j+\nu} \right) + \left(\sum_{k=1}^{\nu} \Lambda_1 Y_k X_k^{j+\nu-1}\right) + \left(\sum_{k=1}^{\nu} \Lambda_2 Y_k X_k^{j+\nu-2}\right) + \cdots + \left(\sum_{k=1}^{\nu} \Lambda_\nu Y_k X_k^{j} \right) = 0
Extract the constant values of \Lambda that are unaffected by the summation:
:\left(\sum_{k=1}^{\nu} Y_k X_k^{j+\nu} \right) + \Lambda_1 \left(\sum_{k=1}^{\nu} Y_k X_k^{j+\nu-1}\right) + \Lambda_2 \left(\sum_{k=1}^{\nu} Y_k X_k^{j+\nu-2}\right) + \cdots + \Lambda_\nu \left(\sum_{k=1}^{\nu} Y_k X_k^{j} \right) = 0
These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces to
: S_{j+\nu} + \Lambda_1 S_{j+\nu-1} + \cdots + \Lambda_{\nu-1} S_{j+1} + \Lambda_{\nu} S_j = 0 \,
Subtracting S_{j+\nu} from both sides yields
: S_j \Lambda_{\nu} + S_{j+1}\Lambda_{\nu-1} + \cdots + S_{j+\nu-1} \Lambda_1 = - S_{j+\nu} \
Recall that j was chosen to be any integer between 1 and \nu inclusive, and this equivalence is true for any and all such values. Therefore, we have \nu linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λ_i of the error location polynomial:
:\begin{pmatrix} S_1 & S_2 & \cdots & S_{\nu} \\ S_2 & S_3 & \cdots & S_{\nu+1} \\ \vdots & \vdots & & \vdots \\ S_{\nu} & S_{\nu+1} & \cdots & S_{2\nu-1} \end{pmatrix} \begin{pmatrix} \Lambda_{\nu} \\ \Lambda_{\nu-1} \\ \vdots \\ \Lambda_1 \end{pmatrix} = \begin{pmatrix} - S_{\nu+1} \\ - S_{\nu+2} \\ \vdots \\ - S_{2\nu} \end{pmatrix}
The above assumes the decoder knows the number of errors \nu, but that number has not been determined yet. The PGZ decoder does not determine \nu directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial \nu and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial \nu is reduced by one and the next smaller system is examined.
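
The PGZ search can be sketched directly from this system; the helpers below (our own illustration for a prime field, with a small Gauss–Jordan routine solve_mod that is also reused by the error-value sketch further down) try decreasing trial values of \nu until the syndrome matrix is invertible:

% PGZ locator step over a prime field GF(q): solve the nu-by-nu syndrome
% system for (Lambda_nu, ..., Lambda_1), trying nu = floor(t/2), ..., 1.
function [lambda, nu] = pgz_locator(S, q)
    for nu = floor(numel(S)/2):-1:1
        M = zeros(nu); b = zeros(nu, 1);
        for r = 1:nu
            M(r, :) = S(r:r+nu-1);
            b(r) = mod(-S(r+nu), q);
        end
        x = solve_mod(M, b, q);        % [] if the matrix is singular mod q
        if ~isempty(x)
            lambda = flipud(x).';      % return as (Lambda_1, ..., Lambda_nu)
            return
        end
    end
    lambda = []; nu = 0;               % more than t/2 errors: decoding fails
end

function x = solve_mod(M, b, q)
    % Gauss-Jordan elimination over GF(q), q prime
    n = size(M, 1); A = mod([M, b], q);
    for c = 1:n
        piv = find(A(c:end, c) ~= 0, 1) + c - 1;
        if isempty(piv), x = []; return; end
        A([c piv], :) = A([piv c], :);
        A(c, :) = mod(A(c, :) * powmod(A(c, c), q - 2, q), q);
        for r = [1:c-1, c+1:n]
            A(r, :) = mod(A(r, :) - A(r, c) * A(c, :), q);
        end
    end
    x = A(:, end);
end

On the syndromes of the worked example, pgz_locator([732 637 762 925], 929) should find \nu = 2 with \Lambda_1 = 821 and \Lambda_2 = 329.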


Find the roots of the error locator polynomial

Use the coefficients Λ_i found in the last step to build the error location polynomial. The roots of the error location polynomial can be found by exhaustive search. The error locators X_k are the reciprocals of those roots. The order of coefficients of the error location polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators X_k (not their reciprocals X_k^{-1}). Chien search is an efficient implementation of this step.
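
A brute-force version of this search over a prime field (our own sketch, reusing powmod; a Chien search would evaluate \Lambda incrementally instead of from scratch at every element):

% Exhaustive root search for Lambda(x) over a prime field GF(q); the error
% locators are the reciprocals of the roots, and the error positions i_k
% are the discrete logs of the locators to base alpha.
function [locs, Xk] = rs_find_errors(lambda, alpha, q)
    % lambda holds (Lambda_1, ..., Lambda_nu), lowest order first
    locs = []; Xk = [];
    for v = 1:q-1
        s = 1; vp = 1;                 % evaluate 1 + sum lambda(i) v^i
        for i = 1:numel(lambda)
            vp = mod(vp * v, q);
            s = mod(s + lambda(i) * vp, q);
        end
        if s == 0
            x = powmod(v, q - 2, q);   % locator = reciprocal of the root
            Xk(end+1) = x;
            locs(end+1) = dlog(x, alpha, q);
        end
    end
end

function e = dlog(x, alpha, q)
    % discrete log by exhaustive search (real decoders use a lookup table)
    a = 1;
    for e = 0:q-2
        if a == x, return; end
        a = mod(a * alpha, q);
    end
    e = -1;                            % not reached when alpha is primitive
end

For the worked example's \Lambda(x) = 1 + 821x + 329x^2, this should report the locators 27 = 3^3 and 81 = 3^4 (in some order), i.e. error positions 3 and 4.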


Calculate the error values

Once the error locators X_k are known, the error values can be determined. This can be done by direct solution for Y_k in the error equations matrix given above, or using the Forney algorithm.
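
The direct solution is the same kind of small linear system as before; a sketch reusing solve_mod and powmod from above (our own names):

% Direct computation of the error values over a prime field GF(q):
% solve S_j = sum_k Y_k X_k^j for j = 1..nu.
function Y = rs_error_values(S, Xk, q)
    nu = numel(Xk);
    M = zeros(nu); b = zeros(nu, 1);
    for j = 1:nu
        for k = 1:nu
            M(j, k) = powmod(Xk(k), j, q);
        end
        b(j) = S(j);
    end
    Y = solve_mod(M, b, q).';
end

With S = (732, 637) and X = (27, 81) from the worked example, this should return Y = (74, 122), i.e. error value 074 at position 3 and 122 at position 4.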


Calculate the error locations

Calculate i_k by taking the log base \alpha of X_k. This is generally done using a precomputed lookup table.


Fix the errors

Finally, e(x) is generated from i_k and e_{i_k} and then is subtracted from r(x) to get the originally sent message s(x), with errors corrected.


Example

Consider the Reed–Solomon code defined in GF(929) with \alpha = 3 and t = n - k = 4 (this is used in PDF417 barcodes) for a RS(7,3) code. The generator polynomial is
:g(x) = (x-3)(x-3^2)(x-3^3)(x-3^4) = x^4+809 x^3+723 x^2+568 x+522
If the message polynomial is p(x) = 3 x^2 + 2 x + 1, then a systematic codeword is encoded as follows:
:s_r(x) = p(x) \, x^t \bmod g(x) = 547 x^3 + 738 x^2 + 442 x + 455
:s(x) = p(x) \, x^t - s_r(x) = 3 x^6 + 2 x^5 + 1 x^4 + 382 x^3 + 191 x^2 + 487 x + 474
Errors in transmission might cause this to be received instead:
:r(x) = s(x) + e(x) = 3 x^6 + 2 x^5 + 123 x^4 + 456 x^3 + 191 x^2 + 487 x + 474
The syndromes are calculated by evaluating r at powers of \alpha:
:S_1 = r(3^1) = 3\cdot 3^6 + 2\cdot 3^5 + 123\cdot 3^4 + 456\cdot 3^3 + 191\cdot 3^2 + 487\cdot 3 + 474 = 732
:S_2 = r(3^2) = 637,\;S_3 = r(3^3) = 762,\;S_4 = r(3^4) = 925
The linear system for the error locator coefficients is
:\begin{pmatrix} 732 & 637 \\ 637 & 762 \end{pmatrix} \begin{pmatrix} \Lambda_2 \\ \Lambda_1 \end{pmatrix} = \begin{pmatrix} -762 \\ -925 \end{pmatrix} = \begin{pmatrix} 167 \\ 004 \end{pmatrix}
Using Gaussian elimination:
:\begin{pmatrix} 001 & 000 \\ 000 & 001 \end{pmatrix} \begin{pmatrix} \Lambda_2 \\ \Lambda_1 \end{pmatrix} = \begin{pmatrix} 329 \\ 821 \end{pmatrix}
so Λ(x) = 329 x^2 + 821 x + 001, with roots x_1 = 757 = 3^{-3} and x_2 = 562 = 3^{-4}.

The coefficients can be reversed to produce roots with positive exponents, but typically this isn't used:
:R(x) = 001 x^2 + 821 x + 329, with roots 27 = 3^3 and 81 = 3^4
with the log of the roots corresponding to the error locations (right to left, location 0 is the last term in the codeword).

To calculate the error values, apply the Forney algorithm:
:Ω(x) = S(x) Λ(x) \bmod x^4 = 546 x + 732
:Λ'(x) = 658 x + 821
:e_1 = -Ω(x_1)/Λ'(x_1) = 074
:e_2 = -Ω(x_2)/Λ'(x_2) = 122
Subtracting e_1 x^3 + e_2 x^4 = 74 x^3 + 122 x^4 from the received polynomial r(x) reproduces the original codeword s.


Berlekamp–Massey decoder

The Berlekamp–Massey algorithm is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errors e:
: \Delta = S_{i} + \Lambda_1 \, S_{i-1} + \cdots + \Lambda_e \, S_{i-e}
and then adjusts Λ(x) and e so that a recalculated Δ would be zero. The article Berlekamp–Massey algorithm has a detailed description of the procedure. In the following example, C(x) is used to represent Λ(x).
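
A compact sketch of the procedure over a prime field (our own transcription of the standard algorithm, reusing powmod; C(x) is stored lowest-degree first):

% Berlekamp-Massey over a prime field GF(q): returns the connection
% polynomial C(x) = 1 + C_1 x + ... + C_L x^L, playing the role of Lambda(x).
function C = berlekamp_massey(S, q)
    n = numel(S);
    C = [1, zeros(1, n)]; B = C;       % current and previous polynomials
    L = 0; m = 1; b = 1;
    for i = 0:n-1
        d = S(i+1);                    % discrepancy for this iteration
        for j = 1:L
            d = mod(d + C(j+1) * S(i+1-j), q);
        end
        if d == 0
            m = m + 1;
        else
            coef = mod(d * powmod(b, q - 2, q), q);
            T = C;
            C = mod(C - coef * [zeros(1, m), B(1:end-m)], q);
            if 2*L <= i                % length change: update B, L, b
                L = i + 1 - L; B = T; b = d; m = 1;
            else
                m = m + 1;
            end
        end
    end
    C = C(1:L+1);
end

On the syndromes of the worked example, berlekamp_massey([732 637 762 925], 929) should return [1 821 329], i.e. Λ(x) = 329 x^2 + 821 x + 001.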


Example

Using the same data as the Peterson–Gorenstein–Zierler example above, the final value of C(x) is the error locator polynomial, Λ(x) = 329 x^2 + 821 x + 001.


Euclidean decoder

Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of the extended Euclidean algorithm. Define S(x), Λ(x), and Ω(x) for t syndromes and e errors:
: S(x) = S_t x^{t-1} + S_{t-1} x^{t-2} + \cdots + S_2 x + S_1
: \Lambda(x) = \Lambda_e x^e + \Lambda_{e-1} x^{e-1} + \cdots + \Lambda_1 x + 1
: \Omega(x) = \Omega_{e-1} x^{e-1} + \Omega_{e-2} x^{e-2} + \cdots + \Omega_1 x + \Omega_0
The key equation is:
: \Lambda(x) S(x) = Q(x) x^{t} + \Omega(x)
For t = 6 and e = 3:
:\begin{matrix} \Lambda_3 S_6 & x^8 \\ \Lambda_2 S_6 + \Lambda_3 S_5 & x^7 \\ \Lambda_1 S_6 + \Lambda_2 S_5 + \Lambda_3 S_4 & x^6 \\ S_6 + \Lambda_1 S_5 + \Lambda_2 S_4 + \Lambda_3 S_3 & x^5 \\ S_5 + \Lambda_1 S_4 + \Lambda_2 S_3 + \Lambda_3 S_2 & x^4 \\ S_4 + \Lambda_1 S_3 + \Lambda_2 S_2 + \Lambda_3 S_1 & x^3 \\ S_3 + \Lambda_1 S_2 + \Lambda_2 S_1 & x^2 \\ S_2 + \Lambda_1 S_1 & x \\ S_1 & \end{matrix} = \begin{matrix} Q_2 x^8 \\ Q_1 x^7 \\ Q_0 x^6 \\ 0 \\ 0 \\ 0 \\ \Omega_2 x^2 \\ \Omega_1 x \\ \Omega_0 \end{matrix}
The middle terms are zero due to the relationship between Λ and the syndromes. The extended Euclidean algorithm can find a series of polynomials of the form
: A_i(x) S(x) + B_i(x) x^{t} = R_i(x)
where the degree of R decreases as i increases. Once the degree of R_i(x) < t/2, then
: A_i(x) = Λ(x), \quad B_i(x) = -Q(x), \quad R_i(x) = Ω(x).
B(x) and Q(x) don't need to be saved, so the algorithm becomes:
:R_{-1} = x^t
:R_0 = S(x)
:A_{-1} = 0
:A_0 = 1
:i = 0
:while degree of R_i ≥ t/2
::i = i + 1
::Q = R_{i-2} / R_{i-1}
::R_i = R_{i-2} - Q R_{i-1}
::A_i = A_{i-2} - Q A_{i-1}
To set the low-order term of Λ(x) to 1, divide Λ(x) and Ω(x) by A_i(0):
:Λ(x) = A_i / A_i(0)
:Ω(x) = R_i / A_i(0)
A_i(0) is the constant (low order) term of A_i.
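
A sketch of this loop over a prime field (our own helper names; polynomials are stored highest-degree first, and powmod is reused from above):

% Sugiyama's extended-Euclidean decoder step over a prime field GF(q):
% iterate on (x^t, S(x)) until deg R < t/2, tracking A_i so that
% A_i(x) S(x) + B_i(x) x^t = R_i(x); then normalize by A_i(0).
function [lambda, omega] = rs_euclid(S, q)
    t = numel(S);
    Rp = [1, zeros(1, t)];             % R_{-1} = x^t
    R  = S(end:-1:1);                  % R_0 = S(x), highest degree first
    Ap = 0; A = 1;
    while pdeg(R) >= t/2
        [Q, Rn] = pdivmod(Rp, R, q);
        An = psub(Ap, mod(conv(Q, A), q), q);
        Rp = R; R = Rn; Ap = A; A = An;
    end
    inv0 = powmod(A(end), q - 2, q);   % A_i(0) is the constant term
    lambda = mod(A * inv0, q);         % low-order term of Lambda becomes 1
    omega  = mod(R * inv0, q);
end

function d = pdeg(p)
    i = find(p ~= 0, 1);
    if isempty(i), d = -1; else, d = numel(p) - i; end
end

function r = psub(a, b, q)
    n = max(numel(a), numel(b));
    r = mod([zeros(1, n-numel(a)), a] - [zeros(1, n-numel(b)), b], q);
end

function [Q, R] = pdivmod(a, b, q)
    % polynomial long division a = Q b + R over GF(q)
    b = b(find(b ~= 0, 1):end);        % trim leading zeros of the divisor
    ilead = powmod(b(1), q - 2, q);
    Q = zeros(1, max(numel(a) - numel(b) + 1, 1));
    R = a;
    while pdeg(R) >= pdeg(b)
        R = R(find(R ~= 0, 1):end);    % trim R so R(1) is its leading term
        c = mod(R(1) * ilead, q);
        Q(end - (numel(R) - numel(b))) = c;
        R = psub(R, [mod(c * b, q), zeros(1, numel(R) - numel(b))], q);
    end
end

On the worked example's syndromes, rs_euclid([732 637 762 925], 929) should yield Λ(x) = 329 x^2 + 821 x + 001 and Ω(x) = 546 x + 732, matching the values below.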


Example

Using the same data as the Peterson–Gorenstein–Zierler example above:
:Λ(x) = A_2 / 544 = 329 x^2 + 821 x + 001
:Ω(x) = R_2 / 544 = 546 x + 732


Decoder using discrete Fourier transform

A discrete Fourier transform can be used for decoding. To avoid conflict with syndrome names, let c(x) = s(x) denote the encoded codeword. r(x) and e(x) are the same as above. Define C(x), E(x), and R(x) as the discrete Fourier transforms of c(x), e(x), and r(x). Since r(x) = c(x) + e(x), and since a discrete Fourier transform is a linear operator, R(x) = C(x) + E(x).

Transform r(x) to R(x) using the discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, t coefficients of R(x) and E(x) are the same as the syndromes:
:R_j = E_j = S_j = r(\alpha^j), \quad \text{for } 1 \le j \le t
Use R_1 through R_t as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders. Let v be the number of errors. Generate E(x) using the known coefficients E_1 to E_t, the error locator polynomial, and these formulas:
:E_0 = - \frac{1}{\Lambda_v}\left(E_{v} + \Lambda_1 E_{v-1} + \cdots + \Lambda_{v-1} E_{1}\right)
:E_j = -(\Lambda_1 E_{j-1} + \Lambda_2 E_{j-2} + \cdots + \Lambda_v E_{j-v}), \quad \text{for } t < j < n
Then calculate C(x) = R(x) − E(x) and take the inverse transform (polynomial interpolation) of C(x) to produce c(x).


Decoding beyond the error-correction bound

The
Singleton bound In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code C with block length n, size M and minimum distance d. It is also known as the Joshibound. proved b ...
states that the minimum distance ''d'' of a linear block code of size (''n'',''k'') is upper-bounded by ''n'' − ''k'' + 1. The distance ''d'' was usually understood to limit the error-correction capability to ⌊(d-1) / 2⌋. The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊(n-k) / 2⌋ errors. However, this error-correction bound is not exact. In 1999,
Madhu Sudan Madhu Sudan (born 12 September 1966) is an Indian-American computer scientist. He has been a Gordon McKay Professor of Computer Science at the Harvard John A. Paulson School of Engineering and Applied Sciences since 2015. Career He received ...
and
Venkatesan Guruswami Venkatesan Guruswami (born 1976) is a senior scientist at the Simons Institute for the Theory of Computing and Professor of EECS and Mathematics at the University of California, Berkeley. He did his high schooling at Padma Seshadri Bala Bhavan ...
at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes", introducing an algorithm that allows the correction of errors beyond half the minimum distance of the code. It applies to Reed–Solomon codes and, more generally, to
algebraic geometric code In mathematics, an algebraic geometric code (AG-code), otherwise known as a Goppa code, is a general type of linear code constructed by using an algebraic curve X over a finite field \mathbb_q. Such codes were introduced by Valerii Denisovich Gop ...
s. This algorithm produces a list of codewords (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over GF(2^m) and its extensions.


Soft-decoding

The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. A soft-decision decoder, by contrast, can associate with each symbol an additional value corresponding to the channel
demodulator Demodulation is extracting the original information-bearing signal from a carrier wave. A demodulator is an electronic circuit (or computer program in a software-defined radio) that is used to recover the information content from the modulated ...
's confidence in the correctness of the symbol. The advent of
LDPC In information theory, a low-density parity-check (LDPC) code is a linear error correcting code, a method of transmitting a message over a noisy transmission channel. An LDPC code is constructed using a sparse Tanner graph (subclass of the bip ...
and
turbo code In information theory, turbo codes (originally in French ''Turbocodes'') are a class of high-performance forward error correction (FEC) codes developed around 1990–91, but first published in 1993. They were the first practical codes to closely ...
s, which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami. In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.


Matlab example


Encoder

Here we present a simple Matlab implementation for an encoder.

function encoded = rsEncoder(msg, m, prim_poly, n, k)
    % RSENCODER Encode message with the Reed-Solomon algorithm
    % m is the number of bits per symbol
    % prim_poly: Primitive polynomial p(x). Ie for DM is 301
    % k is the size of the message
    % n is the total size (k + redundant)
    % Example: msg = uint8('Test')
    % enc_msg = rsEncoder(msg, 8, 301, 12, numel(msg));

    % Get the alpha
    alpha = gf(2, m, prim_poly);

    % Get the Reed-Solomon generating polynomial g(x)
    g_x = genpoly(k, n, alpha);

    % Multiply the information by X^(n-k), or just pad with zeros at the end
    % to get space to add the redundant information
    msg_padded = gf([msg zeros(1, n - k)], m, prim_poly);

    % Get the remainder of the division of the extended message by the
    % Reed-Solomon generating polynomial g(x)
    [~, remainder] = deconv(msg_padded, g_x);

    % Now return the message with the redundant information
    encoded = msg_padded - remainder;
end

% Find the Reed-Solomon generating polynomial g(x), by the way this is the
% same as the rsgenpoly function on matlab
function g = genpoly(k, n, alpha)
    g = 1;
    % A multiplication on the galois field is just a convolution
    for k = mod(1 : n - k, n)
        g = conv(g, [1 alpha .^ (k)]);
    end
end


Decoder

Now the decoding part:

function [decoded, error_pos, error_mag, g, S] = rsDecoder(encoded, m, prim_poly, n, k)
    % RSDECODER Decode a Reed-Solomon encoded message
    % Example:
    % [dec, ~, ~, ~, ~] = rsDecoder(enc_msg, 8, 301, 12, numel(msg))
    max_errors = floor((n - k) / 2);
    orig_vals = encoded.x;
    % Initialize the error vector
    errors = zeros(1, n);
    g = [];
    S = [];

    % Get the alpha
    alpha = gf(2, m, prim_poly);

    % Find the syndromes (check that dividing the message by the generator
    % polynomial leaves a zero remainder)
    Synd = polyval(encoded, alpha .^ (1:n - k));
    Syndromes = trim(Synd);

    % If all syndromes are zero (perfectly divisible) there are no errors
    if isempty(Syndromes.x)
        decoded = orig_vals(1:k);
        error_pos = [];
        error_mag = [];
        g = [];
        S = Synd;
        return;
    end

    % Prepare for the euclidean algorithm (used to find the error-locating
    % polynomials)
    r0 = [1, zeros(1, 2 * max_errors)];
    r0 = gf(r0, m, prim_poly);
    r0 = trim(r0);
    size_r0 = length(r0);
    r1 = Syndromes;
    f0 = gf([zeros(1, size_r0 - 1) 1], m, prim_poly);
    f1 = gf(zeros(1, size_r0), m, prim_poly);
    g0 = f1;
    g1 = f0;

    % Run the euclidean algorithm on the polynomials r0(x) and Syndromes(x)
    % in order to find the error-locating polynomial
    while true
        % Do a long division
        [quotient, remainder] = deconv(r0, r1);
        % Add some zeros
        quotient = pad(quotient, length(g1));

        % Find quotient*g1 and pad
        c = conv(quotient, g1);
        c = trim(c);
        c = pad(c, length(g0));

        % Update g as g0 - quotient*g1
        g = g0 - c;

        % Check if the degree of remainder(x) is less than max_errors
        if all(remainder(1:end - max_errors) == 0)
            break;
        end

        % Update r0, r1, g0, g1 and remove leading zeros
        r0 = trim(r1);
        r1 = trim(remainder);
        g0 = g1;
        g1 = g;
    end

    % Remove leading zeros
    g = trim(g);

    % Find the zeros of the error polynomial on this galois field
    evalPoly = polyval(g, alpha .^ (n - 1 : -1 : 0));
    error_pos = gf(find(evalPoly == 0), m);

    % If no error position is found, return the received word as-is, since
    % there is nothing more we can do
    if isempty(error_pos)
        decoded = orig_vals(1:k);
        error_mag = [];
        return;
    end

    % Prepare a linear system to solve the error polynomial and find the
    % error magnitudes
    size_error = length(error_pos);
    Syndrome_Vals = Syndromes.x;
    b(:, 1) = Syndrome_Vals(1:size_error);
    for idx = 1 : size_error
        e = alpha .^ (idx * (n - error_pos.x));
        err = e.x;
        er(idx, :) = err;
    end

    % Solve the linear system
    error_mag = (gf(er, m, prim_poly) \ gf(b, m, prim_poly))';
    % Put the error magnitudes on the error vector
    errors(error_pos.x) = error_mag.x;
    % Bring this vector to the galois field
    errors_gf = gf(errors, m, prim_poly);

    % Now to fix the errors just add with the encoded code
    decoded_gf = encoded(1:k) + errors_gf(1:k);
    decoded = decoded_gf.x;
end

% Remove leading zeros from Galois array
function gt = trim(g)
    gx = g.x;
    gt = gf(gx(find(gx, 1) : end), g.m, g.prim_poly);
end

% Add leading zeros
function xpad = pad(x, k)
    len = length(x);
    if (len < k)
        xpad = [zeros(1, k - len) x];
    else
        xpad = x;
    end
end


Reed–Solomon original view decoders

The decoders described in this section use the original Reed–Solomon view of a codeword as a sequence of polynomial values, where the polynomial is based on the message to be encoded. The same set of fixed values is used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error-locating polynomial) from the received message.


Theoretical decoder

Reed and Solomon's 1960 paper described a theoretical decoder that corrected errors by finding the most popular message polynomial. The decoder only knows the set of values a_1 to a_n and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of the n codeword values, taken k at a time, to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, any errors in the codeword can be corrected by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the
binomial coefficient In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers and is written \tbinom. It is the coefficient of the te ...
\textstyle \binom{n}{k} = \frac{n!}{(n-k)! \, k!}, which is infeasible for even modest codes. For a (255,249) code that can correct 3 errors, the naive theoretical decoder would examine 359 billion subsets.
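The count is easy to verify (Python, for illustration):

from math import comb
# an RS(255,249) code corrects 3 errors; a naive decoder would interpolate
# over every subset of k = 249 of the n = 255 received values
print(comb(255, 249))  # 359895314625, roughly 359 billion subsets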


Berlekamp–Welch decoder

In 1986, a decoder known as the Berlekamp–Welch algorithm was developed; it recovers the original message polynomial, as well as an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity O(n^3), where n is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message.


Example

Using RS(7,3), GF(929), and the set of evaluation points ''a''''i'' = ''i'' − 1:

: a = \{0, 1, 2, 3, 4, 5, 6\}

If the message polynomial is

: p(x) = 003 x^2 + 002 x + 001

the codeword is

: c = \{001, 006, 017, 034, 057, 086, 121\}.

Errors in transmission might cause this to be received instead:

: b = c + e = \{001, 006, 123, 456, 057, 086, 121\}.

The key equations are:

: b_i E(a_i) - Q(a_i) = 0

Assume the maximum number of errors: ''e'' = 2. The key equations become:

: b_i(e_0 + e_1 a_i) - (q_0 + q_1 a_i + q_2 a_i^2 + q_3 a_i^3 + q_4 a_i^4) = - b_i a_i^2

: \begin{bmatrix} 001 & 000 & 928 & 000 & 000 & 000 & 000 \\ 006 & 006 & 928 & 928 & 928 & 928 & 928 \\ 123 & 246 & 928 & 927 & 925 & 921 & 913 \\ 456 & 439 & 928 & 926 & 920 & 902 & 848 \\ 057 & 228 & 928 & 925 & 913 & 865 & 673 \\ 086 & 430 & 928 & 924 & 904 & 804 & 304 \\ 121 & 726 & 928 & 923 & 893 & 713 & 562 \end{bmatrix} \begin{bmatrix} e_0 \\ e_1 \\ q_0 \\ q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix} = \begin{bmatrix} 000 \\ 923 \\ 437 \\ 541 \\ 017 \\ 637 \\ 289 \end{bmatrix}

Using
Gaussian elimination In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of operations performed on the corresponding matrix of coefficients. This method can also be used ...
: \begin{bmatrix} 001 & 000 & 000 & 000 & 000 & 000 & 000 \\ 000 & 001 & 000 & 000 & 000 & 000 & 000 \\ 000 & 000 & 001 & 000 & 000 & 000 & 000 \\ 000 & 000 & 000 & 001 & 000 & 000 & 000 \\ 000 & 000 & 000 & 000 & 001 & 000 & 000 \\ 000 & 000 & 000 & 000 & 000 & 001 & 000 \\ 000 & 000 & 000 & 000 & 000 & 000 & 001 \end{bmatrix} \begin{bmatrix} e_0 \\ e_1 \\ q_0 \\ q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix} = \begin{bmatrix} 006 \\ 924 \\ 006 \\ 007 \\ 009 \\ 916 \\ 003 \end{bmatrix}

: Q(x) = 003 x^4 + 916 x^3 + 009 x^2 + 007 x + 006
: E(x) = 001 x^2 + 924 x + 006
: Q(x) / E(x) = P(x) = 003 x^2 + 002 x + 001

Recalculate ''P''(''x'') at the roots of ''E''(''x''), i.e. \{2, 3\}, to correct the received word, resulting in the corrected codeword:

: c_{corrected} = \{001, 006, 017, 034, 057, 086, 121\}
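This system is small enough to solve directly in software. The following Python sketch (illustrative, not from the article; the solver is hand-rolled) builds the seven key equations above and solves them by Gaussian elimination over GF(929), reproducing the solution vector just shown.

p = 929
a = [0, 1, 2, 3, 4, 5, 6]          # evaluation points a_i = i - 1
b = [1, 6, 123, 456, 57, 86, 121]  # received codeword b

# row i encodes  b_i e_0 + b_i a_i e_1 - q_0 - q_1 a_i - ... - q_4 a_i^4 = -b_i a_i^2
rows = [[bi, bi * ai % p] + [-pow(ai, j, p) % p for j in range(5)]
        for ai, bi in zip(a, b)]
rhs = [-bi * pow(ai, 2, p) % p for ai, bi in zip(a, b)]

def solve_mod(M, y, q):
    # Gauss-Jordan elimination over GF(q); assumes a unique solution exists
    n = len(M)
    A = [row[:] + [v] for row, v in zip(M, y)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col])  # find a pivot row
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], q - 2, q)                   # scale pivot to 1
        A[col] = [x * inv % q for x in A[col]]
        for r in range(n):                                 # clear the column
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(x - f * z) % q for x, z in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

e0, e1, q0, q1, q2, q3, q4 = solve_mod(rows, rhs, p)
print([e0, e1])              # [6, 924] -> E(x) = x^2 + 924 x + 006
print([q0, q1, q2, q3, q4])  # [6, 7, 9, 916, 3] -> Q(x) = 003 x^4 + 916 x^3 + ...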


Gao decoder

In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm (Gao_RS.pdf).


Example

Using the same data as the Berlekamp–Welch example above:

: R_{-1} = \prod_{i=1}^n (x - a_i)
: R_0 = Lagrange interpolation of \{a_i, b(a_i)\} for ''i'' = 1 to ''n''
: A_{-1} = 0
: A_0 = 1
: Q(x) = R_2 = 266 x^4 + 086 x^3 + 798 x^2 + 311 x + 532
: E(x) = A_2 = 708 x^2 + 176 x + 532

Divide ''Q''(''x'') and ''E''(''x'') by the most significant coefficient of ''E''(''x'') = 708. (Optional)

: Q(x) = 003 x^4 + 916 x^3 + 009 x^2 + 007 x + 006
: E(x) = 001 x^2 + 924 x + 006
: Q(x) / E(x) = P(x) = 003 x^2 + 002 x + 001

Recalculate ''P''(''x'') at the roots of ''E''(''x''), i.e. \{2, 3\}, to correct the received word, resulting in the corrected codeword:

: c_{corrected} = \{001, 006, 017, 034, 057, 086, 121\}
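A Python sketch of these steps (illustrative; it reuses P, deg, trim, sub, mul, and divmod_poly from the extended-Euclidean sketch earlier, so it is not self-contained on its own) interpolates R_0 from the received values of the Berlekamp–Welch example, runs the same division loop with the (n + k)/2 stopping rule, and recovers p(x):

n, k = 7, 3
a_pts = [0, 1, 2, 3, 4, 5, 6]           # evaluation points
b_vals = [1, 6, 123, 456, 57, 86, 121]  # received values

# R_-1 = prod_{i=1..n} (x - a_i)
r_prev = [1]
for ai in a_pts:
    r_prev = mul(r_prev, [(-ai) % P, 1])

# R_0 = Lagrange interpolation of {a_i, b(a_i)}
r = [0]
for i, (ai, bi) in enumerate(zip(a_pts, b_vals)):
    li, denom = [1], 1
    for j, aj in enumerate(a_pts):
        if j != i:
            li = mul(li, [(-aj) % P, 1])  # basis numerator (x - a_j)
            denom = denom * (ai - aj) % P
    c = bi * pow(denom, P - 2, P) % P     # b_i / prod (a_i - a_j)
    r = sub(r, mul([(-c) % P], li))       # i.e. r += c * l_i(x)

# same division loop as before, but stop once deg R_i < (n + k)/2
a_prev, a_cur = [0], [1]
while deg(r) >= (n + k) / 2:
    q, rem = divmod_poly(r_prev, r)
    r_prev, r = r, rem
    a_prev, a_cur = a_cur, sub(a_prev, mul(q, a_cur))

# p(x) = Q(x) / E(x); a nonzero remainder would signal too many errors
p_msg, rem = divmod_poly(r, a_cur)
print(p_msg)  # [1, 2, 3] -> p(x) = 003 x^2 + 002 x + 001
print(rem)    # [0]

Note that the optional normalization by 708 cancels in the final division, which is why the sketch skips it.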


See also

*
BCH code In coding theory, the Bose–Chaudhuri–Hocquenghem codes (BCH codes) form a class of cyclic error-correcting codes that are constructed using polynomials over a finite field (also called ''Galois field''). BCH codes were invented in 195 ...
* Cyclic code
* Chien search
* Berlekamp–Massey algorithm
*
Forward error correction In computing, telecommunication, information theory, and coding theory, an error correction code, sometimes error correcting code, (ECC) is used for controlling errors in data over unreliable or noisy communication channels. The central idea i ...
* Berlekamp–Welch algorithm
*
Folded Reed–Solomon code In coding theory, folded Reed–Solomon codes are like Reed–Solomon codes, which are obtained by mapping m Reed–Solomon codewords over a larger alphabet by careful bundling of codeword symbols. Folded Reed–Solomon codes are also a special ...


Notes


References




External links


Information and tutorials


* Introduction to Reed–Solomon codes: principles, architecture and implementation (CMU)

* http://sidewords.files.wordpress.com/2007/12/thesis.pdf Algebraic soft-decoding of Reed–Solomon codes
* Wikiversity:Reed–Solomon codes for coders
* BBC R&D White Paper WHP031
* Concatenated codes
by Dr.
Dave Forney George David Forney Jr. (born March 6, 1940) is an American electrical engineer who made contributions in telecommunication system theory, specifically in coding theory and information theory. Biography Forney received the B.S.E. degree in elect ...
(scholarpedia.org).


Implementations


* FEC library in C by Phil Karn (aka KA9Q) includes Reed–Solomon codec, both arbitrary and optimized (223,255) version
* Schifra Open Source C++ Reed–Solomon Codec
* Henry Minsky's RSCode library, Reed–Solomon encoder/decoder
* Open Source C++ Reed–Solomon Soft Decoding library
* Matlab implementation of errors-and-erasures Reed–Solomon decoding
* Pure-Python implementation of a Reed–Solomon codec