In coding theory, the Bose–Chaudhuri–Hocquenghem codes (BCH codes) form a class of cyclic error-correcting codes that are constructed using polynomials over a finite field (also called a ''Galois field''). BCH codes were invented in 1959 by French mathematician Alexis Hocquenghem, and independently in 1960 by Raj Chandra Bose and D. K. Ray-Chaudhuri. The name ''Bose–Chaudhuri–Hocquenghem'' (and the acronym ''BCH'') arises from the initials of the inventors' surnames (mistakenly so, in the case of Ray-Chaudhuri).
One of the key features of BCH codes is that during code design there is precise control over the number of symbol errors correctable by the code. In particular, it is possible to design binary BCH codes that can correct multiple bit errors. Another advantage of BCH codes is the ease with which they can be decoded, namely via an algebraic method known as syndrome decoding. This simplifies the design of the decoder, which can be implemented with small, low-power electronic hardware.
BCH codes are used in applications such as satellite communications, compact disc players, DVDs, disk drives, USB flash drives, solid-state drives, and two-dimensional bar codes.
Definition and illustration
Primitive narrow-sense BCH codes
Given a prime number q and prime power q^m with positive integers m and d such that d \leq q^m - 1, a primitive narrow-sense BCH code over the finite field (or Galois field) \mathrm{GF}(q) with code length n = q^m - 1 and distance at least d is constructed by the following method.
Let \alpha be a primitive element of \mathrm{GF}(q^m).
For any positive integer i, let m_i(x) be the minimal polynomial with coefficients in \mathrm{GF}(q) of \alpha^i.
The generator polynomial of the BCH code is defined as the least common multiple

: g(x) = \mathrm{lcm}(m_1(x), \ldots, m_{d-1}(x)).

It can be seen that g(x) is a polynomial with coefficients in \mathrm{GF}(q) and divides x^n - 1.
Therefore, the polynomial code defined by g(x) is a cyclic code.
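This construction is mechanical enough to compute directly. The following minimal Python sketch (our own illustration; all helper names are ours) builds the generator polynomials of the primitive narrow-sense binary BCH codes of length n = 15 used in the example below, representing GF(2^4) through the reducing polynomial x^4 + x + 1:

<syntaxhighlight lang="python">
# GF(2^4) antilog/log tables for the reducing polynomial x^4 + x + 1,
# with primitive element alpha = 0010 (decimal 2).
EXP = [0] * 30
val = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = val
    val = (val << 1) ^ (0b10011 if val & 0b1000 else 0)
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    """Multiply two GF(16) elements."""
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def cyclotomic_coset(i, n=15):
    """Exponents of the conjugates alpha^i, alpha^(2i), alpha^(4i), ..."""
    coset, j = [], i % n
    while j not in coset:
        coset.append(j)
        j = 2 * j % n
    return coset

def minimal_polynomial(i, n=15):
    """m_i(x) = product over the coset of (x - alpha^e); the coefficients
    land in GF(2). Low-degree coefficients first."""
    poly = [1]
    for e in cyclotomic_coset(i, n):
        nxt = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):
            nxt[k] ^= gmul(c, EXP[e])   # constant part of (x + alpha^e)
            nxt[k + 1] ^= c             # shift for the x part
        poly = nxt
    return poly

def bch_generator(d, n=15):
    """g(x) = lcm(m_1, ..., m_{d-1}) = product of distinct minimal polys."""
    g, seen = [1], set()
    for i in range(1, d):
        coset = tuple(sorted(cyclotomic_coset(i, n)))
        if coset in seen:
            continue
        seen.add(coset)
        m = minimal_polynomial(i, n)
        prod = [0] * (len(g) + len(m) - 1)
        for a, ga in enumerate(g):
            for b, mb in enumerate(m):
                prod[a + b] ^= ga & mb  # GF(2) polynomial product
        g = prod
    return g

# d = 7 gives x^10 + x^8 + x^5 + x^4 + x^2 + x + 1, as derived below:
print(bch_generator(7))  # [1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1]
</syntaxhighlight>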
Example
Let q = 2 and m = 4 (therefore n = 15). We will consider different values of d for \mathrm{GF}(16) = \mathrm{GF}(2^4) based on the reducing polynomial z^4 + z + 1, using the primitive element \alpha(z) = z. There are fourteen minimal polynomials m_i(x) with coefficients in \mathrm{GF}(2) satisfying

: m_i\left(\alpha^i\right) = 0.

The minimal polynomials are

: \begin{align}
m_1(x) &= m_2(x) = m_4(x) = m_8(x) = x^4 + x + 1, \\
m_3(x) &= m_6(x) = m_9(x) = m_{12}(x) = x^4 + x^3 + x^2 + x + 1, \\
m_5(x) &= m_{10}(x) = x^2 + x + 1, \\
m_7(x) &= m_{11}(x) = m_{13}(x) = m_{14}(x) = x^4 + x^3 + 1.
\end{align}
The BCH code with d = 2, 3 has the generator polynomial

: g(x) = m_1(x) = x^4 + x + 1.
It has minimal Hamming distance at least 3 and corrects up to one error. Since the generator polynomial is of degree 4, this code has 11 data bits and 4 checksum bits. It is also denoted as a (15, 11) BCH code.
The BCH code with d = 4, 5 has the generator polynomial

: g(x) = \mathrm{lcm}(m_1(x), m_3(x)) = \left(x^4 + x + 1\right)\left(x^4 + x^3 + x^2 + x + 1\right) = x^8 + x^7 + x^6 + x^4 + 1.

It has minimal Hamming distance at least 5 and corrects up to two errors. Since the generator polynomial is of degree 8, this code has 7 data bits and 8 checksum bits. It is also denoted as a (15, 7) BCH code.
The BCH code with d = 6, 7 has the generator polynomial

: g(x) = \mathrm{lcm}(m_1(x), m_3(x), m_5(x)) = \left(x^4 + x + 1\right)\left(x^4 + x^3 + x^2 + x + 1\right)\left(x^2 + x + 1\right) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1.

It has minimal Hamming distance at least 7 and corrects up to three errors. Since the generator polynomial is of degree 10, this code has 5 data bits and 10 checksum bits. It is also denoted as a (15, 5) BCH code. (This particular generator polynomial has a real-world application, in the "format information" of the QR code.)
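As a sketch of how this generator is used in practice (our own illustration; the QR specification additionally XORs the 15 bits with a fixed mask, which is omitted here), the format field packs 5 data bits followed by the 10-bit remainder of data·x^{10} modulo g(x):

<syntaxhighlight lang="python">
G = 0b10100110111      # x^10 + x^8 + x^5 + x^4 + x^2 + x + 1

def bch_15_5_checkbits(data5):
    """Remainder of data5 * x^10 modulo G, computed over GF(2)."""
    r = data5 << 10
    for shift in range(14, 9, -1):      # clear bits x^14 .. x^10
        if r & (1 << shift):
            r ^= G << (shift - 10)
    return r                            # 10-bit remainder

data = 0b00101                          # an arbitrary 5-bit payload
codeword = (data << 10) | bch_15_5_checkbits(data)
print(f"{codeword:015b}")               # a (15, 5) BCH codeword
</syntaxhighlight>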
The BCH code with d = 8 and higher has the generator polynomial

: g(x) = \mathrm{lcm}(m_1(x), m_3(x), m_5(x), m_7(x)) = x^{14} + x^{13} + x^{12} + x^{11} + x^{10} + x^9 + x^8 + x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x + 1.

This code has minimal Hamming distance 15 and corrects 7 errors. It has 1 data bit and 14 checksum bits. It is also denoted as a (15, 1) BCH code. In fact, this code has only two codewords: 000000000000000 and 111111111111111 (a trivial repetition code).
General BCH codes
General BCH codes differ from primitive narrow-sense BCH codes in two respects.
First, the requirement that \alpha be a primitive element of \mathrm{GF}(q^m) can be relaxed. By relaxing this requirement, the code length changes from q^m - 1 to \mathrm{ord}(\alpha), the order of the element \alpha.

Second, the consecutive roots of the generator polynomial may run from \alpha^c, \ldots, \alpha^{c+d-2} instead of \alpha, \ldots, \alpha^{d-1}.
Definition. Fix a finite field \mathrm{GF}(q), where q is a prime power. Choose positive integers m, n, d, c such that 2 \leq d \leq n, \gcd(n, q) = 1, and m is the multiplicative order of q modulo n.

As before, let \alpha be a primitive nth root of unity in \mathrm{GF}(q^m), and let m_i(x) be the minimal polynomial over \mathrm{GF}(q) of \alpha^i for all i. The generator polynomial of the BCH code is defined as the least common multiple

: g(x) = \mathrm{lcm}(m_c(x), \ldots, m_{c+d-2}(x)).

Note: if n = q^m - 1 as in the simplified definition, then \gcd(n, q) is 1, and the order of q modulo n is m. Therefore, the simplified definition is indeed a special case of the general one.
Special cases
* A BCH code with c = 1 is called a ''narrow-sense BCH code''.
* A BCH code with n = q^m - 1 is called ''primitive''.
The generator polynomial g(x) of a BCH code has coefficients from \mathrm{GF}(q). In general, a cyclic code over \mathrm{GF}(q^p) with g(x) as the generator polynomial is called a BCH code over \mathrm{GF}(q^p). The BCH code over \mathrm{GF}(q^m) with generator polynomial g(x) having successive powers of \alpha as roots is one type of Reed–Solomon code, in which the decoder (syndromes) alphabet is the same as the channel (data and generator polynomial) alphabet, all elements of \mathrm{GF}(q^m). The other type of Reed–Solomon code is an original-view Reed–Solomon code, which is not a BCH code.
Properties
The generator polynomial of a BCH code has degree at most (d-1)m. Moreover, if q = 2 and c = 1, the generator polynomial has degree at most dm/2.
Each minimal polynomial m_i(x) has degree at most m. Therefore, the least common multiple of d - 1 of them has degree at most (d-1)m. Moreover, if q = 2, then m_i(x) = m_{2i}(x) for all i. Therefore, g(x) is the least common multiple of at most d/2 minimal polynomials m_i(x) for odd indices i, each of degree at most m.
A BCH code has minimal Hamming distance at least d. This can be seen as follows. Suppose that p(x) is a code word with fewer than d non-zero terms. Then

: p\left(\alpha^c\right) = p\left(\alpha^{c+1}\right) = \cdots = p\left(\alpha^{c+d-2}\right) = 0,

since these powers of \alpha are roots of the generator polynomial g(x), of which p(x) is a multiple. Treating the fewer than d non-zero coefficients of p(x) as the unknowns of this system of d - 1 equations yields a Vandermonde-like matrix of full rank, which forces all of those coefficients to be zero, so that p(x) = 0.
Encoding
Because any polynomial that is a multiple of the generator polynomial is a valid BCH codeword, BCH encoding is merely the process of finding some polynomial that has the generator as a factor.
The BCH code itself is not prescriptive about the meaning of the coefficients of the polynomial; conceptually, a BCH decoding algorithm's sole concern is to find the valid codeword with the minimal Hamming distance to the received codeword. Therefore, the BCH code may be implemented either as a systematic code or not, depending on how the implementor chooses to embed the message in the encoded polynomial.
Non-systematic encoding: The message as a factor
The most straightforward way to find a polynomial that is a multiple of the generator is to compute the product of some arbitrary polynomial and the generator. In this case, the arbitrary polynomial can be chosen using the symbols of the message as coefficients.
:
s(x) = p(x)g(x)
As an example, consider the generator polynomial g(x) = x^{10} + x^9 + x^8 + x^6 + x^5 + x^3 + 1, chosen for use in the (31, 21) binary BCH code used by POCSAG and others. To encode the 21-bit message {101101110111101111101}, we first represent it as a polynomial over \mathrm{GF}(2):

: p(x) = x^{20} + x^{18} + x^{17} + x^{15} + x^{14} + x^{13} + x^{11} + x^{10} + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + 1

Then, compute (also over \mathrm{GF}(2)):

: \begin{align}
s(x) &= p(x)g(x) \\
&= \left(x^{20} + x^{18} + x^{17} + x^{15} + x^{14} + x^{13} + x^{11} + x^{10} + x^9 + x^8 + x^6 + x^5 + x^4 + x^3 + x^2 + 1\right)\left(x^{10} + x^9 + x^8 + x^6 + x^5 + x^3 + 1\right) \\
&= x^{30} + x^{29} + x^{26} + x^{25} + x^{24} + x^{22} + x^{19} + x^{17} + x^{16} + x^{15} + x^{14} + x^{12} + x^{10} + x^9 + x^8 + x^6 + x^5 + x^4 + x^2 + 1
\end{align}

Thus, the transmitted codeword is {1100111010010111101011101110101}.
The receiver can use these bits as coefficients in s(x) and, after error correction to ensure a valid codeword, can recompute p(x) = s(x)/g(x).
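In code, non-systematic encoding amounts to one carry-less multiplication. A small sketch, with polynomials stored as Python integers (bit i holding the coefficient of x^i; helper names are ours):

<syntaxhighlight lang="python">
def clmul(a, b):
    """Carry-less product of two GF(2)[x] polynomials stored as integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def cldivmod(a, b):
    """Quotient and remainder of GF(2)[x] division."""
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        shift = a.bit_length() - 1 - db
        q ^= 1 << shift
        a ^= b << shift
    return q, a

G = 0b11101101001                 # x^10+x^9+x^8+x^6+x^5+x^3+1 (POCSAG)
p = 0b101101110111101111101       # the 21-bit message above
s = clmul(p, G)
print(f"{s:031b}")                # 1100111010010111101011101110101

q, r = cldivmod(s, G)
assert (q, r) == (p, 0)           # receiver recovers p(x) = s(x)/g(x)
</syntaxhighlight>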
Systematic encoding: The message as a prefix
A systematic code is one in which the message appears verbatim somewhere within the codeword. Therefore, systematic BCH encoding involves first embedding the message polynomial within the codeword polynomial, and then adjusting the coefficients of the remaining (non-message) terms to ensure that
s(x) is divisible by
g(x).
This encoding method leverages the fact that subtracting the remainder from a dividend results in a multiple of the divisor. Hence, if we take our message polynomial p(x) as before and multiply it by x^{n-k} (to "shift" the message out of the way of the remainder), we can then use Euclidean division of polynomials to yield:

: p(x)x^{n-k} = q(x)g(x) + r(x)
Here, we see that q(x)g(x) is a valid codeword. As r(x) is always of degree less than n-k (which is the degree of g(x)), we can safely subtract it from p(x)x^{n-k} without altering any of the message coefficients; hence we have our s(x) as

: s(x) = q(x)g(x) = p(x)x^{n-k} - r(x)
Over
GF(2) (i.e. with binary BCH codes), this process is indistinguishable from appending a
cyclic redundancy check, and if a systematic binary BCH code is used only for error-detection purposes, we see that BCH codes are just a generalization of the
mathematics of cyclic redundancy checks.
The advantage to the systematic coding is that the receiver can recover the original message by discarding everything after the first
k coefficients, after performing error correction.
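A sketch of systematic encoding, reusing the POCSAG generator above (again our own illustration; this is exactly the CRC computation):

<syntaxhighlight lang="python">
def clmod(a, b):
    """Remainder of GF(2)[x] division (polynomials as integers)."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

G = 0b11101101001                   # degree 10, so n - k = 10
p = 0b101101110111101111101         # 21-bit message
s = (p << 10) ^ clmod(p << 10, G)   # subtract remainder; over GF(2), XOR
assert clmod(s, G) == 0             # s(x) is a multiple of g(x)
assert s >> 10 == p                 # the message is the codeword's prefix
</syntaxhighlight>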
Decoding
There are many algorithms for decoding BCH codes. The most common ones follow this general outline:
# Calculate the syndromes s_j for the received vector
# Determine the number of errors t and the error locator polynomial \Lambda(x) from the syndromes
# Calculate the roots of the error locator polynomial to find the error locations X_i
# Calculate the error values Y_i at those error locations
# Correct the errors
During some of these steps, the decoding algorithm may determine that the received vector has too many errors and cannot be corrected. For example, if an appropriate value of ''t'' is not found, then the correction would fail. In a truncated (not primitive) code, an error location may be out of range. If the received vector has more errors than the code can correct, the decoder may unknowingly produce an apparently valid message that is not the one that was sent.
Calculate the syndromes
The received vector R is the sum of the correct codeword C and an unknown error vector E. The syndrome values are formed by considering R as a polynomial and evaluating it at \alpha^c, \ldots, \alpha^{c+d-2}. Thus the syndromes are
:
s_j = R\left(\alpha^j\right) = C\left(\alpha^j\right) + E\left(\alpha^j\right)
for
j = c to
c + d - 2.
Since \alpha^j for j = c, \ldots, c+d-2 are the zeros of g(x), of which C(x) is a multiple, C\left(\alpha^j\right) = 0. Examining the syndrome values thus isolates the error vector, so that one can begin to solve for it.
If there is no error,
s_j = 0 for all
j. If the syndromes are all zero, then the decoding is done.
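A sketch of syndrome computation for the binary narrow-sense case (c = 1), using the GF(2^4) representation of the worked examples below (reducing polynomial x^4 + x + 1, α = 0010); helper names are ours:

<syntaxhighlight lang="python">
EXP = [0] * 30                      # GF(2^4) antilog table from x^4 + x + 1
val = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = val
    val = (val << 1) ^ (0b10011 if val & 0b1000 else 0)
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    """Multiply two GF(16) elements via the log tables."""
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def syndromes(rbits, d, c=1):
    """Evaluate R at alpha^c .. alpha^(c+d-2); rbits[i] is the coefficient
    of x^i in the received word."""
    out = []
    for j in range(c, c + d - 1):
        acc = 0
        for coef in reversed(rbits):        # Horner's rule
            acc = gmul(acc, EXP[j % 15]) ^ coef
        out.append(acc)
    return out

# The received word of the first worked example below (two bit errors):
R = 0b100111000110100
print(syndromes([(R >> i) & 1 for i in range(15)], 7))
# -> [11, 9, 11, 13, 1, 9], i.e. s_1 = 1011, s_2 = 1001, ... as in the text
</syntaxhighlight>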
Calculate the error location polynomial
If there are nonzero syndromes, then there are errors. The decoder needs to determine how many errors there are and where they are located.
If there is a single error, write this as
E(x) = e\,x^i, where
i is the location of the error and
e is its magnitude. Then the first two syndromes are
: \begin{align}
s_c &= e\,\alpha^{c\,i} \\
s_{c+1} &= e\,\alpha^{(c+1)\,i} = \alpha^i s_c
\end{align}
so together they allow us to calculate
e and provide some information about
i (completely determining it in the case of Reed–Solomon codes).
If there are two or more errors,
:
E(x) = e_1 x^{i_1} + e_2 x^{i_2} + \cdots
It is not immediately obvious how to begin solving the resulting syndromes for the unknowns
e_k and
i_k.
The first step is to find a locator polynomial compatible with the computed syndromes and with minimal possible t:

: \Lambda(x) = \prod_{j=1}^{t} \left(x\alpha^{i_j} - 1\right)
Three popular algorithms for this task are:
#
Peterson–Gorenstein–Zierler algorithm
#
Berlekamp–Massey algorithm
#
Sugiyama Euclidean algorithm
Peterson–Gorenstein–Zierler algorithm
Peterson's algorithm is step 2 of the generalized BCH decoding procedure. It is used to calculate the error locator polynomial coefficients \lambda_1, \lambda_2, \dots, \lambda_v of a polynomial

: \Lambda(x) = 1 + \lambda_1 x + \lambda_2 x^2 + \cdots + \lambda_v x^v .

Now the procedure of the Peterson–Gorenstein–Zierler algorithm: suppose we have at least 2t syndromes s_c, \ldots, s_{c+2t-1}, and let v = t. The syndromes are arranged into a v \times v matrix of the form shown in the worked example below; if that matrix turns out to be singular, v is decreased and the procedure repeated.
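A compact sketch of this step (our own illustration, with the GF(2^4) tables repeated for self-containment): build the syndrome matrix, attempt Gauss–Jordan elimination over the field, and shrink v whenever the matrix is singular:

<syntaxhighlight lang="python">
EXP = [0] * 30                      # GF(2^4) tables as in the syndrome sketch
val = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = val
    val = (val << 1) ^ (0b10011 if val & 0b1000 else 0)
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def ginv(a):
    return EXP[(15 - LOG[a]) % 15]

def pgz_locator(s, t):
    """s = [s_1, ..., s_2t]; returns [lambda_1, ..., lambda_v] with
    Lambda(x) = 1 + lambda_1 x + ... + lambda_v x^v, or None."""
    for v in range(t, 0, -1):
        # Row j encodes  lambda_v s_{j+1} + ... + lambda_1 s_{j+v} = s_{j+v+1}
        m = [[s[j + i] for i in range(v)] + [s[j + v]] for j in range(v)]
        singular = False
        for col in range(v):                       # Gauss-Jordan elimination
            piv = next((r for r in range(col, v) if m[r][col]), None)
            if piv is None:
                singular = True                    # try a smaller v
                break
            m[col], m[piv] = m[piv], m[col]
            inv = ginv(m[col][col])
            m[col] = [gmul(inv, e) for e in m[col]]
            for r in range(v):
                if r != col and m[r][col]:
                    f = m[r][col]
                    m[r] = [e ^ gmul(f, p) for e, p in zip(m[r], m[col])]
        if not singular:
            return [row[v] for row in m][::-1]     # lambda_1 .. lambda_v
    return None

# Syndromes of the first worked example below:
print(pgz_locator([0b1011, 0b1001, 0b1011, 0b1101, 0b0001, 0b1001], 3))
# -> [11, 8], i.e. lambda_1 = 1011 and lambda_2 = 1000, as in the text
</syntaxhighlight>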
Factor error locator polynomial
Now that you have the \Lambda(x) polynomial, its roots can be found in the form \Lambda(x) = \left(\alpha^{i_1} x - 1\right)\left(\alpha^{i_2} x - 1\right) \cdots \left(\alpha^{i_v} x - 1\right) by brute force, for example using the Chien search algorithm. The exponential powers of the primitive element \alpha will yield the positions where errors occur in the received word; hence the name 'error locator' polynomial.
The zeros of \Lambda(x) are \alpha^{-i_1}, \ldots, \alpha^{-i_v}.
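A brute-force root search in this spirit (a sketch; a genuine Chien search would evaluate the polynomial incrementally rather than from scratch at each point):

<syntaxhighlight lang="python">
EXP = [0] * 30                      # GF(2^4) tables as in the sketches above
val = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = val
    val = (val << 1) ^ (0b10011 if val & 0b1000 else 0)
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def error_positions(lam, n=15):
    """lam = [lambda_0, lambda_1, ..., lambda_v], low degree first.
    Position i is an error location iff alpha^(-i) is a root of Lambda."""
    pos = []
    for i in range(n):
        pt, acc = EXP[(-i) % 15], 0
        for c in reversed(lam):           # Horner evaluation at alpha^(-i)
            acc = gmul(acc, pt) ^ c
        if acc == 0:
            pos.append(i)
    return pos

# Lambda(x) = 0001 + 1011 x + 1000 x^2 from the worked example below:
print(error_positions([0b0001, 0b1011, 0b1000]))   # -> [5, 13]
</syntaxhighlight>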
Calculate error values
Once the error locations are known, the next step is to determine the error values at those locations. The error values are then used to correct the received values at those locations to recover the original codeword.
For the case of binary BCH (with all characters readable) this is trivial; just flip the bits of the received word at these positions, and we have the corrected code word. In the more general case, the error weights e_j can be determined by solving the linear system

: \begin{align}
s_c &= e_1 \alpha^{c\,i_1} + e_2 \alpha^{c\,i_2} + \cdots \\
s_{c+1} &= e_1 \alpha^{(c+1)\,i_1} + e_2 \alpha^{(c+1)\,i_2} + \cdots \\
&\ \vdots
\end{align}
Forney algorithm
However, there is a more efficient method known as the
Forney algorithm.
Let

: S(x) = s_c + s_{c+1}x + s_{c+2}x^2 + \cdots + s_{c+d-2}x^{d-2}.

: v \leqslant d-1, \qquad \lambda_0 \neq 0, \qquad \Lambda(x) = \sum_{i=0}^v \lambda_i x^i = \lambda_0 \prod_{k=1}^{v} \left(\alpha^{i_k}x - 1\right).

And the error evaluator polynomial

: \Omega(x) \equiv S(x)\,\Lambda(x) \bmod x^{d-1}.

Finally:

: \Lambda'(x) = \sum_{i=1}^{v} i \cdot \lambda_i x^{i-1},

where

: i \cdot x := \sum_{k=1}^{i} x.

Then if the syndromes could be explained by an error word that is nonzero only on positions i_k, the error values are

: e_k = -\frac{\alpha^{i_k}\,\Omega\left(\alpha^{-i_k}\right)}{\alpha^{c\,i_k}\,\Lambda'\left(\alpha^{-i_k}\right)}.

For narrow-sense BCH codes, c = 1, so the expression simplifies to:

: e_k = -\frac{\Omega\left(\alpha^{-i_k}\right)}{\Lambda'\left(\alpha^{-i_k}\right)}.
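A sketch of the computation for a narrow-sense code (c = 1). In characteristic 2 the formal derivative keeps only the odd-degree terms, and for a binary code every returned error value is 1:

<syntaxhighlight lang="python">
EXP = [0] * 30                           # GF(2^4) tables as above
val = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = val
    val = (val << 1) ^ (0b10011 if val & 0b1000 else 0)
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def gdiv(a, b):
    return EXP[(LOG[a] - LOG[b]) % 15] if a else 0

def peval(p, pt):
    acc = 0
    for c in reversed(p):
        acc = gmul(acc, pt) ^ c
    return acc

def forney(syn, lam, positions, d):
    """syn = [s_1, ..., s_{d-1}]; lam = [lambda_0, ..., lambda_v];
    positions = error locations i_k. Returns the error values e_k."""
    omega = [0] * (d - 1)                # S(x)Lambda(x) mod x^(d-1)
    for i, si in enumerate(syn):
        for j, lj in enumerate(lam):
            if i + j < d - 1:
                omega[i + j] ^= gmul(si, lj)
    # Formal derivative in characteristic 2: odd-degree terms only.
    dlam = [lam[i] if i % 2 else 0 for i in range(1, len(lam))]
    return [gdiv(peval(omega, EXP[(-i) % 15]), peval(dlam, EXP[(-i) % 15]))
            for i in positions]

# First worked example below: the error values at positions 5 and 13 are 1.
print(forney([11, 9, 11, 13, 1, 9], [1, 11, 8], [5, 13], 7))   # -> [1, 1]
</syntaxhighlight>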
Explanation of Forney algorithm computation
It is based on Lagrange interpolation and techniques of generating functions.
Consider S(x)\Lambda(x), and for the sake of simplicity suppose \lambda_k = 0 for k > v, and s_k = 0 for k > c + d - 2. Then

: S(x)\Lambda(x) = \sum_{j=0}^{\infty}\sum_{i=0}^{j} s_{j-i+c}\lambda_i x^j.
: \begin{align}
S(x)\Lambda(x)
&= S(x) \left\{ \lambda_0 \prod_{k=1}^v \left(\alpha^{i_k}x - 1\right) \right\} \\
&= \left\{ \sum_{i=0}^{d-2} \sum_{j=1}^{v} e_j \alpha^{(c+i)\,i_j} x^i \right\} \left\{ \lambda_0 \prod_{k=1}^v \left(\alpha^{i_k}x - 1\right) \right\} \\
&= \left\{ \sum_{j=1}^{v} e_j \alpha^{c\,i_j} \sum_{i=0}^{d-2} \left(\alpha^{i_j}\right)^i x^i \right\} \left\{ \lambda_0 \prod_{k=1}^v \left(\alpha^{i_k}x - 1\right) \right\} \\
&= \left\{ \sum_{j=1}^{v} e_j \alpha^{c\,i_j} \frac{\left(x\alpha^{i_j}\right)^{d-1} - 1}{x\alpha^{i_j} - 1} \right\} \left\{ \lambda_0 \prod_{k=1}^v \left(\alpha^{i_k}x - 1\right) \right\} \\
&= \lambda_0 \sum_{j=1}^v e_j \alpha^{c\,i_j} \frac{\left(x\alpha^{i_j}\right)^{d-1} - 1}{x\alpha^{i_j} - 1} \prod_{k=1}^v \left(\alpha^{i_k}x - 1\right) \\
&= \lambda_0 \sum_{j=1}^v e_j \alpha^{c\,i_j} \left(\left(x\alpha^{i_j}\right)^{d-1} - 1\right) \prod_{k \neq j} \left(\alpha^{i_k}x - 1\right)
\end{align}
We want to compute the unknowns e_j, and we could simplify the context by removing the \left(x\alpha^{i_j}\right)^{d-1} terms. This leads to the error evaluator polynomial

: \Omega(x) \equiv S(x)\,\Lambda(x) \bmod x^{d-1}.
Thanks to v \leqslant d-1 we have

: \Omega(x) = -\lambda_0 \sum_{j=1}^v e_j \alpha^{c\,i_j} \prod_{k \neq j} \left(\alpha^{i_k}x - 1\right).
Thanks to \Lambda (the Lagrange interpolation trick), the sum degenerates to only one summand for x = \alpha^{-i_k}:

: \Omega\left(\alpha^{-i_k}\right) = -\lambda_0 e_k \alpha^{c\,i_k} \prod_{j \neq k} \left(\alpha^{i_j}\alpha^{-i_k} - 1\right).
To get e_k, we just need to get rid of the product. We could compute the product directly from the already computed roots \alpha^{-i_j} of \Lambda, but we can use a simpler form.
As the formal derivative is

: \Lambda'(x) = \lambda_0 \sum_{k=1}^v \alpha^{i_k} \prod_{j \neq k} \left(\alpha^{i_j}x - 1\right),

we again get only one summand in

: \Lambda'\left(\alpha^{-i_k}\right) = \lambda_0 \alpha^{i_k} \prod_{j \neq k} \left(\alpha^{i_j}\alpha^{-i_k} - 1\right).
So finally,

: e_k = -\frac{\alpha^{i_k}\,\Omega\left(\alpha^{-i_k}\right)}{\alpha^{c\,i_k}\,\Lambda'\left(\alpha^{-i_k}\right)}.

This formula is advantageous when one computes the formal derivative of \Lambda from the form

: \Lambda(x) = \sum_{i=0}^v \lambda_i x^i,

yielding:

: \Lambda'(x) = \sum_{i=1}^v i \cdot \lambda_i x^{i-1},

where

: i \cdot x := \sum_{k=1}^i x.
Decoding based on extended Euclidean algorithm
An alternate process of finding both the error locator polynomial \Lambda(x) and the error evaluator polynomial \Omega(x) is based on Yasuo Sugiyama's adaptation of the extended Euclidean algorithm.[Yasuo Sugiyama, Masao Kasahara, Shigeichi Hirasawa, and Toshihiko Namekawa. A method for solving key equation for decoding Goppa codes. Information and Control, 27:87–99, 1975.] Correction of unreadable characters can be incorporated into the algorithm easily as well.
Let k_1, \ldots, k_k be the positions of unreadable characters. One creates the polynomial localising these positions, \Gamma(x) = \prod_{i=1}^{k}\left(x\alpha^{k_i} - 1\right).
Set the values on unreadable positions to 0 and compute the syndromes.

As we have already defined for the Forney formula, let S(x) = \sum_{i=0}^{d-2}s_{c+i}x^i.

Let us run the extended Euclidean algorithm for locating the greatest common divisor of the polynomials S(x)\Gamma(x) and x^{d-1}.
The goal is not to find the greatest common divisor itself, but a polynomial r(x) of degree at most \lfloor (d+k-3)/2 \rfloor and polynomials a(x), b(x) such that r(x) = a(x)S(x)\Gamma(x) + b(x)x^{d-1}. The low degree of r(x) guarantees that a(x) satisfies the extended (by \Gamma) defining conditions for \Lambda.
Defining \Xi(x) = a(x)\Gamma(x) and using \Xi in the place of \Lambda(x) in the Forney formula will give us the error values.

The main advantage of the algorithm is that it meanwhile computes \Omega(x) = S(x)\Xi(x) \bmod x^{d-1} = r(x), required in the Forney formula.
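A sketch of the key-equation solver (our own illustration; coefficient lists are low-degree-first over the GF(2^4) of the examples below). With no unreadable characters one passes S(x) itself and k = 0:

<syntaxhighlight lang="python">
EXP = [0] * 30                           # GF(2^4) tables as above
val = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = val
    val = (val << 1) ^ (0b10011 if val & 0b1000 else 0)
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def ginv(a):
    return EXP[(15 - LOG[a]) % 15]

def pdeg(p):
    """Degree of a coefficient list; -1 for the zero polynomial."""
    return max((i for i, c in enumerate(p) if c), default=-1)

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] ^= gmul(a, b)
    return r

def padd(p, q):
    r = [0] * max(len(p), len(q))
    for i, a in enumerate(p):
        r[i] ^= a
    for j, b in enumerate(q):
        r[j] ^= b
    return r

def pdivmod(a, b):
    a, db = a[:], pdeg(b)
    q, inv = [0] * len(a), ginv(b[db])
    for i in range(pdeg(a), db - 1, -1):
        if a[i]:
            c = gmul(a[i], inv)
            q[i - db] = c
            for j in range(db + 1):
                a[i - db + j] ^= gmul(c, b[j])
    return q, a

def sugiyama(SG, d, k=0):
    """Euclidean algorithm on S(x)Gamma(x) and x^(d-1), stopped once
    deg r <= floor((d+k-3)/2). Returns (a(x), r(x) = Omega(x))."""
    r_prev, r = SG[:], [0] * (d - 1) + [1]
    a_prev, a = [1], [0]
    while pdeg(r) > (d + k - 3) // 2:
        quo, rem = pdivmod(r_prev, r)
        r_prev, r = r, rem
        a_prev, a = a, padd(a_prev, pmul(quo, a))
    return a, r

# S(x)Gamma(x) of the erasure example below (d = 7, k = 2):
SG = [EXP[8], EXP[4], EXP[14], EXP[6], EXP[14], EXP[5], EXP[7], EXP[12]]
lam, omega = sugiyama(SG, 7, 2)
assert lam[:3] == [EXP[3], EXP[10], EXP[6]]             # Lambda, as below
assert omega[:4] == [EXP[11], EXP[4], EXP[2], EXP[10]]  # Omega, as below
</syntaxhighlight>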
Explanation of the decoding process
The goal is to find a codeword which differs from the received word as little as possible on readable positions. When expressing the received word as a sum of the nearest codeword and an error word, we are trying to find the error word with the minimal number of non-zeros on readable positions. The syndrome s_i restricts the error word by the condition

: s_i = \sum_{j=0}^{n-1} e_j\alpha^{ij}.
We could write these conditions separately, or we could create the polynomial

: S(x) = \sum_{i=0}^{d-2}s_{c+i}x^i

and compare coefficients near the powers 0 to d-2.

: S(x) \,\stackrel{\{0,\ldots,d-2\}}{=}\, E(x) = \sum_{i=0}^{\infty}\sum_{j=0}^{n-1}e_j\alpha^{cj}\alpha^{ij}x^i.
Suppose there is an unreadable letter on position k_1. Then we can replace the set of syndromes \{s_c, \ldots, s_{c+d-2}\} by the set of syndromes \{t_c, \ldots, t_{c+d-3}\} defined by the equation t_i = \alpha^{k_1}s_i - s_{i+1}. Suppose that for an error word all restrictions by the original set \{s_c, \ldots, s_{c+d-2}\} of syndromes hold. Then

: t_i = \alpha^{k_1}s_i - s_{i+1} = \alpha^{k_1}\sum_{j=0}^{n-1}e_j\alpha^{ij} - \sum_{j=0}^{n-1}e_j\alpha^j\alpha^{ij} = \sum_{j=0}^{n-1}e_j\left(\alpha^{k_1} - \alpha^j\right)\alpha^{ij}.
The new set of syndromes restricts the error vector

: f_j = e_j\left(\alpha^{k_1} - \alpha^j\right)

in the same way the original set of syndromes restricted the error vector e_j. Except for the coordinate k_1, where we have f_{k_1} = 0, an f_j is zero if e_j = 0. For the goal of locating error positions, we could change the set of syndromes in a similar way to reflect all unreadable characters. This shortens the set of syndromes by k.
In polynomial formulation, the replacement of the syndromes set \{s_c, \ldots, s_{c+d-2}\} by the syndromes set \{t_c, \ldots, t_{c+d-3}\} leads to

: T(x) = \sum_{i=0}^{d-3}t_{c+i}x^i = \alpha^{k_1}\sum_{i=0}^{d-3}s_{c+i}x^i - \sum_{i=1}^{d-2}s_{c+i}x^{i-1}.

Therefore,

: xT(x) \,\stackrel{\{1,\ldots,d-2\}}{=}\, \left(x\alpha^{k_1} - 1\right)S(x).
After replacement of S(x) by S(x)\Gamma(x), one would require the equation to hold for coefficients near the powers k, \ldots, d-2.
One could consider looking for error positions from the point of view of eliminating the influence of given positions, similarly as for unreadable characters. If we find v positions such that eliminating their influence leads to obtaining a set of syndromes consisting of all zeros, then there exists an error vector with errors only on these coordinates.
If \Lambda(x) denotes the polynomial eliminating the influence of these coordinates, we obtain

: S(x)\Gamma(x)\Lambda(x) \,\stackrel{\{k+v,\ldots,d-2\}}{=}\, 0.
In the Euclidean algorithm, we try to correct at most \tfrac{1}{2}(d-1-k) errors (on readable positions), because with a bigger error count there could be more codewords at the same distance from the received word. Therefore, for the \Lambda(x) we are looking for, the equation must hold for coefficients near the powers starting from

: k + \left\lfloor \frac{1}{2} (d-1-k) \right\rfloor.
In the Forney formula, \Lambda(x) could be multiplied by a scalar, giving the same result.
It could happen that the Euclidean algorithm finds a \Lambda(x) of degree higher than \tfrac{1}{2}(d-1-k) having a number of different roots equal to its degree, where the Forney formula would be able to correct errors in all its roots; correcting that many errors could nevertheless be risky (especially with no other restrictions on the received word). Usually, after getting a \Lambda(x) of higher degree, we decide not to correct the errors. Correction could fail in the case that \Lambda(x) has roots with higher multiplicity or a number of roots smaller than its degree. Failure can also be detected by the Forney formula returning an error value outside the transmitted alphabet.
Correct the errors
Using the error values and error locations, correct the errors and form a corrected code vector by subtracting the error values at the error locations.
Decoding examples
Decoding of binary code without unreadable characters
Consider a BCH code in GF(2^4) with d = 7 and g(x) = x^{10} + x^8 + x^5 + x^4 + x^2 + x + 1. (This is used in QR codes.) Let the message to be transmitted be [1 1 0 1 1], or in polynomial notation, M(x) = x^4 + x^3 + x + 1.
The "checksum" symbols are calculated by dividing x^{10} M(x) by g(x) and taking the remainder, resulting in x^9 + x^4 + x^2 or [1 0 0 0 0 1 0 1 0 0]. These are appended to the message, so the transmitted codeword is [1 1 0 1 1 1 0 0 0 0 1 0 1 0 0].
Now, imagine that there are two bit-errors in the transmission, so the received codeword is [1 0 0 1 1 1 0 0 0 1 1 0 1 0 0]. In polynomial notation:

: R(x) = C(x) + x^{13} + x^5 = x^{14} + x^{11} + x^{10} + x^9 + x^5 + x^4 + x^2
In order to correct the errors, first calculate the syndromes. Taking \alpha = 0010, we have s_1 = R(\alpha^1) = 1011, s_2 = 1001, s_3 = 1011, s_4 = 1101, s_5 = 0001, and s_6 = 1001.
Next, apply the Peterson procedure by row-reducing the following augmented matrix.

: \left[S_{3 \times 3} \mid C_{3 \times 1}\right] =
\begin{bmatrix}s_1&s_2&s_3&s_4\\
s_2&s_3&s_4&s_5\\
s_3&s_4&s_5&s_6\end{bmatrix} =
\begin{bmatrix}1011&1001&1011&1101\\
1001&1011&1101&0001\\
1011&1101&0001&1001\end{bmatrix} \Rightarrow
\begin{bmatrix}0001&0000&1000&0111\\
0000&0001&1011&0001\\
0000&0000&0000&0000
\end{bmatrix}

Due to the zero row, S_{3 \times 3} is singular, which is no surprise since only two errors were introduced into the codeword.
However, the upper-left corner of the matrix is identical to \left[S_{2 \times 2} \mid C_{2 \times 1}\right], which gives rise to the solution \lambda_2 = 1000, \lambda_1 = 1011.
The resulting error locator polynomial is \Lambda(x) = 1000 x^2 + 1011 x + 0001, which has zeros at 0100 = \alpha^{-13} and 0111 = \alpha^{-5}.
The exponents of \alpha correspond to the error locations.
There is no need to calculate the error values in this example, as the only possible value is 1.
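This example can be checked end to end with a short script (our own sketch; the two-error locator is obtained directly by Cramer's rule rather than full row reduction):

<syntaxhighlight lang="python">
EXP = [0] * 30                              # GF(2^4) from x^4 + x + 1
val = 1
for i in range(15):
    EXP[i] = EXP[i + 15] = val
    val = (val << 1) ^ (0b10011 if val & 0b1000 else 0)
LOG = {EXP[i]: i for i in range(15)}

def gmul(a, b):
    return EXP[LOG[a] + LOG[b]] if a and b else 0

def gdiv(a, b):
    return EXP[(LOG[a] - LOG[b]) % 15] if a else 0

def peval(p, pt):
    acc = 0
    for c in reversed(p):
        acc = gmul(acc, pt) ^ c
    return acc

R = 0b100111000110100                       # received word, x^14 .. x^0
rbits = [(R >> i) & 1 for i in range(15)]
s = [peval(rbits, EXP[j]) for j in range(1, 7)]   # s_1 .. s_6
assert s[0] == 0b1011                       # matches the text

# Two errors: solve [s1 s2; s2 s3][l2, l1]^T = [s3, s4]^T by Cramer's rule.
det = gmul(s[0], s[2]) ^ gmul(s[1], s[1])
l2 = gdiv(gmul(s[2], s[2]) ^ gmul(s[1], s[3]), det)
l1 = gdiv(gmul(s[0], s[3]) ^ gmul(s[1], s[2]), det)
assert (l1, l2) == (0b1011, 0b1000)

errors = [i for i in range(15)              # roots alpha^(-i) of Lambda
          if peval([1, l1, l2], EXP[(-i) % 15]) == 0]
assert errors == [5, 13]
for i in errors:                            # binary code: flip those bits
    R ^= 1 << i
print(f"{R:015b}")                          # 110111000010100, the codeword
</syntaxhighlight>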
Decoding with unreadable characters
Suppose the same scenario, but the received word has two unreadable characters: [1 0 0 ? 1 1 ? 0 0 1 1 0 1 0 0]. We replace the unreadable characters by zeros while creating the polynomial reflecting their positions, \Gamma(x) = \left(\alpha^{8}x - 1\right)\left(\alpha^{11}x - 1\right). We compute the syndromes s_1 = \alpha^{8}, s_2 = \alpha^{1}, s_3 = \alpha^{4}, s_4 = \alpha^{2}, s_5 = \alpha^{5}, and s_6 = \alpha^{8}. (Using log notation, which is independent of GF(2^4) isomorphisms. For checking the computation we can use the same representation for addition as was used in the previous example. Hexadecimal descriptions of the powers of \alpha are consecutively 1, 2, 4, 8, 3, 6, C, B, 5, A, 7, E, F, D, 9, with addition based on bitwise xor.)
Let us make the syndrome polynomial

: S(x) = \alpha^{8} + \alpha^{1}x + \alpha^{4}x^2 + \alpha^{2}x^3 + \alpha^{5}x^4 + \alpha^{8}x^5,

and compute

: S(x)\Gamma(x) = \alpha^{8} + \alpha^{4}x + \alpha^{14}x^2 + \alpha^{6}x^3 + \alpha^{14}x^4 + \alpha^{5}x^5 + \alpha^{7}x^6 + \alpha^{12}x^7.
Run the extended Euclidean algorithm:

: \begin{align}
\begin{bmatrix}S(x)\Gamma(x)\\ x^6\end{bmatrix}
&= \begin{bmatrix}\alpha^{8} + \alpha^{4}x + \alpha^{14}x^2 + \alpha^{6}x^3 + \alpha^{14}x^4 + \alpha^{5}x^5 + \alpha^{7}x^6 + \alpha^{12}x^7\\ x^6\end{bmatrix} \\
&= \begin{bmatrix}\alpha^{7} + \alpha^{12}x & 1\\ 1 & 0\end{bmatrix}
\begin{bmatrix}x^6\\ \alpha^{8} + \alpha^{4}x + \alpha^{14}x^2 + \alpha^{6}x^3 + \alpha^{14}x^4 + \alpha^{5}x^5\end{bmatrix} \\
&= \begin{bmatrix}\alpha^{7} + \alpha^{12}x & 1\\ 1 & 0\end{bmatrix}
\begin{bmatrix}\alpha^{4} + \alpha^{10}x & 1\\ 1 & 0\end{bmatrix}
\begin{bmatrix}\alpha^{8} + \alpha^{4}x + \alpha^{14}x^2 + \alpha^{6}x^3 + \alpha^{14}x^4 + \alpha^{5}x^5\\ \alpha^{12} + \alpha^{13}x + x^2 + \alpha^{13}x^3 + \alpha^{9}x^4\end{bmatrix} \\
&= \begin{bmatrix}\alpha^{12} + \alpha^{5}x + \alpha^{7}x^2 & \alpha^{7} + \alpha^{12}x\\ \alpha^{4} + \alpha^{10}x & 1\end{bmatrix}
\begin{bmatrix}\alpha^{8} + \alpha^{4}x + \alpha^{14}x^2 + \alpha^{6}x^3 + \alpha^{14}x^4 + \alpha^{5}x^5\\ \alpha^{12} + \alpha^{13}x + x^2 + \alpha^{13}x^3 + \alpha^{9}x^4\end{bmatrix} \\
&= \begin{bmatrix}\alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 & \alpha^{12} + \alpha^{5}x + \alpha^{7}x^2\\ \alpha^{3} + \alpha^{10}x + \alpha^{6}x^2 & \alpha^{4} + \alpha^{10}x\end{bmatrix}
\begin{bmatrix}\alpha^{12} + \alpha^{13}x + x^2 + \alpha^{13}x^3 + \alpha^{9}x^4\\ \alpha^{11} + \alpha^{4}x + \alpha^{2}x^2 + \alpha^{10}x^3\end{bmatrix}.
\end{align}
We have reached a polynomial of degree at most 3, and as

: \begin{bmatrix}-\left(\alpha^{4} + \alpha^{10}x\right) & \alpha^{12} + \alpha^{5}x + \alpha^{7}x^2\\ \alpha^{3} + \alpha^{10}x + \alpha^{6}x^2 & -\left(\alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3\right)\end{bmatrix}
\begin{bmatrix}\alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 & \alpha^{12} + \alpha^{5}x + \alpha^{7}x^2\\ \alpha^{3} + \alpha^{10}x + \alpha^{6}x^2 & \alpha^{4} + \alpha^{10}x\end{bmatrix} = \begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix},

we get

: \begin{bmatrix}-\left(\alpha^{4} + \alpha^{10}x\right) & \alpha^{12} + \alpha^{5}x + \alpha^{7}x^2\\ \alpha^{3} + \alpha^{10}x + \alpha^{6}x^2 & -\left(\alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3\right)\end{bmatrix}
\begin{bmatrix}S(x)\Gamma(x)\\ x^6\end{bmatrix} = \begin{bmatrix}\alpha^{12} + \alpha^{13}x + x^2 + \alpha^{13}x^3 + \alpha^{9}x^4\\ \alpha^{11} + \alpha^{4}x + \alpha^{2}x^2 + \alpha^{10}x^3\end{bmatrix}.

Therefore,

: S(x)\Gamma(x)\left(\alpha^{3} + \alpha^{10}x + \alpha^{6}x^2\right) - \left(\alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3\right)x^6 = \alpha^{11} + \alpha^{4}x + \alpha^{2}x^2 + \alpha^{10}x^3.
Let \Lambda(x) = \alpha^{3} + \alpha^{10}x + \alpha^{6}x^2. Don't worry that \lambda_0 \neq 1. Find by brute force a root of \Lambda. The roots are \alpha^{2} and \alpha^{10} (after finding, for example, \alpha^{2}, we can divide \Lambda by the corresponding factor \left(x - \alpha^{2}\right), and the root of the resulting linear factor can be found easily).
Let

: \begin{align}
\Xi(x) &= \Gamma(x)\Lambda(x) = \alpha^{3} + \alpha^{4}x^2 + \alpha^{2}x^3 + \alpha^{10}x^4 \\
\Omega(x) &= S(x)\Xi(x) \equiv \alpha^{11} + \alpha^{4}x + \alpha^{2}x^2 + \alpha^{10}x^3 \bmod x^6
\end{align}
Let us look for error values using the formula

: e_j = -\frac{\Omega\left(\alpha^{-i_j}\right)}{\Xi'\left(\alpha^{-i_j}\right)},

where \alpha^{-i_j} are the roots of \Xi(x). Here \Xi'(x) = \alpha^{2}x^2. We get

: \begin{align}
e_1 &= -\frac{\Omega\left(\alpha^{4}\right)}{\Xi'\left(\alpha^{4}\right)} = \frac{\alpha^{10}}{\alpha^{10}} = 1 \\
e_2 &= -\frac{\Omega\left(\alpha^{7}\right)}{\Xi'\left(\alpha^{7}\right)} = \frac{0}{\alpha^{1}} = 0 \\
e_3 &= -\frac{\Omega\left(\alpha^{2}\right)}{\Xi'\left(\alpha^{2}\right)} = \frac{\alpha^{6}}{\alpha^{6}} = 1 \\
e_4 &= -\frac{\Omega\left(\alpha^{10}\right)}{\Xi'\left(\alpha^{10}\right)} = \frac{\alpha^{7}}{\alpha^{7}} = 1
\end{align}
The fact that e_3 = e_4 = 1 should not be surprising.

The corrected code is therefore [1 1 0 1 1 1 0 0 0 0 1 0 1 0 0].
Decoding with unreadable characters with a small number of errors
Let us demonstrate the algorithm's behaviour for a case with a small number of errors. Let the received word be [1 0 0 ? 1 1 ? 0 0 0 1 0 1 0 0].
Again, replace the unreadable characters by zeros while creating the polynomial reflecting their positions, \Gamma(x) = \left(\alpha^{8}x - 1\right)\left(\alpha^{11}x - 1\right).
Compute the syndromes s_1 = \alpha^{4}, s_2 = \alpha^{8}, s_3 = \alpha^{1}, s_4 = \alpha^{1}, s_5 = \alpha^{0}, and s_6 = \alpha^{2}.
Create the syndrome polynomial

: \begin{align}
S(x) &= \alpha^{4} + \alpha^{8}x + \alpha^{1}x^2 + \alpha^{1}x^3 + \alpha^{0}x^4 + \alpha^{2}x^5, \\
S(x)\Gamma(x) &= \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{14}x^5 + \alpha^{14}x^6 + \alpha^{6}x^7.
\end{align}
Let us run the extended Euclidean algorithm:

: \begin{align}
\begin{bmatrix}S(x)\Gamma(x)\\ x^6\end{bmatrix}
&= \begin{bmatrix}\alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{14}x^5 + \alpha^{14}x^6 + \alpha^{6}x^7\\ x^6\end{bmatrix} \\
&= \begin{bmatrix}\alpha^{14} + \alpha^{6}x & 1\\ 1 & 0\end{bmatrix}
\begin{bmatrix}x^6\\ \alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{14}x^5\end{bmatrix} \\
&= \begin{bmatrix}\alpha^{14} + \alpha^{6}x & 1\\ 1 & 0\end{bmatrix}
\begin{bmatrix}\alpha^{3} + \alpha^{1}x & 1\\ 1 & 0\end{bmatrix}
\begin{bmatrix}\alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{14}x^5\\ \alpha^{7} + \alpha^{0}x\end{bmatrix} \\
&= \begin{bmatrix}\left(1 + \alpha^{2}\right) + \left(\alpha^{0} + \alpha^{9}\right)x + \alpha^{7}x^2 & \alpha^{14} + \alpha^{6}x\\ \alpha^{3} + \alpha^{1}x & 1\end{bmatrix}
\begin{bmatrix}\alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{14}x^5\\ \alpha^{7} + \alpha^{0}x\end{bmatrix}
\end{align}
We have reached a polynomial of degree at most 3, and as

: \begin{bmatrix}
-1 & \alpha^{14} + \alpha^{6}x \\
\alpha^{3} + \alpha^{1}x & -\left(\alpha^{8} + \alpha^{7}x + \alpha^{7}x^2\right)
\end{bmatrix} \begin{bmatrix}
\alpha^{8} + \alpha^{7}x + \alpha^{7}x^2 & \alpha^{14} + \alpha^{6}x \\
\alpha^{3} + \alpha^{1}x & 1
\end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
we get

: \begin{bmatrix}
-1 & \alpha^{14} + \alpha^{6}x \\
\alpha^{3} + \alpha^{1}x & -\left(\alpha^{8} + \alpha^{7}x + \alpha^{7}x^2\right)
\end{bmatrix}\begin{bmatrix}
S(x)\Gamma(x) \\ x^6
\end{bmatrix} = \begin{bmatrix}
\alpha^{4} + \alpha^{7}x + \alpha^{5}x^2 + \alpha^{3}x^3 + \alpha^{1}x^4 + \alpha^{14}x^5 \\
\alpha^{7} + \alpha^{0}x
\end{bmatrix}.
Therefore,

: S(x)\Gamma(x)\left(\alpha^{3} + \alpha^{1}x\right) - \left(\alpha^{8} + \alpha^{7}x + \alpha^{7}x^2\right)x^6 = \alpha^{7} + \alpha^{0}x.
Let \Lambda(x) = \alpha^{3} + \alpha^{1}x. Don't worry that \lambda_0 \neq 1. The root of \Lambda(x) is \alpha^{2}.
Let

: \begin{align}
\Xi(x) &= \Gamma(x)\Lambda(x) = \alpha^{3} + \alpha^{8}x + \alpha^{11}x^2 + \alpha^{5}x^3, \\
\Omega(x) &= S(x)\Xi(x) \equiv \alpha^{7} + \alpha^{0}x \bmod x^6
\end{align}
Let us look for error values using the formula e_j = -\Omega\left(\alpha^{-i_j}\right)/\Xi'\left(\alpha^{-i_j}\right), where \alpha^{-i_j} are the roots of the polynomial \Xi(x).

: \Xi'(x) = \alpha^{8} + \alpha^{5}x^2.
We get

: \begin{align}
e_1 &= -\frac{\Omega\left(\alpha^{4}\right)}{\Xi'\left(\alpha^{4}\right)} = \frac{\alpha^{3}}{\alpha^{3}} = 1 \\
e_2 &= -\frac{\Omega\left(\alpha^{7}\right)}{\Xi'\left(\alpha^{7}\right)} = \frac{0}{\alpha^{5}} = 0 \\
e_3 &= -\frac{\Omega\left(\alpha^{2}\right)}{\Xi'\left(\alpha^{2}\right)} = \frac{\alpha^{12}}{\alpha^{12}} = 1
\end{align}
The fact that e_3 = 1 should not be surprising.

The corrected code is therefore [1 1 0 1 1 1 0 0 0 0 1 0 1 0 0].