Turbo code

In information theory, turbo codes (originally in French ''Turbocodes'') are a class of high-performance forward error correction (FEC) codes developed around 1990–91, but first published in 1993. They were the first practical codes to closely approach the maximum channel capacity or Shannon limit, a theoretical maximum for the code rate at which reliable communication is still possible given a specific noise level. Turbo codes are used in 3G/4G mobile communications (e.g., in UMTS and LTE) and in (deep space) satellite communications, as well as other applications where designers seek to achieve reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise. Turbo codes compete with low-density parity-check (LDPC) codes, which provide similar performance.

The name "turbo code" arose from the feedback loop used during normal turbo code decoding, which was analogized to the exhaust feedback used for engine turbocharging. Hagenauer has argued the term turbo code is a misnomer, since there is no feedback involved in the encoding process.


History

The fundamental patent application for turbo codes was filed on 23 April 1991. The patent application lists Claude Berrou as the sole inventor of turbo codes. The patent filing resulted in several patents, including US Patent 5,446,747, which expired 29 August 2013.

The first public paper on turbo codes was "''Near Shannon Limit Error-correcting Coding and Decoding: Turbo-codes''", published in 1993 in the Proceedings of the IEEE International Communications Conference. The 1993 paper was formed from three separate submissions that were combined due to space constraints. The merger caused the paper to list three authors: Berrou, Glavieux, and Thitimajshima (from Télécom Bretagne, former ENST Bretagne, France). However, it is clear from the original patent filing that Berrou is the sole inventor of turbo codes and that the other authors of the paper contributed material other than the core concepts.

Turbo codes were so revolutionary at the time of their introduction that many experts in the field of coding did not believe the reported results. When the performance was confirmed, a small revolution in the world of coding took place that led to the investigation of many other types of iterative signal processing.

The first class of turbo code was the parallel concatenated convolutional code (PCCC). Since the introduction of the original parallel turbo codes in 1993, many other classes of turbo code have been discovered, including serial versions (serial concatenated convolutional codes) and repeat-accumulate codes. Iterative turbo decoding methods have also been applied to more conventional FEC systems, including Reed–Solomon corrected convolutional codes, although these systems are too complex for practical implementations of iterative decoders. Turbo equalization also flowed from the concept of turbo coding.

In addition to turbo codes, Berrou also invented recursive systematic convolutional (RSC) codes, which are used in the example implementation of turbo codes described in the patent. Turbo codes that use RSC codes seem to perform better than turbo codes that do not use RSC codes. Prior to turbo codes, the best constructions were serial concatenated codes based on an outer Reed–Solomon error correction code combined with an inner Viterbi-decoded short constraint length convolutional code, also known as RSV codes.

In a later paper, Berrou gave credit to the intuition of "G. Battail, J. Hagenauer and P. Hoeher, who, in the late 80s, highlighted the interest of probabilistic processing." He adds "R. Gallager and M. Tanner had already imagined coding and decoding techniques whose general principles are closely related," although the necessary calculations were impractical at that time.


An example encoder

There are many different instances of turbo codes, using different component encoders, input/output ratios, interleavers, and puncturing patterns. This example encoder implementation describes a classic turbo encoder and demonstrates the general design of parallel turbo codes.

This encoder implementation sends three sub-blocks of bits. The first sub-block is the ''m''-bit block of payload data. The second sub-block is ''n''/2 parity bits for the payload data, computed using a recursive systematic convolutional code (RSC code). The third sub-block is ''n''/2 parity bits for a known permutation of the payload data, again computed using an RSC code. Thus, two redundant but different sub-blocks of parity bits are sent with the payload. The complete block has ''m'' + ''n'' bits of data with a code rate of ''m''/(''m'' + ''n''). The permutation of the payload data is carried out by a device called an interleaver.

Hardware-wise, this turbo code encoder consists of two identical RSC coders, ''C''1 and ''C''2, as depicted in the figure, which are connected to each other using a concatenation scheme called ''parallel concatenation''. In the figure, ''M'' is a memory register. The delay line and interleaver force input bits ''d''k to appear in different sequences. At the first iteration, the input sequence ''d''k appears at both outputs of the encoder, ''x''k and ''y''1k or ''y''2k, due to the encoder's systematic nature. If the encoders ''C''1 and ''C''2 are used in ''n''1 and ''n''2 iterations, their rates are respectively equal to

:\begin{align} R_1 &= \frac{n_1 + n_2}{2n_1 + n_2}\\ R_2 &= \frac{n_1 + n_2}{n_1 + 2n_2} \end{align}
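
The parallel concatenation described above can be sketched in a few lines of code. The following is a minimal illustration only, not the encoder from the patent or from any standard: it assumes a toy memory-2 RSC component code with octal generators (7, 5) and a pseudo-random interleaver, and it omits trellis termination and puncturing.

 import random

 def rsc_encode(bits, fb_taps=(1, 1), ff_taps=(0, 1)):
     """Parity bits of a memory-2 recursive systematic convolutional code
     (feedback 1 + D + D^2, feed-forward 1 + D^2, i.e. octal (7, 5))."""
     s = [0, 0]                                              # shift-register contents (the memory M)
     parity = []
     for d in bits:
         fb = d ^ (fb_taps[0] & s[0]) ^ (fb_taps[1] & s[1])  # recursive feedback bit
         parity.append(fb ^ (ff_taps[0] & s[0]) ^ (ff_taps[1] & s[1]))
         s = [fb, s[0]]                                      # shift the register
     return parity

 def turbo_encode(payload, seed=0):
     """Parallel concatenation: systematic bits plus two RSC parity streams."""
     rng = random.Random(seed)                 # fixed seed: the decoder must know the interleaver
     perm = list(range(len(payload)))
     rng.shuffle(perm)
     interleaved = [payload[i] for i in perm]  # the interleaver permutes the payload
     y1 = rsc_encode(payload)                  # parity sub-block from encoder C1
     y2 = rsc_encode(interleaved)              # parity sub-block from encoder C2
     return payload, y1, y2                    # x_k, y_1k, y_2k

 x, y1, y2 = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0])
 print(x, y1, y2)   # 8 payload bits plus 8 + 8 parity bits: rate 1/3 before puncturing

Sent as three sub-blocks, this gives the rate-1/3 mother code described above; practical encoders puncture the parity streams to reach higher rates.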


The decoder

The decoder is built in a similar way to the above encoder. Two elementary decoders are interconnected to each other, but in series, not in parallel. The \textstyle DEC_1 decoder operates at a lower speed (i.e., \textstyle R_1); thus, it is intended for the \textstyle C_1 encoder, and \textstyle DEC_2 is for \textstyle C_2 correspondingly. \textstyle DEC_1 yields a soft decision which causes \textstyle L_1 delay. The same delay is caused by the delay line in the encoder. The \textstyle DEC_2's operation causes \textstyle L_2 delay.

An interleaver installed between the two decoders is used here to scatter error bursts coming from the \textstyle DEC_1 output. The ''DI'' block is a demultiplexing and insertion module. It works as a switch, redirecting input bits to \textstyle DEC_1 at one moment and to \textstyle DEC_2 at another. In OFF state, it feeds both \textstyle y_{1k} and \textstyle y_{2k} inputs with padding bits (zeros).

Consider a memoryless AWGN channel, and assume that at the ''k''-th iteration, the decoder receives a pair of random variables:

:\begin{align} x_k &= (2d_k - 1) + a_k\\ y_k &= 2(Y_k - 1) + b_k \end{align}

where \textstyle a_k and \textstyle b_k are independent noise components having the same variance \textstyle \sigma^2. \textstyle Y_k is the ''k''-th bit from the \textstyle y_k encoder output. Redundant information is demultiplexed and sent through ''DI'' to \textstyle DEC_1 (when \textstyle y_k = y_{1k}) and to \textstyle DEC_2 (when \textstyle y_k = y_{2k}).

\textstyle DEC_1 yields a soft decision, i.e.:

:\Lambda(d_k) = \log\frac{p(d_k = 1)}{p(d_k = 0)}

and delivers it to \textstyle DEC_2. \textstyle \Lambda(d_k) is called the ''logarithm of the likelihood ratio'' (LLR). \textstyle p(d_k = i),\, i \in \{0, 1\} is the ''a posteriori probability'' (APP) of the \textstyle d_k data bit, which shows the probability of interpreting a received \textstyle d_k bit as \textstyle i. Taking the LLR into account, \textstyle DEC_2 yields a hard decision, i.e., a decoded bit.

It is known that the Viterbi algorithm is unable to calculate the APP, thus it cannot be used in \textstyle DEC_1. Instead of that, a modified BCJR algorithm is used. For \textstyle DEC_2, the Viterbi algorithm is an appropriate one. However, the depicted structure is not an optimal one, because \textstyle DEC_1 uses only a proper fraction of the available redundant information. In order to improve the structure, a feedback loop is used (see the dotted line on the figure).
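
For this channel model, the soft decision has a simple closed form when the two bit values are a priori equally likely: the channel LLR of a BPSK symbol received over AWGN reduces to \textstyle 2x_k/\sigma^2. The sketch below is only an illustration of that relation, not part of the decoder structure described above.

 import random

 def channel_llr(x, sigma):
     """LLR log[p(d=1 | x) / p(d=0 | x)] for the model x = (2d - 1) + a with
     a ~ N(0, sigma^2) and equal priors: the ratio of the two Gaussian
     densities simplifies to 2x / sigma^2."""
     return 2.0 * x / (sigma ** 2)

 # Transmit d = 1 (mapped to +1) and observe one noisy sample.
 sigma = 0.8
 x = (2 * 1 - 1) + random.gauss(0.0, sigma)
 llr = channel_llr(x, sigma)
 print(f"received {x:+.3f}, LLR {llr:+.3f}, hard decision {1 if llr > 0 else 0}")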


Soft decision approach

The decoder front-end produces an integer for each bit in the data stream. This integer is a measure of how likely it is that the bit is a 0 or 1 and is also called a ''soft bit''. The integer could be drawn from the range [−127, 127], where:

* −127 means "certainly 0"
* −100 means "very likely 0"
* 0 means "it could be either 0 or 1"
* 100 means "very likely 1"
* 127 means "certainly 1"

This introduces a probabilistic aspect to the data stream from the front end, but it conveys more information about each bit than just 0 or 1. For example, for each bit, the front end of a traditional wireless receiver has to decide if an internal analog voltage is above or below a given threshold voltage level. For a turbo code decoder, the front end would provide an integer measure of how far the internal voltage is from the given threshold.

To decode the ''m'' + ''n''-bit block of data, the decoder front-end creates a block of likelihood measures, with one likelihood measure for each bit in the data stream. There are two parallel decoders, one for each of the ''n''/2-bit parity sub-blocks. Both decoders use the sub-block of ''m'' likelihoods for the payload data. The decoder working on the second parity sub-block knows the permutation that the coder used for this sub-block.
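
As an illustration of such a front end, the sketch below maps the distance of a sample from the decision threshold to a clipped integer in [−127, 127]; the scale factor is an arbitrary assumption, since no particular quantization is specified above.

 def soft_bit(voltage, threshold, scale=64.0):
     """Map how far an analog sample is from the decision threshold to a
     soft bit in [-127, 127]; 'scale' is an assumed front-end gain."""
     value = int(round((voltage - threshold) * scale))
     return max(-127, min(127, value))          # clip to the soft-bit range

 print(soft_bit(0.9, 0.5))    #  26: leaning towards 1
 print(soft_bit(-1.8, 0.5))   # -127: "certainly 0"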


Solving hypotheses to find bits

The key innovation of turbo codes is how they use the likelihood data to reconcile differences between the two decoders. Each of the two convolutional decoders generates a hypothesis (with derived likelihoods) for the pattern of ''m'' bits in the payload sub-block. The hypothesis bit-patterns are compared, and if they differ, the decoders exchange the derived likelihoods they have for each bit in the hypotheses. Each decoder incorporates the derived likelihood estimates from the other decoder to generate a new hypothesis for the bits in the payload. Then they compare these new hypotheses. This iterative process continues until the two decoders come up with the same hypothesis for the ''m''-bit pattern of the payload, typically in 15 to 18 cycles; a structural sketch of the exchange is given after the analogy below.

An analogy can be drawn between this process and that of solving cross-reference puzzles like crossword or sudoku. Consider a partially completed, possibly garbled crossword puzzle. Two puzzle solvers (decoders) are trying to solve it: one possessing only the "down" clues (parity bits), and the other possessing only the "across" clues. To start, both solvers guess the answers (hypotheses) to their own clues, noting down how confident they are in each letter (payload bit). Then they compare notes by exchanging answers and confidence ratings with each other, noticing where and how they differ. Based on this new knowledge, they both come up with updated answers and confidence ratings, repeating the whole process until they converge to the same solution.
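
The sketch below mimics this exchange in a deliberately simplified setting. To keep it short, each component decoder is a single-parity-check soft decoder rather than a real BCJR/MAP decoder for a convolutional code, and the interleaver is omitted (a single parity check over the whole payload is unaffected by permutation); these are assumptions for illustration only. What it does show is the turbo principle: the two decoders repeatedly trade extrinsic LLRs about the shared payload bits and combine them with the channel values for the final decision.

 import math

 def spc_extrinsic(llrs):
     """Soft decoding of one parity-check constraint.

     LLRs are log[p(1)/p(0)], so E[(-1)^d] = -tanh(L/2); the constraint says
     the product of (-1)^d over all positions equals +1, which yields the
     extrinsic value for each position from all of the others (tanh rule)."""
     e = [-math.tanh(l / 2.0) for l in llrs]
     ext = []
     for i in range(len(llrs)):
         prod = 1.0
         for j, ej in enumerate(e):
             if j != i:
                 prod *= ej
         prod = min(max(prod, -0.999999), 0.999999)    # clamp for atanh
         ext.append(-2.0 * math.atanh(prod))           # back to log[p(1)/p(0)]
     return ext

 def exchange(ch_payload, ch_parity1, ch_parity2, iterations=4):
     """Two component decoders trade extrinsic LLRs about the shared payload."""
     ext2 = [0.0] * len(ch_payload)                    # decoder 2 has said nothing yet
     for _ in range(iterations):
         # Decoder 1: channel LLRs plus decoder 2's extrinsic info, plus its own parity bit.
         ext1 = spc_extrinsic([c + e for c, e in zip(ch_payload, ext2)] + [ch_parity1])[:-1]
         # Decoder 2: channel LLRs plus decoder 1's extrinsic info, plus its own parity bit.
         ext2 = spc_extrinsic([c + e for c, e in zip(ch_payload, ext1)] + [ch_parity2])[:-1]
     total = [c + e1 + e2 for c, e1, e2 in zip(ch_payload, ext1, ext2)]
     return [1 if llr > 0 else 0 for llr in total]     # hard decisions

 # Payload 1 0 1 1 with parity 1; the second bit's channel LLR is weak and wrongly signed.
 print(exchange([2.1, 0.1, 1.7, 0.3], ch_parity1=2.5, ch_parity2=1.9))   # -> [1, 0, 1, 1]

In the example call, the second payload bit arrives with a weak LLR of the wrong sign; the exchanged extrinsic information pulls it back to the correct value.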


Performance

Turbo codes perform well due to the attractive combination of the code's random appearance on the channel together with the physically realisable decoding structure. Turbo codes are affected by an error floor.


Practical applications using turbo codes

Telecommunications:
* Turbo codes are used extensively in 3G and 4G mobile telephony standards; e.g., in HSPA, EV-DO and LTE.
* MediaFLO, the terrestrial mobile television system from Qualcomm.
* The interaction channel of satellite communication systems, such as DVB-RCS and DVB-RCS2.
* Recent NASA missions such as the Mars Reconnaissance Orbiter use turbo codes as an alternative to Reed–Solomon/Viterbi-decoded concatenated codes.
* IEEE 802.16 (WiMAX), a wireless metropolitan network standard, uses block turbo coding and convolutional turbo coding.


Bayesian formulation

From an artificial intelligence viewpoint, turbo codes can be considered as an instance of loopy belief propagation in Bayesian networks.


See also

* Convolutional code
* Viterbi algorithm
* Soft-decision decoding
* Interleaver
* BCJR algorithm
* Low-density parity-check code
* Serial concatenated convolutional codes
* Turbo equalizer
* Forward error correction


References


Further reading


Publications

* Battail, Gérard. "A conceptual framework for understanding turbo codes." ''IEEE Journal on Selected Areas in Communications'' 16.2 (1998): 245–254.
* Brejza, Matthew F., et al. "20 years of turbo coding and energy-aware design guidelines for energy-constrained wireless applications." ''IEEE Communications Surveys & Tutorials'' 18.1 (2016): 8–28.
* Garzón-Bohórquez, Ronald, Charbel Abdel Nour, and Catherine Douillard. "Improving Turbo codes for 5G with parity puncture-constrained interleavers." ''Turbo Codes and Iterative Information Processing (ISTC), 2016 9th International Symposium on''. IEEE, 2016.


External links


"Closing In On The Perfect Code"
IEEE Spectrum, March 2004
"The UMTS Turbo Code and an Efficient Decoder Implementation Suitable for Software-Defined Radios"
(''International Journal of Wireless Information Networks'') *
previewcopy

"Pushing the Limit"
a '' Science News'' feature about the development and genesis of turbo codes
International Symposium On Turbo Codes


an open source library for simulating turbo codes in matlab
"Turbo Equalization: Principles and New Results"
an ''
IEEE Transactions on Communications ''IEEE Transactions on Communications'' is a monthly peer-reviewed scientific journal published by the IEEE Communications Society that focuses on all aspects of telecommunication technology, including telephone, telegraphy, facsimile, and point-to ...
'' article about using convolutional codes jointly with channel equalization.
IT++ Home Page
The IT++ is a powerful C++ library which in particular supports turbo codes
Turbo codes publications by David MacKay

AFF3CT Home Page
(A Fast Forward Error Correction Toolbox) for high speed turbo codes simulations in software
Turbo code
by Dr. Sylvie Kerouédan and Dr. Claude Berrou (scholarpedia.org).
3GPP LTE Turbo Reference Design


(MatLab).

Categories: Error detection and correction; Capacity-approaching codes