Arithmetic coding (AC) is a form of entropy encoding used in lossless data compression. Normally, a string of characters is represented using a fixed number of bits per character, as in the ASCII code. When a string is converted to arithmetic encoding, frequently used characters will be stored with fewer bits and not-so-frequently occurring characters will be stored with more bits, resulting in fewer bits used in total. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, an arbitrary-precision fraction ''q'', where 0.0 ≤ ''q'' < 1.0. It represents the current information as a range, defined by two numbers. A recent family of entropy coders called asymmetric numeral systems allows for faster implementations thanks to directly operating on a single natural number representing the current information (J. Duda, K. Tahboub, N. J. Gadil, E. J. Delp, ''The use of asymmetric numeral systems as an accurate replacement for Huffman coding'', Picture Coding Symposium, 2015).


Implementation details and examples


Equal probabilities

In the simplest case, the probability of each symbol occurring is equal. For example, consider a set of three symbols, A, B, and C, each equally likely to occur. Encoding the symbols one by one would require 2 bits per symbol, which is wasteful: one of the bit variations is never used. That is to say, symbols A, B and C might be encoded respectively as 00, 01 and 10, with 11 unused. A more efficient solution is to represent a sequence of these three symbols as a rational number in base 3, where each digit represents a symbol. For example, the sequence "ABBCAB" could become 0.011201 in base 3, interpreted in arithmetic coding as a value in the interval [0, 1). The next step is to encode this ternary number using a fixed-point binary number of sufficient precision to recover it, such as 0.0010110010 in binary – this is only 10 bits; 2 bits are saved in comparison with naïve block encoding. This is feasible for long sequences because there are efficient, in-place algorithms for converting the base of arbitrarily precise numbers. To decode the value, knowing the original string had length 6, one can simply convert back to base 3, round to 6 digits, and recover the string.
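A minimal sketch of this equal-probability base change (in Python, using exact fractions; the function names and the rounding conventions are illustrative choices, not taken from any particular library):

    from fractions import Fraction

    ALPHABET = "ABC"   # three equally likely symbols; A=0, B=1, C=2 as ternary digits

    def encode(message):
        """Read the message as a base-3 fraction in [0, 1) and round its binary
        expansion to just enough bits to recover it."""
        base, n = len(ALPHABET), len(message)
        value = 0
        for ch in message:                 # "ABBCAB" -> 0.011201 in base 3
            value = value * base + ALPHABET.index(ch)
        bits = 1
        while 2 ** bits <= base ** n:      # smallest bit count with 2**bits > 3**n (10 for n = 6)
            bits += 1
        code = round(Fraction(value, base ** n) * 2 ** bits)
        return code, bits, n

    def decode(code, bits, n):
        base = len(ALPHABET)
        value = round(Fraction(code, 2 ** bits) * base ** n)   # back to base 3, rounded to n digits
        out = []
        for _ in range(n):
            value, d = divmod(value, base)
            out.append(ALPHABET[d])
        return "".join(reversed(out))

    code, bits, n = encode("ABBCAB")
    print(f"0.{code:0{bits}b}")        # 0.0010110010 -- 10 bits instead of 12
    print(decode(code, bits, n))       # ABBCAB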


Defining a model

In general, arithmetic coders can produce near-optimal output for any given set of symbols and probabilities. (The optimal value is −log₂ ''P'' bits for each symbol of probability ''P''; see ''Source coding theorem''.) Compression algorithms that use arithmetic coding start by determining a model of the data – basically a prediction of what patterns will be found in the symbols of the message. The more accurate this prediction is, the closer to optimal the output will be. Example: a simple, static model for describing the output of a particular monitoring instrument over time might be:
* 60% chance of symbol NEUTRAL
* 20% chance of symbol POSITIVE
* 10% chance of symbol NEGATIVE
* 10% chance of symbol END-OF-DATA. ''(The presence of this symbol means that the stream will be 'internally terminated', as is fairly common in data compression; when this symbol appears in the data stream, the decoder will know that the entire stream has been decoded.)''
Models can also handle alphabets other than the simple four-symbol set chosen for this example. More sophisticated models are also possible: ''higher-order'' modelling changes its estimation of the current probability of a symbol based on the symbols that precede it (the ''context''), so that in a model for English text, for example, the percentage chance of "u" would be much higher when it follows a "Q" or a "q". Models can even be ''adaptive'', so that they continually change their prediction of the data based on what the stream actually contains. The decoder must have the same model as the encoder.
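As a rough illustration (the probability values in the order-1 English table below are invented for the example, not measured), a static model can be as simple as one probability table, while a higher-order model is a table keyed by context:

    # A static model: one fixed probability table used at every position in the stream.
    static_model = {"NEUTRAL": 0.60, "POSITIVE": 0.20, "NEGATIVE": 0.10, "END-OF-DATA": 0.10}

    # A toy order-1 (context) model for English letters: the estimate for a symbol
    # depends on the previous character.  The numbers are illustrative only.
    order1_model = {
        "q": {"u": 0.97, "i": 0.01, "other": 0.02},   # "u" is almost certain after "q"
        "t": {"h": 0.30, "e": 0.12, "other": 0.58},
    }

    def estimate(context, symbol):
        """Toy lookup: the model's estimate for `symbol` after `context`;
        symbols not listed individually are lumped into an 'other' bucket."""
        table = order1_model.get(context, {"other": 1.0})
        return table.get(symbol, table["other"])

    print(estimate("q", "u"))   # 0.97
    print(estimate("t", "h"))   # 0.3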


Encoding and decoding: overview

In general, each step of the encoding process, except for the last, is the same; the encoder has basically just three pieces of data to consider:
* The next symbol that needs to be encoded
* The current interval (at the very start of the encoding process, the interval is set to [0, 1), but that will change)
* The probabilities the model assigns to each of the various symbols that are possible at this stage (as mentioned earlier, higher-order or adaptive models mean that these probabilities are not necessarily the same in each step).
The encoder divides the current interval into sub-intervals, each representing a fraction of the current interval proportional to the probability of that symbol in the current context. Whichever interval corresponds to the actual symbol that is next to be encoded becomes the interval used in the next step. Example: for the four-symbol model above:
* the interval for NEUTRAL would be [0, 0.6)
* the interval for POSITIVE would be [0.6, 0.8)
* the interval for NEGATIVE would be [0.8, 0.9)
* the interval for END-OF-DATA would be [0.9, 1).
When all symbols have been encoded, the resulting interval unambiguously identifies the sequence of symbols that produced it. Anyone who has the same final interval and model that is being used can reconstruct the symbol sequence that must have entered the encoder to result in that final interval. It is not necessary to transmit the final interval, however; it is only necessary to transmit ''one fraction'' that lies within that interval. In particular, it is only necessary to transmit enough digits (in whatever base) of the fraction so that all fractions that begin with those digits fall into the final interval; this will guarantee that the resulting code is a prefix code.
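The interval-narrowing step can be sketched directly (a simplified illustration using exact fractions rather than the fixed-precision integer arithmetic of real coders; the MODEL table and function names are illustrative):

    from fractions import Fraction

    # Cumulative sub-intervals of [0, 1) for the four-symbol model above.
    MODEL = {
        "NEUTRAL":     (Fraction(0),     Fraction(6, 10)),
        "POSITIVE":    (Fraction(6, 10), Fraction(8, 10)),
        "NEGATIVE":    (Fraction(8, 10), Fraction(9, 10)),
        "END-OF-DATA": (Fraction(9, 10), Fraction(1)),
    }

    def encode(symbols):
        """Narrow [0, 1) once per symbol and return the final interval."""
        low, high = Fraction(0), Fraction(1)
        for s in symbols:
            width = high - low
            sym_low, sym_high = MODEL[s]
            # The sub-interval for `s` becomes the new current interval.
            low, high = low + width * sym_low, low + width * sym_high
        return low, high

    low, high = encode(["NEUTRAL", "NEGATIVE", "END-OF-DATA"])
    print(float(low), float(high))   # 0.534 0.54 -- any fraction in [0.534, 0.54) encodes the message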


Encoding and decoding: example

Consider the process for decoding a message encoded with the given four-symbol model. The message is encoded in the fraction 0.538 (using decimal for clarity, instead of binary; also assuming that there are only as many digits as needed to decode the message). The process starts with the same interval used by the encoder, [0, 1), and the same model. The fraction 0.538 falls into the sub-interval for NEUTRAL, [0, 0.6), so the first symbol must have been NEUTRAL. Dividing [0, 0.6) in the same proportions gives [0, 0.36) for NEUTRAL, [0.36, 0.48) for POSITIVE, [0.48, 0.54) for NEGATIVE and [0.54, 0.6) for END-OF-DATA; 0.538 lies in [0.48, 0.54), so the second symbol was NEGATIVE. Dividing [0.48, 0.54) again gives [0.48, 0.516), [0.516, 0.528), [0.528, 0.534) and [0.534, 0.54); 0.538 lies in the last of these, so the third symbol was END-OF-DATA and decoding stops.

The three decimal digits of 0.538 carry about 3 × log₂(10) ≈ 9.966 bits; the same message could have been encoded in the binary fraction 0.10001001 (equivalent to 0.53515625 decimal) at a cost of only 8 bits. This 8-bit output is larger than the information content, or entropy, of the message, which is
: \sum -\log_2(p_i) = -\log_2(0.6) - \log_2(0.1) - \log_2(0.1) \approx 7.381 \text{ bits}.
But an integer number of bits must be used in the binary encoding, so an encoder for this message would use at least 8 bits, resulting in a message 8.4% larger than the entropy content. This inefficiency of at most 1 bit results in relatively less overhead as the message size grows. Moreover, the claimed symbol probabilities were (0.6, 0.2, 0.1, 0.1), but the actual frequencies in this example are (0.33, 0, 0.33, 0.33). If the intervals are readjusted for these frequencies, the entropy of the message would be 4.755 bits and the same NEUTRAL NEGATIVE END-OF-DATA message could be encoded as intervals [0, 1/3); [1/9, 2/9); [5/27, 6/27); and a binary interval of [0.00101111011, 0.00111000111). This is also an example of how statistical coding methods like arithmetic encoding can produce an output message that is larger than the input message, especially if the probability model is off.
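A matching sketch of the decoder for this example (again with exact fractions and a model that terminates internally on END-OF-DATA; illustrative, not production code):

    from fractions import Fraction

    MODEL = [   # (symbol, low, high) sub-intervals of [0, 1) for the model above
        ("NEUTRAL",     Fraction(0),      Fraction(6, 10)),
        ("POSITIVE",    Fraction(6, 10),  Fraction(8, 10)),
        ("NEGATIVE",    Fraction(8, 10),  Fraction(9, 10)),
        ("END-OF-DATA", Fraction(9, 10),  Fraction(1)),
    ]

    def decode(fraction):
        """Repeatedly find which sub-interval contains the fraction, emit that
        symbol, then rescale the fraction into that sub-interval."""
        value = Fraction(fraction)
        out = []
        while True:
            for symbol, low, high in MODEL:
                if low <= value < high:
                    out.append(symbol)
                    if symbol == "END-OF-DATA":           # internal termination
                        return out
                    value = (value - low) / (high - low)   # rescale into [0, 1)
                    break

    print(decode("0.538"))   # ['NEUTRAL', 'NEGATIVE', 'END-OF-DATA']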


Adaptive arithmetic coding

One advantage of arithmetic coding over other similar methods of data compression is the convenience of adaptation. ''Adaptation'' is the changing of the frequency (or probability) tables while processing the data. The decoded data matches the original data as long as the frequency table in decoding is updated in the same way and in the same step as in encoding. The synchronization is usually based on a combination of symbols occurring during the encoding and decoding process.
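A minimal sketch of adaptation (an order-0 adaptive frequency table; incrementing a symbol's count after it is coded is one common update rule, used here purely for illustration):

    class AdaptiveModel:
        """Order-0 adaptive frequency table.  Encoder and decoder each keep one
        of these and update it with the same symbol at the same step, so their
        probability estimates stay synchronized."""

        def __init__(self, symbols):
            self.counts = {s: 1 for s in symbols}   # start with uniform counts

        def intervals(self):
            """Current cumulative intervals, as (symbol, low, high) fractions of the total."""
            total = sum(self.counts.values())
            acc, result = 0, []
            for s, c in self.counts.items():
                result.append((s, acc / total, (acc + c) / total))
                acc += c
            return result

        def update(self, symbol):
            self.counts[symbol] += 1    # seen once more; future intervals shift accordingly

    model = AdaptiveModel(["a", "b"])
    for s in "aabaa":
        print(s, model.intervals())     # the intervals drift toward the observed statistics
        model.update(s)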


Precision and renormalization

The above explanations of arithmetic coding contain some simplification. In particular, they are written as if the encoder first calculated the fractions representing the endpoints of the interval in full, using infinite precision, and only converted the fraction to its final form at the end of encoding. Rather than try to simulate infinite precision, most arithmetic coders instead operate at a fixed limit of precision which they know the decoder will be able to match, and round the calculated fractions to their nearest equivalents at that precision. Consider, for example, a model that calls for the interval to be divided into thirds, approximated with 8-bit precision; since the precision is known, so are the binary ranges that can be used: the first third becomes [0, 85/256), or 00000000–01010100 in binary; the second becomes [85/256, 171/256), or 01010101–10101010; and the last becomes [171/256, 1), or 10101011–11111111.

A process called ''renormalization'' keeps the finite precision from becoming a limit on the total number of symbols that can be encoded. Whenever the range is reduced to the point where all values in the range share certain beginning digits, those digits are sent to the output. For however many digits of precision the computer ''can'' handle, it is now handling fewer than that, so the existing digits are shifted left, and at the right, new digits are added to expand the range as widely as possible. In the example above this happens in two of the three cases: all values in the first third share the leading bit 0 and all values in the last third share the leading bit 1, so that bit can be emitted immediately, while the middle third straddles the midpoint and no bit can be emitted yet.
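A sketch of a fixed-precision encoder with renormalization (this follows the widely used low/high register scheme with pending-bit handling for intervals that straddle the midpoint; the constants, the model format and the flush rule are illustrative choices, and a matching decoder is needed to consume the output):

    PRECISION = 16                       # width of the interval registers, in bits
    FULL, HALF, QUARTER = 1 << PRECISION, 1 << (PRECISION - 1), 1 << (PRECISION - 2)

    def encode(symbols, model, total):
        """model maps each symbol to its (cumulative_low, cumulative_high) counts out of `total`."""
        low, high, pending = 0, FULL - 1, 0
        bits = []

        def emit(bit):
            nonlocal pending
            bits.append(bit)
            bits.extend([1 - bit] * pending)   # release bits withheld while straddling 1/2
            pending = 0

        for s in symbols:
            span = high - low + 1
            cum_low, cum_high = model[s]
            high = low + span * cum_high // total - 1
            low = low + span * cum_low // total
            while True:                        # renormalization
                if high < HALF:                # both ends start with bit 0: emit it
                    emit(0)
                elif low >= HALF:              # both ends start with bit 1: emit it
                    emit(1)
                    low -= HALF
                    high -= HALF
                elif low >= QUARTER and high < 3 * QUARTER:
                    pending += 1               # interval straddles 1/2: defer the bit
                    low -= QUARTER
                    high -= QUARTER
                else:
                    break
                low, high = 2 * low, 2 * high + 1   # shift left; widen the range on the right
        pending += 1                           # flush: pin down a value inside the final interval
        emit(0 if low < QUARTER else 1)
        return bits

    # Thirds of the interval, expressed as integer counts (1 each out of 3).
    model = {"A": (0, 1), "B": (1, 2), "C": (2, 3)}
    print("".join(map(str, encode("ABBCAB", model, total=3))))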


Arithmetic coding as a generalized change of radix

Recall that in the case where the symbols had equal probabilities, arithmetic coding could be implemented by a simple change of base, or radix. In general, arithmetic (and range) coding may be interpreted as a ''generalized'' change of radix. For example, we may look at any sequence of symbols:
: \mathrm{DABDDB}
as a number in a certain base, presuming that the involved symbols form an ordered set and each symbol in the ordered set denotes a sequential integer A = 0, B = 1, C = 2, D = 3, and so on. This results in the following frequencies and cumulative frequencies:
* A: frequency 1, cumulative frequency 0
* B: frequency 2, cumulative frequency 1
* D: frequency 3, cumulative frequency 3
The ''cumulative frequency'' for an item is the sum of all frequencies preceding the item. In other words, cumulative frequency is a running total of frequencies. In a positional numeral system
the radix, or base, is numerically equal to the number of different symbols used to express the number. For example, in the decimal system the number of symbols is 10, namely 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. The radix is used to express any finite integer in polynomial form, with powers of the radix as the implied multipliers. For example, the number 457 is actually 4×10² + 5×10¹ + 7×10⁰, where base 10 is presumed but not shown explicitly. Initially, we will convert DABDDB into a base-6 numeral, because 6 is the length of the string. The string is first mapped into the digit string 301331, which then maps to an integer by the polynomial:
: 6^5 \times 3 + 6^4 \times 0 + 6^3 \times 1 + 6^2 \times 3 + 6^1 \times 3 + 6^0 \times 1 = 23671
The result 23671 has a length of 15 bits, which is not very close to the theoretical limit (the entropy
of the message), which is approximately 9 bits. To encode a message with a length closer to the theoretical limit imposed by information theory we need to slightly generalize the classic formula for changing the radix. We will compute lower and upper bounds ''L'' and ''U'' and choose a number between them. For the computation of ''L'' we multiply each term in the above expression by the product of the frequencies of all previously occurred symbols:
: \begin{aligned} L = {} & (6^5 \times 3) + {} \\ & 3 \times (6^4 \times 0) + {} \\ & (3 \times 1) \times (6^3 \times 1) + {} \\ & (3 \times 1 \times 2) \times (6^2 \times 3) + {} \\ & (3 \times 1 \times 2 \times 3) \times (6^1 \times 3) + {} \\ & (3 \times 1 \times 2 \times 3 \times 3) \times (6^0 \times 1) \\ = {} & 25002 \end{aligned}
The difference between this polynomial and the polynomial above is that each term is multiplied by the product of the frequencies of all previously occurring symbols. More generally, ''L'' may be computed as:
: L = \sum_{i=1}^{n} n^{n-i} C_i \prod_{k=1}^{i-1} f_k
where C_i are the cumulative frequencies and f_k are the frequencies of occurrences. Indexes denote the position of the symbol in a message. In the special case where all frequencies f_k are 1, this is the change-of-base formula. The upper bound ''U'' will be ''L'' plus the product of all frequencies; in this case ''U'' = ''L'' + (3 × 1 × 2 × 3 × 3 × 2) = 25002 + 108 = 25110. In general, ''U'' is given by:
: U = L + \prod_{k=1}^{n} f_k
Now we can choose any number from the interval [''L'', ''U'') = [25002, 25110) to represent the message; one convenient choice is 25100, the value in this range with the longest trail of zeroes, which can be stored more compactly as 251×10².
After this reduction, the number of bits in the encoded message (not counting a logarithmic overhead for storing the message length and the frequency table) matches the number of bits given by the entropy of the message, which for long messages is very close to optimal:
: -\left[\sum_{i=1}^{A} f_i \log_2(f_i)\right] n = nH
where A is the size of the alphabet and f_i is the relative frequency of symbol i. In other words, the efficiency of arithmetic encoding approaches the theoretical limit of H bits per symbol, as the message length approaches infinity.
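The bounds for DABDDB can be checked with a direct transcription of these formulas (exact integer arithmetic; variable names are illustrative):

    from collections import Counter

    message = "DABDDB"
    n = len(message)
    freq = Counter(message)                  # {'D': 3, 'A': 1, 'B': 2}

    # Cumulative frequency of a symbol = sum of the frequencies of all smaller symbols.
    cum, running = {}, 0
    for s in sorted(freq):
        cum[s] = running
        running += freq[s]                   # A -> 0, B -> 1, D -> 3

    L, prod = 0, 1                           # prod = product of frequencies of the symbols seen so far
    for i, s in enumerate(message):
        L += prod * n ** (n - 1 - i) * cum[s]
        prod *= freq[s]
    U = L + prod                             # prod now covers the whole message

    print(L, U)                              # 25002 25110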


Asymptotic equipartition

We can understand this intuitively. Suppose the source is ergodic; then it has the asymptotic equipartition property (AEP). By the AEP, after a long stream of n symbols, the interval (0, 1) is almost partitioned into almost equally sized intervals. Technically, for any small \epsilon > 0, for all large enough n, there exist 2^{nH(X)(1+O(\epsilon))} strings x_{1:n}, such that each string has almost equal probability Pr(x_{1:n}) = 2^{-nH(X)(1+O(\epsilon))}, and their total probability is 1-O(\epsilon). For any such string, it is arithmetically encoded by a binary string of length k, where k is the smallest k such that there exists a fraction of the form \frac{c}{2^k} in the interval for x_{1:n}. Since the interval for x_{1:n} has size 2^{-nH(X)(1+O(\epsilon))}, we should expect it to contain one fraction of the form \frac{c}{2^k} when k = nH(X)(1+O(\epsilon)). Thus, with high probability, x_{1:n} can be arithmetically encoded with a binary string of length nH(X)(1 + O(\epsilon)).


Connections with other compression methods


Huffman coding

Because arithmetic coding doesn't compress one datum at a time, it can get arbitrarily close to entropy when compressing IID strings. By contrast, using the extension of
Huffman coding (to strings) does not reach entropy unless all probabilities of alphabet symbols are powers of two, in which case both Huffman and arithmetic coding achieve entropy. When naively Huffman coding binary strings, no compression is possible, even if entropy is low (e.g. the symbol set {0, 1} with probabilities {0.95, 0.05}): Huffman encoding assigns 1 bit to each value, resulting in a code of the same length as the input. By contrast, arithmetic coding compresses such bits well, approaching the optimal compression ratio of
: 1 - [-0.95 \log_2(0.95) - 0.05 \log_2(0.05)] \approx 71.4\%.
One simple way to address Huffman coding's suboptimality is to concatenate symbols ("blocking") to form a new alphabet in which each new symbol represents a sequence of original symbols – in this case bits – from the original alphabet. In the above example, grouping sequences of three symbols before encoding would produce new "super-symbols" with the following frequencies:
* 000: 85.7%
* 001, 010, 100: 4.5% each
* 011, 101, 110: 0.24% each
* 111: 0.0125%
With this grouping, Huffman coding averages 1.3 bits for every three symbols, or 0.433 bits per symbol, compared with one bit per symbol in the original encoding, i.e., 56.7\% compression. Allowing arbitrarily large sequences gets arbitrarily close to entropy – just like arithmetic coding – but requires huge codes to do so, so it is not as practical as arithmetic coding for this purpose. An alternative is encoding run lengths via Huffman-based Golomb-Rice codes. Such an approach allows simpler and faster encoding/decoding than arithmetic coding or even Huffman coding, since the latter requires table lookups. In the example, a Golomb-Rice code with a four-bit remainder achieves a compression ratio of 71.1\%, far closer to the optimum than using three-bit blocks. Golomb-Rice codes only apply to Bernoulli inputs such as the one in this example, however, so they are not a substitute for blocking in all cases.
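The figures quoted above can be reproduced with a short calculation (a verification sketch that builds the Huffman code over the eight three-bit blocks with a heap and measures code lengths only; it is not an encoder):

    import heapq
    from itertools import count, product
    from math import log2

    p = {"0": 0.95, "1": 0.05}
    entropy = -sum(q * log2(q) for q in p.values())           # ~0.286 bits per input bit

    # Probabilities of the eight "super-symbols" made of three input bits.
    blocks = {"".join(b): p[b[0]] * p[b[1]] * p[b[2]] for b in product("01", repeat=3)}

    # Build a Huffman code over the blocks; track only the code lengths.
    tie = count()
    heap = [(q, next(tie), [sym]) for sym, q in blocks.items()]
    heapq.heapify(heap)
    lengths = dict.fromkeys(blocks, 0)
    while len(heap) > 1:
        q1, _, m1 = heapq.heappop(heap)
        q2, _, m2 = heapq.heappop(heap)
        for sym in m1 + m2:
            lengths[sym] += 1                                  # each merge adds one bit to its members' codes
        heapq.heappush(heap, (q1 + q2, next(tie), m1 + m2))

    avg_per_block = sum(blocks[s] * lengths[s] for s in blocks)
    print(f"entropy {entropy:.3f} bits/bit, blocked Huffman {avg_per_block / 3:.3f} bits/bit")
    # prints roughly 0.286 vs 0.433, matching the figures in the text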


History and patents

Basic algorithms for arithmetic coding were developed independently by Jorma J. Rissanen, at IBM Research, and by Richard C. Pasco, a Ph.D. student at
Stanford University; both were published in May 1976. Pasco cites a pre-publication draft of Rissanen's article and comments on the relationship between their works. Less than a year after publication, IBM filed for a US
patent
on Rissanen's work. Pasco's work was not patented. A variety of specific techniques for arithmetic coding have historically been covered by US patents, although various well-known methods have since passed into the public domain as the patents have expired. Techniques covered by patents may be essential for implementing the algorithms for arithmetic coding that are specified in some formal international standards. When this is the case, such patents are generally available for licensing under what is called "reasonable and non-discriminatory" (RAND) licensing terms (at least as a matter of standards-committee policy). In some well-known instances (including some involving IBM patents that have since expired), such licenses were available for free, and in other instances, licensing fees have been required. The availability of licenses under RAND terms does not necessarily satisfy everyone who might want to use the technology, as what may seem "reasonable" for a company preparing a proprietary commercial software product may seem much less reasonable for a
free software
or
open source
project. At least one significant compression software program,
bzip2
, deliberately discontinued the use of arithmetic coding in favor of Huffman coding due to the perceived patent situation at the time. Also, encoders and decoders of the
JPEG
file format, which has options for both Huffman encoding and arithmetic coding, typically only support the Huffman encoding option, originally because of patent concerns; the result is that nearly all JPEG images in use today use Huffman encoding, although JPEG's arithmetic coding patents have expired due to the age of the JPEG standard (the design of which was approximately completed by 1990). JPEG XL, as well as archivers like PackJPG, Brunsli and Lepton, can losslessly convert Huffman-encoded files to ones with arithmetic coding (or asymmetric numeral systems in the case of JPEG XL), showing size savings of up to 25%.

The JPEG image compression format's arithmetic coding algorithm is based on the following cited patents (since expired).
* – (IBM) Filed 4 February 1986, granted 24 March 1987 – Kottappuram M. A. Mohiuddin, Jorma Johannes Rissanen – Multiplication-free multi-alphabet arithmetic code
* – (IBM) Filed 18 November 1988, granted 27 February 1990 – Glen George Langdon, Joan L. Mitchell, William B. Pennebaker, Jorma Johannes Rissanen – Arithmetic coding encoder and decoder system
* – (IBM) Filed 20 July 1988, granted 19 June 1990 – William B. Pennebaker, Joan L. Mitchell – Probability adaptation for arithmetic coders
* JP Patent 1021672 – (Mitsubishi) Filed 21 January 1989, granted 10 August 1990 – Toshihiro Kimura, Shigenori Kino, Fumitaka Ono, Masayuki Yoshida – Coding system
* JP Patent 2-46275 – (Mitsubishi) Filed 26 February 1990, granted 5 November 1991 – Fumitaka Ono, Tomohiro Kimura, Masayuki Yoshida, Shigenori Kino – Coding apparatus and coding method

Other patents (mostly also expired) related to arithmetic coding include the following.
* – (IBM) Filed 4 March 1977, granted 24 October 1978 – Glen George Langdon, Jorma Johannes Rissanen – Method and means for arithmetic string coding
* – (IBM) Filed 28 November 1979, granted 25 August 1981 – Glen George Langdon, Jorma Johannes Rissanen – Method and means for arithmetic coding utilizing a reduced number of operations
* – (IBM) Filed 30 March 1981, granted 21 August 1984 – Glen George Langdon, Jorma Johannes Rissanen – High-speed arithmetic compression coding using concurrent value updating
* – (IBM) Filed 15 September 1986, granted 2 January 1990 – Joan L. Mitchell, William B. Pennebaker – Arithmetic coding data compression/de-compression by selectively employed, diverse arithmetic coding encoders and decoders
* JP Patent 11782787 – (NEC) Filed 13 May 1987, granted 18 November 1988 – Michio Shimada – Data compressing arithmetic encoding device
* JP Patent 15015487 – (KDDI) Filed 18 June 1987, granted 22 December 1988 – Shuichi Matsumoto, Masahiro Saito – System for preventing carrying propagation in arithmetic coding
* – (IBM) Filed 3 May 1988, granted 12 June 1990 – William B. Pennebaker, Joan L. Mitchell – Probability adaptation for arithmetic coders
* – (IBM) Filed 19 June 1989, granted 29 January 1991 – Dan S. Chevion, Ehud D. Karnin, Eugeniusz Walach – Data string compression using arithmetic encoding with simplified probability subinterval estimation
* – (IBM) Filed 5 January 1990, granted 24 March 1992 – William B. Pennebaker, Joan L. Mitchell – Probability adaptation for arithmetic coders
* – (Ricoh) Filed 17 August 1992, granted 21 December 1993 – James D. Allen – Method and apparatus for entropy coding

Note: This list is not exhaustive. The Dirac codec uses arithmetic coding and is not patent pending. Patents on arithmetic coding may exist in other jurisdictions; see software patents for a discussion of the patentability of software around the world.


Benchmarks and other technical characteristics

Every programmatic implementation of arithmetic encoding has a different compression ratio and performance. While compression ratios vary only a little (usually under 1%), code execution time can vary by a factor of 10. (For instance, one published comparison discusses versions of arithmetic coding based on real-number ranges, integer approximations to those ranges, and an even more restricted type of approximation called binary quasi-arithmetic coding; it states that the difference between real and integer versions is negligible, proves that the compression loss for the quasi-arithmetic method can be made arbitrarily small, and bounds the compression loss incurred by one of its approximations at less than 0.06%.) Choosing the right encoder from a list of publicly available encoders is not a simple task, because performance and compression ratio also depend on the type of data, particularly on the size of the alphabet (number of different symbols). One particular encoder may perform better for small alphabets while another performs better for large alphabets. Most encoders have limitations on the size of the alphabet, and many of them are specialized for alphabets of exactly two symbols (0 and 1).


See also

* Asymmetric numeral systems
* Context-adaptive binary arithmetic coding (CABAC)
* Data compression
* Entropy encoding
* Huffman coding
* Range encoding
* Run-length encoding


Notes


References

* Rodionov Anatoly, Volkov Sergey (2010) "p-adic arithmetic coding", Contemporary Mathematics, Volume 508.
* Rodionov Anatoly, Volkov Sergey (2007) "P-adic arithmetic coding".


External links

* Newsgroup posting with a short worked example of arithmetic encoding (integer-only).
* An article explaining both range coding and arithmetic coding, with code samples for three different arithmetic encoders and a performance comparison.
* Introduction to Arithmetic Coding. 60 pages.
* Eric Bodden, Malte Clasen and Joachim Kneis: Arithmetic Coding revealed. Technical Report 2007-5, Sable Research Group, McGill University.
* by Mark Nelson.
* by Mark Nelson (2014).
* Fast implementation of range coding and rANS, by James K. Bonfield.