Context mixing is a type of data compression algorithm in which the next-symbol predictions of two or more statistical models are combined to yield a prediction that is often more accurate than any of the individual predictions. For example, one simple method (not necessarily the best) is to average the probabilities assigned by each model. The random forest is another method: it outputs the prediction that is the mode of the predictions output by individual models. Combining models is an active area of research in machine learning. The PAQ series of data compression programs use context mixing to assign probabilities to individual bits of the input.
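
As a toy illustration of these two simple combining rules (the probabilities and symbols below are invented for this sketch, not taken from any particular compressor), in Python:

    from statistics import mode

    # Three models each estimate the probability that the next bit is a 1.
    model_probs = [0.7, 0.6, 0.9]
    p_average = sum(model_probs) / len(model_probs)   # simple averaging -> about 0.733

    # Random-forest-style combination: each model names a symbol outright and
    # the most common prediction (the mode) wins.
    model_votes = ["a", "b", "a"]
    predicted_symbol = mode(model_votes)              # -> "a"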


Application to Data Compression

Suppose that we are given two conditional probabilities, P(X|A) and P(X|B), and we wish to estimate P(X|A,B), the probability of event X given both conditions A and B. There is insufficient information for probability theory to give a result. In fact, it is possible to construct scenarios in which the result could be anything at all. But intuitively, we would expect the result to be some kind of average of the two.

The problem is important for data compression. In this application, A and B are contexts, X is the event that the next bit or symbol of the data to be compressed has a particular value, and P(X|A) and P(X|B) are the probability estimates of two independent models. The compression ratio depends on how closely the estimated probability approaches the true but unknown probability of event X. It is often the case that contexts A and B have occurred often enough to accurately estimate P(X|A) and P(X|B) by counting occurrences of X in each context, but the two contexts either have not occurred together frequently, or there are insufficient computing resources (time and memory) to collect statistics for the combined case.

For example, suppose that we are compressing a text file. We wish to predict whether the next character will be a linefeed, given that the previous character was a period (context A) and that the last linefeed occurred 72 characters ago (context B). Suppose that a linefeed previously occurred after 1 of the last 5 periods (P(X|A) = 0.2) and in 5 out of the last 10 lines at column 72 (P(X|B) = 0.5). How should these predictions be combined?

Two general approaches have been used: linear and logistic mixing. Linear mixing uses a weighted average of the predictions, weighted by evidence. In this example, P(X|B) gets more weight than P(X|A) because P(X|B) is based on a greater number of tests. Older versions of PAQ used this approach. Newer versions use logistic (or neural network) mixing, first transforming the predictions into the logistic domain, log(p/(1-p)), before averaging. This effectively gives greater weight to predictions near 0 or 1, in this case P(X|A). In both cases, additional weights may be given to each of the input models and adapted to favor the models that have given the most accurate predictions in the past. All but the oldest versions of PAQ use adaptive weighting.

Most context mixing compressors predict one bit of input at a time. The output probability is simply the probability that the next bit will be a 1.
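
As a concrete illustration, the sketch below combines the two estimates from the linefeed example, P(X|A) = 0.2 (based on 5 observations) and P(X|B) = 0.5 (based on 10 observations). The evidence-proportional weights and the unweighted logistic average are simplifying assumptions for this sketch, not the exact formulas used in any PAQ version.

    import math

    predictions = [0.2, 0.5]   # P(X|A), P(X|B)
    evidence    = [5, 10]      # number of observations behind each estimate

    # Linear mixing: weighted average, with weights proportional to evidence.
    weights = [n / sum(evidence) for n in evidence]
    p_linear = sum(w * p for w, p in zip(weights, predictions))   # 0.4

    # Logistic mixing: average in the stretched (logit) domain, then squash.
    def stretch(p):
        return math.log(p / (1 - p))

    def squash(x):
        return 1 / (1 + math.exp(-x))

    p_logistic = squash(sum(stretch(p) for p in predictions) / len(predictions))
    # ~0.333: pulled toward P(X|A) = 0.2, compared with the plain average 0.35.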


Linear Mixing

We are given a set of predictions Pi(1) = n1i / ni, where ni = n0i + n1i, and n0i and n1i are the counts of 0 and 1 bits respectively for the i'th model. The probabilities are computed by weighted addition of the 0 and 1 counts:

* S0 = Σi wi n0i
* S1 = Σi wi n1i
* S = S0 + S1
* P(0) = S0 / S
* P(1) = S1 / S

The weights wi are initially equal and always sum to 1. Under the initial conditions, each model is weighted in proportion to evidence. The weights are then adjusted to favor the more accurate models. Suppose we are given that the actual bit being predicted is y (0 or 1). Then the weight adjustment is:

* ni = n0i + n1i
* error = y - P(1)
* wi ← wi + ((S n1i - S1 ni) / (S0 S1)) error

Compression can be improved by bounding ni so that the model weighting is better balanced. In PAQ6, whenever one of the bit counts is incremented, the part of the other count that exceeds 2 is halved. For example, after the sequence 000000001, the counts would go from (n0, n1) = (8, 0) to (5, 1).
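
A minimal sketch of these formulas in Python, assuming floating-point arithmetic and nonzero S0 and S1 (actual PAQ versions use bounded integer counts and fixed-point weights):

    def mix_linear(counts, weights):
        """counts: list of (n0, n1) pairs, one per model; weights: one per model.
        Returns (P(0), P(1), S0, S1)."""
        s0 = sum(w * n0 for w, (n0, n1) in zip(weights, counts))
        s1 = sum(w * n1 for w, (n0, n1) in zip(weights, counts))
        s = s0 + s1
        return s0 / s, s1 / s, s0, s1

    def update_weights(counts, weights, y):
        """Shift weight toward the models that predicted the actual bit y."""
        p0, p1, s0, s1 = mix_linear(counts, weights)
        s = s0 + s1
        error = y - p1
        for i, (n0, n1) in enumerate(counts):
            n = n0 + n1
            weights[i] += (s * n1 - s1 * n) / (s0 * s1) * error

    def update_counts(n0, n1, y):
        """PAQ6-style bounded count update: increment the observed bit's count
        and halve the part of the other count that exceeds 2, so that, e.g.,
        (8, 0) followed by a 1 bit becomes (5, 1)."""
        if y == 1:
            n1 += 1
            if n0 > 2:
                n0 = 2 + (n0 - 2) // 2
        else:
            n0 += 1
            if n1 > 2:
                n1 = 2 + (n1 - 2) // 2
        return n0, n1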


Logistic Mixing

Let Pi(1) be the prediction by the i'th model that the next bit will be a 1. Then the final prediction P(1) is calculated:

* xi = stretch(Pi(1))
* P(1) = squash(Σi wi xi)

where P(1) is the probability that the next bit will be a 1, Pi(1) is the probability estimated by the i'th model, and

* stretch(x) = ln(x / (1 - x))
* squash(x) = 1 / (1 + e^(-x)) (the inverse of stretch).

After each prediction, the model is updated by adjusting the weights to minimize coding cost:

* wi ← wi + η xi (y - P(1))

where η is the learning rate (typically 0.002 to 0.01), y is the actual bit being predicted, and (y - P(1)) is the prediction error.
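
A corresponding sketch of logistic mixing in floating point; the function names are illustrative, and PAQ's mixers operate on quantized stretched probabilities rather than Python floats:

    import math

    def stretch(p):
        return math.log(p / (1 - p))

    def squash(x):
        return 1 / (1 + math.exp(-x))

    def mix_logistic(predictions, weights):
        """Combine the per-model P(1) estimates in the logistic domain."""
        xs = [stretch(p) for p in predictions]
        p1 = squash(sum(w * x for w, x in zip(weights, xs)))
        return p1, xs

    def update_weights(weights, xs, p1, y, lr=0.005):
        """One online gradient step that reduces the coding cost of bit y."""
        error = y - p1
        for i, x in enumerate(xs):
            weights[i] += lr * x * error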


List of Context Mixing Compressors

All versions below use logistic mixing unless otherwise indicated.

* All PAQ versions (Matt Mahoney, Serge Osnach, Alexander Ratushnyak, Przemysław Skibiński, Jan Ondrus, and others). PAQAR and versions prior to PAQ7 used linear mixing; later versions use logistic mixing.
* All LPAQ versions (Matt Mahoney, Alexander Ratushnyak).
* ZPAQ (Matt Mahoney).
* WinRK 3.0.3 (Malcolm Taylor) in maximum compression PWCM mode. Version 3.0.2 was based on linear mixing.
* NanoZip (Sami Runsas) in maximum compression mode (option -cc).
* xwrt 3.2 (Przemysław Skibiński) in maximum compression mode (options -i10 through -i14), as a back end to a dictionary encoder.
* cmm1 through cmm4, M1, and M1X2 (Christopher Mattern) use a small number of contexts for high speed. M1 and M1X2 use a genetic algorithm to select two bit-masked contexts in a separate optimization pass.
* ccm (Christian Martelock).
* bit (Osman Turan).
* pimple, pimple2, tc, and px (Ilia Muraviev).
* enc (Serge Osnach) tries several methods based on prediction by partial matching (PPM) and (linear) context mixing and chooses the best one.
* fpaq2 (Nania Francesco Antonio) uses fixed-weight averaging for high speed.
* cmix (Byron Knoll) mixes many models and is currently ranked first on the Large Text Compression Benchmark and the Silesia corpus; it has surpassed the winning entry of the Hutter Prize, although it is not eligible because it uses too much memory.

