Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information. Lossless compression is possible because most real-world data exhibits statistical redundancy. By contrast, lossy compression permits reconstruction only of an approximation of the original data, though usually with greatly improved compression rates (and therefore reduced media sizes).
By operation of the pigeonhole principle, no lossless compression algorithm can shrink the size of all possible data: some data will get longer by at least one symbol or bit. Compression algorithms are usually effective for human- and machine-readable documents and cannot shrink the size of random data that contain no redundancy. Different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Lossless data compression is used in many applications. For example, it is used in the
ZIP file format and in the
GNU tool
gzip. It is also often used as a component within lossy data compression technologies (e.g. lossless
mid/side joint stereo preprocessing by
MP3 encoders and other lossy audio encoders).
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data would be unfavourable. Common examples are executable programs, text documents, and source code. Some image file formats, like
PNG or
GIF, use only lossless compression, while others like
TIFF and
MNG may use either lossless or lossy methods.
Lossless audio formats are most often used for archiving or production purposes, while smaller
lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Techniques
Most lossless compression programs do two things in sequence: the first step generates a ''statistical model'' for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (i.e. frequently encountered) data will produce shorter output than "improbable" data.
The primary encoding algorithms used to produce bit sequences are
Huffman coding (also used by the deflate algorithm) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
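A minimal Huffman-code construction can be sketched in Python as follows; the example string and the use of the standard heapq module are choices made for this illustration, and production encoders typically also emit a canonical code so the table can be stored compactly.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code: frequent symbols get shorter bit strings."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate case: one distinct symbol
        return {sym: "0" for sym in freq}
    # Each heap entry: (frequency, tie-breaker, {symbol: bitstring-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}   # prefix 0 for one subtree
        merged.update({s: "1" + c for s, c in c2.items()})  # prefix 1 for the other
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_code("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
print(codes, len(encoded), "bits")
```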
There are two primary ways of constructing statistical models: in a ''static'' model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. ''Adaptive'' models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
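A toy sketch of the adaptive idea, assuming a simple order-0 model that only counts symbols (the alphabet, input string, and add-one initialisation are invented for this example): both encoder and decoder start from the same trivial counts and update them identically after every symbol, so the model never has to be stored.

```python
def adaptive_probabilities(data, alphabet):
    """Yield the model's probability for each symbol just before it is coded.
    Encoder and decoder update the same counts, so they stay in sync."""
    counts = {s: 1 for s in alphabet}       # trivial initial model
    total = len(alphabet)
    for sym in data:
        yield sym, counts[sym] / total      # an entropy coder would consume this
        counts[sym] += 1                    # learn from the symbol just seen
        total += 1

for sym, p in adaptive_probabilities("aababa", "ab"):
    print(sym, round(p, 3))                 # probabilities adapt as 'a' dominates
```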
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (''general-purpose'' meaning that they can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form for which they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for
indexed images.
Multimedia
These techniques take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones.
Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.
This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes.
For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
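A minimal sketch of the left-neighbour differencing in Python (the row of sample values is invented; real formats such as PNG additionally choose among several predictors per scanline):

```python
def delta_encode(row):
    """Replace every value but the first with the difference to its left neighbour."""
    return [row[0]] + [cur - prev for prev, cur in zip(row, row[1:])]

def delta_decode(deltas):
    """Invert the transform with running sums."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

row = [100, 101, 103, 103, 102, 104]    # slowly varying tones
deltas = delta_encode(row)              # [100, 1, 2, 0, -1, 2] -- small, peaked values
assert delta_decode(deltas) == row
```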
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called the discrete wavelet transform.
JPEG2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. So the values are increased, increasing file size, but the distribution of values could be more peaked.
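One level of this pair-wise sum-and-difference step can be sketched as a reversible integer transform (a Haar-like "S-transform"; the rounding convention below is one simple choice and is not the 5/3 filter JPEG 2000 uses):

```python
def haar_step(values):
    """One level: keep rounded pair averages for the next level plus the differences.
    Assumes an even number of samples, purely for brevity."""
    approx, detail = [], []
    for a, b in zip(values[0::2], values[1::2]):
        approx.append((a + b) >> 1)     # integer average, continued at lower resolution
        detail.append(a - b)            # difference, usually small and peaked around 0
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for s, d in zip(approx, detail):
        a = s + ((d + 1) >> 1)          # undo the integer rounding exactly
        out += [a, a - d]
    return out

vals = [100, 102, 104, 103, 99, 98]
approx, detail = haar_step(vals)
assert haar_inverse(approx, detail) == vals
```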
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.
Historical legal issues
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the
United States
and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of
LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the
Graphics Interchange Format
(GIF) for compressing still image files in favor of
Portable Network Graphics
(PNG), which combines the
LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003.
Many of the lossless compression techniques used for text also work reasonably well for
indexed images, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data, essentially using
autoregressive models to predict the "next" value and encoding the (possibly small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the ''error'') tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
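This can be sketched with a very simple predictor, linear extrapolation from the two previous samples (the sample values are invented, and real codecs such as FLAC fit higher-order predictors per block):

```python
def residuals_order2(samples):
    """Predict each sample as 2*s[n-1] - s[n-2] and store only the prediction error."""
    res = []
    for n, s in enumerate(samples):
        pred = 0 if n < 2 else 2 * samples[n - 1] - samples[n - 2]
        res.append(s - pred)
    return res

def reconstruct(res):
    """Rebuild the samples by applying the same predictor to the output so far."""
    out = []
    for n, r in enumerate(res):
        pred = 0 if n < 2 else 2 * out[n - 1] - out[n - 2]
        out.append(pred + r)
    return out

samples = [0, 4, 7, 9, 10, 10, 9, 7]    # a smooth, wave-like fragment
res = residuals_order2(samples)          # [0, 4, -1, -1, -1, -1, -1, -1]: cheap to entropy-code
assert reconstruct(res) == samples
```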
It is sometimes beneficial to compress only the differences between two versions of a file (or, in
video compression, of successive images within a sequence). This is called
delta encoding (from the Greek letter Δ, which in mathematics denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
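A byte-wise sketch for two equal-length versions of a file (an intentional simplification; practical delta tools such as xdelta emit copy/insert instructions so insertions and deletions are handled too):

```python
def delta(old, new):
    """Store only the positions and bytes where the new version differs."""
    assert len(old) == len(new)              # simplification for this sketch
    return [(i, b) for i, (a, b) in enumerate(zip(old, new)) if a != b]

def patch(old, changes):
    out = bytearray(old)
    for i, b in changes:
        out[i] = b
    return bytes(out)

v1 = b"the quick brown fox"
v2 = b"the quick brown fax"
changes = delta(v1, v2)                      # [(17, 97)]: a single changed byte
assert patch(v1, changes) == v2
```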
Methods
No lossless compression algorithm can efficiently compress all possible data. For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Some of the most common lossless compression algorithms are listed below.
General purpose
* ANS – Entropy encoding, used by LZFSE and Zstandard
* Arithmetic coding – Entropy encoding
* Burrows–Wheeler transform – Reversible transform for making textual data more compressible, used by bzip2
* Huffman coding – Entropy encoding, pairs well with other algorithms
* Lempel-Ziv compression (LZ77 and LZ78) – Dictionary-based algorithm that forms the basis for many other algorithms
** Deflate – Combines LZ77 compression with Huffman coding, used by ZIP, gzip, and PNG images
** Lempel–Ziv–Markov chain algorithm (LZMA) – Very high compression ratio, used by 7zip and xz
** Lempel–Ziv–Storer–Szymanski (LZSS) – Used by WinRAR in tandem with Huffman coding
** Lempel–Ziv–Welch (LZW) – Used by GIF images and Unix's compress utility
* Prediction by partial matching (PPM) – Optimized for compressing plain text
* Run-length encoding (RLE) – Simple scheme that provides good compression of data containing many runs of the same value (see the sketch below)
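A minimal run-length encoding sketch in Python (the tuple output format is purely illustrative; real formats pack counts and values into bytes):

```python
from itertools import groupby

def rle_encode(data):
    """Store each run as (value, run length) instead of repeating the value."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(data)]

def rle_decode(pairs):
    return "".join(value * count for value, count in pairs)

text = "WWWWWWBWWWWWWBBB"
pairs = rle_encode(text)            # [('W', 6), ('B', 1), ('W', 6), ('B', 3)]
assert rle_decode(pairs) == text
```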
Audio
* Adaptive Transform Acoustic Coding (ATRAC)
* Apple Lossless (ALAC – Apple Lossless Audio Codec)
* Audio Lossless Coding (also known as MPEG-4 ALS)
* Direct Stream Transfer (DST)
* Dolby TrueHD
* DTS-HD Master Audio
* Free Lossless Audio Codec (FLAC)
* Meridian Lossless Packing (MLP)
* Monkey's Audio (Monkey's Audio APE)
* MPEG-4 SLS (also known as HD-AAC)
* OptimFROG
* Original Sound Quality (OSQ)
* RealPlayer (RealAudio Lossless)
* Shorten (SHN)
* TTA (True Audio Lossless)
* WavPack (WavPack lossless)
* WMA Lossless (Windows Media Lossless)
Raster graphics
* Lossless only encoding
** BMP
** PNG – Portable Network Graphics
** GIF – Graphics Interchange Format
* Lossy and Lossless encoding options
** AVIF – AV1 Image File Format
** FLIF – Free Lossless Image Format
** HEIF – High Efficiency Image File Format, using HEVC
** ILBM – (RLE compression of Amiga IFF images)
** JBIG2 – compression of B&W images
** JPEG 2000 – (via Le Gall–Tabatabai 5/3 reversible integer wavelet transform)
** JPEG-LS
** JPEG XL
** JPEG XR – formerly ''WMPhoto'' and ''HD Photo''
** LDCT – Discrete Cosine Transform
** PCX – PiCture eXchange
** QOI – Quite OK Image Format
** TGA – Truevision TGA
** TIFF – Tag Image File Format
** WebP
3D Graphics
* OpenCTM – Lossless compression of 3D triangle meshes
Video
See
list of lossless video codecs
Cryptography
Cryptosystems often compress data (the "plaintext") ''before'' encryption for added security. When properly implemented, compression greatly increases the
unicity distance by removing patterns that might facilitate
cryptanalysis.
However, many ordinary lossless compression algorithms produce headers, wrappers, tables, or other predictable output that might instead make cryptanalysis easier. Thus, cryptosystems must utilize compression algorithms whose output does not contain these predictable patterns.
Genetics and genomics
Genetics compression algorithms (not to be confused with
genetic algorithms) are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and specific algorithms adapted to genetic data. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for
HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression much faster than leading general-purpose compression utilities.
Genomic sequence compression algorithms, also known as DNA sequence compressors, exploit the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo.
For
eukaryotes, XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical.
Executables
Self-extracting executables contain a compressed application and a decompressor. When executed, the decompressor transparently decompresses and runs the original application. This is especially often used in
demo coding, where competitions are held for demos with strict size limits, as small as 1
kilobyte.
This type of compression is not strictly limited to binary executables, but can also be applied to scripts, such as
JavaScript.
Benchmarks
Lossless compression algorithms and their implementations are routinely tested in head-to-head
benchmarks. There are a number of better-known compression benchmarks. Some benchmarks cover only the
data compression ratio, so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of
context-mixing compression software.
Matt Mahoney, in his February 2010 edition of the free booklet ''Data Compression Explained'', additionally lists the following:
* The Calgary Corpus dating back to 1987 is no longer widely used due to its small size. The associated Calgary Compression Challenge was created and maintained from May 21, 1996, through May 21, 2016, by Leonid A. Broukhis.
* The Large Text Compression Benchmark and the similar Hutter Prize both use a trimmed Wikipedia XML UTF-8 data set.
* The Generic Compression Benchmark, maintained by Matt Mahoney, tests compression of data generated by random
Turing machines.
* Sami Runsas (the author of NanoZip) maintained Compression Ratings, a benchmark similar to the Maximum Compression multiple file test, but with minimum speed requirements. It offered a calculator that allowed the user to weight the importance of speed and compression ratio. The top programs were fairly different due to the speed requirement. In January 2010, the top program was NanoZip followed by
FreeArc,
CCM,
flashzip, and
7-Zip.
* The Monster of Compression benchmark by Nania Francesco Antonio tested compression on 1 GB of public data with a 40-minute time limit. In December 2009, the top ranked archiver was NanoZip 0.07a and the top ranked single file compressor was
ccmx 1.30c.
The Compression Ratings website published a chart summary of the "frontier" in compression ratio and time.
The Compression Analysis Tool is a Windows application that enables end users to benchmark the performance characteristics of streaming implementations of LZF4, Deflate, ZLIB, GZIP, BZIP2 and LZMA using their own data. It produces measurements and charts with which users can compare the compression speed, decompression speed and compression ratio of the different compression methods and to examine how the compression level, buffer size and flushing operations affect the results.
Limitations
Lossless data compression algorithms cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument called the
pigeonhole principle, as follows:
* Assume that each file is represented as a string of bits of some arbitrary length.
* Suppose that there is a compression algorithm that transforms every file into an output file that is no longer than the original file, and that at least one file will be compressed into an output file that is shorter than the original file.
* Let ''M'' be the least number such that there is a file ''F'' with length ''M'' bits that compresses to something shorter. Let ''N'' be the length (in bits) of the compressed version of ''F''.
* Because ''N''<''M'', every file of length ''N'' keeps its size during compression. There are 2^''N'' such files possible. Together with ''F'', this makes 2^''N'' + 1 files that all compress into one of the 2^''N'' files of length ''N''.
* But 2^''N'' is smaller than 2^''N'' + 1, so by the pigeonhole principle there must be some file of length ''N'' that is simultaneously the output of the compression function on two different inputs. That file cannot be decompressed reliably (which of the two originals should that yield?), which contradicts the assumption that the algorithm was lossless.
* We must therefore conclude that our original hypothesis (that the compression function makes no file longer) is necessarily untrue.
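The counting at the heart of this argument can be checked in a couple of lines of Python (the choice of ''N'' = 8 is arbitrary):

```python
# For any N there are 2**N files of length N, but only 2**N - 1 bit strings that
# are strictly shorter (lengths 0..N-1), so no lossless (injective) compressor
# can map every length-N file to a shorter one.
N = 8
files_of_length_N = 2 ** N
strictly_shorter_strings = sum(2 ** k for k in range(N))   # equals 2**N - 1
print(files_of_length_N, strictly_shorter_strings)          # 256 255
```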
Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example,
deflate compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
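This bound can be probed empirically with Python's zlib module; a quick sketch (the overhead observed depends on how the stream is framed, and the exact figure is not guaranteed by this snippet):

```python
import os
import zlib

data = os.urandom(65_535)                         # incompressible, high-entropy input
raw_deflate = zlib.compressobj(wbits=-15)          # negative wbits: raw deflate, no zlib wrapper
out = raw_deflate.compress(data) + raw_deflate.flush()
print(len(data), len(out), len(out) - len(data))   # growth stays within a few bytes
```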
In fact, if we consider files of length N, if all files were equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be ''greater'' than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a ''subset'' of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled
redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of
random
data cannot be consistently compressed by any conceivable lossless data compression algorithm; indeed, this result is used to ''define'' the concept of randomness in
Kolmogorov complexity.
It is provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number ''N'' of random bits can always be compressed to ''N'' − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 1.
On the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it is possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant ''pi'', which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).
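The counting behind that closing claim is short (a sketch that sets aside the decompressor-size accounting, writing $n$ for the length in bits): the number of bit strings that are more than one byte, i.e. at least 9 bits, shorter than $n$ is at most

$$\sum_{k=0}^{n-9} 2^k = 2^{n-8} - 1 < \frac{2^n}{256},$$

so fewer than 1 in 256 of the $2^n$ files of length $n$ can be mapped one-to-one onto such strings.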
Mathematical background
Abstractly, a compression algorithm can be viewed as a function on sequences (normally of octets). Compression is successful if the resulting sequence is shorter than the original sequence (and the instructions for the decompression map). For a compression algorithm to be lossless, the compression map must form an injection from "plain" to "compressed" bit sequences. The pigeonhole principle prohibits a bijection between the collection of sequences of length ''N'' and any subset of the collection of sequences of length ''N''−1. Therefore, it is not possible to produce a lossless algorithm that reduces the size of every possible input sequence.
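A compact way to restate this paragraph (the notation is chosen here for illustration): a lossless compressor is an injective map $C : \{0,1\}^* \to \{0,1\}^*$, and for every $N$

$$\left|\{0,1\}^N\right| = 2^N > 2^N - 1 = \sum_{k=0}^{N-1} 2^k = \left|\{\, s : |s| < N \,\}\right|,$$

so $C$ cannot send every sequence of length $N$ to a strictly shorter one.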
Points of application in real compression theory
Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly, include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing if its output is smaller than its input. Sometimes, detection is made by heuristics; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is quoting input, or uncompressible parts of the input in the output, minimizing the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim.
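The "quoting" fallback can be sketched with Python's zlib module (the one-byte flag and the framing here are invented for illustration; the actual ZIP 'Stored' method is defined by the ZIP specification, not by this code):

```python
import zlib

def pack(block: bytes) -> bytes:
    """Keep whichever representation is smaller: deflate output or the raw bytes.
    A one-byte flag tells the decoder which branch was taken."""
    compressed = zlib.compress(block, level=9)
    if len(compressed) < len(block):
        return b"\x01" + compressed          # 0x01: deflate-compressed payload
    return b"\x00" + block                   # 0x00: stored verbatim ("quoted") payload

def unpack(packed: bytes) -> bytes:
    flag, payload = packed[0], packed[1:]
    return zlib.decompress(payload) if flag == 1 else payload

text = b"ababababababababab" * 50             # redundant: the compressed branch wins
noise = bytes(range(256))                      # little redundancy for zlib to exploit
for block in (text, noise):
    assert unpack(pack(block)) == block
```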
The Million Random Digit Challenge
Mark Nelson, in response to claims of "magic" compression algorithms appearing in comp.compression, constructed a 415,241-byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute it without error.
A similar challenge, with $5,000 as reward, was issued by Mike Goldman.
See also
* Comparison of file archivers
* Data compression
* David A. Huffman
* Entropy (information theory)
* Grammar-based code
* Hutter Prize
* Information theory
* Kolmogorov complexity
* List of codecs
* Lossless Transform Audio Compression (LTAC)
* Lossy compression
* Normal number
* Universal code (data compression)
References
Further reading
External links
* Overview of US patent #7,096,360, "Frequency-Time Based Data Compression Method", supporting the compression, encryption, decompression, and decryption and persistence of many binary digits through frequencies where each frequency represents many bits.