Image compression

Image compression is a type of data compression applied to digital images, to reduce their cost for storage or transmission. Algorithms may take advantage of visual perception and the statistical properties of image data to provide superior results compared with generic data compression methods which are used for other digital data.


Lossy and lossless image compression

Image compression may be lossy or lossless. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, or comics. Lossy compression methods, especially when used at low bit rates, introduce compression artifacts. Lossy methods are especially suitable for natural images such as photographs in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. Lossy compression that produces negligible differences may be called visually lossless.

Methods for lossy compression:
* Transform coding – the most commonly used method (a simplified sketch follows this list).
** Discrete cosine transform (DCT) – the most widely used form of lossy compression. It is a type of Fourier-related transform, and was originally developed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974. The DCT is sometimes referred to as "DCT-II" in the context of a family of discrete cosine transforms. It is generally the most efficient form of image compression.
*** The DCT is used in JPEG, the most popular lossy format, and the more recent HEIF.
** The more recently developed wavelet transform is also used extensively, followed by quantization and entropy coding.
* Color quantization – reduces the color space to a few "representative" colors in the image. The selected colors are specified in the color palette in the header of the compressed image, and each pixel just references the index of a color in the palette. This method can be combined with dithering to avoid posterization.
** Whole-image palette, typically 256 colors, used in the GIF and PNG file formats.
** Block palette, typically 2 or 4 colors for each block of 4×4 pixels, used in BTC, CCC, S2TC, and S3TC.
* Chroma subsampling – takes advantage of the fact that the human eye perceives spatial changes of brightness more sharply than those of color, by averaging or dropping some of the chrominance information in the image.
* Fractal compression – best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image.
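
As a simplified illustration of DCT-based transform coding, the Python sketch below transforms a single 8×8 block of grayscale samples, discards small coefficients, and inverts the transform. This is not the JPEG algorithm itself: real codecs quantize coefficients with a quantization table and entropy-code the result, and the block contents and threshold here are arbitrary illustrative values (SciPy's dctn/idctn are assumed available).

    import numpy as np
    from scipy.fft import dctn, idctn

    # A hypothetical 8x8 block of 8-bit grayscale samples, shifted to be zero-centered
    rng = np.random.default_rng(0)
    block = rng.integers(0, 256, (8, 8)).astype(np.float64) - 128.0

    # Forward 2-D DCT-II with orthonormal scaling (the transform family used by JPEG)
    coeffs = dctn(block, type=2, norm="ortho")

    # Crude stand-in for quantization: drop coefficients carrying little energy
    coeffs[np.abs(coeffs) < 10.0] = 0.0

    # The inverse DCT reconstructs a close approximation of the original block
    approx = idctn(coeffs, type=2, norm="ortho") + 128.0

Most of the block's energy concentrates in a few low-frequency coefficients, which is why discarding the rest changes the reconstructed pixels only slightly.
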
Methods for lossless compression:
* Run-length encoding – used as the default method in PCX and as one of the possible methods in BMP, TGA, and TIFF (a simplified sketch follows this list).
* Area image compression
* Predictive coding – used in DPCM
* Entropy encoding – the two most common entropy encoding techniques are arithmetic coding and Huffman coding
* Adaptive dictionary algorithms such as LZW – used in GIF and TIFF
* DEFLATE – used in PNG, MNG, and TIFF
* Chain codes
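
As a minimal illustration of run-length encoding, the sketch below collapses runs of identical pixel values into (value, count) pairs and expands them again. The function names and the sample scanline are illustrative assumptions; real formats such as PCX use compact byte-level run encodings rather than Python tuples.

    def rle_encode(pixels):
        """Collapse runs of identical values into (value, count) pairs."""
        runs = []
        for p in pixels:
            if runs and runs[-1][0] == p:
                runs[-1][1] += 1
            else:
                runs.append([p, 1])
        return [tuple(r) for r in runs]

    def rle_decode(runs):
        """Expand (value, count) pairs back into the original sequence."""
        out = []
        for value, count in runs:
            out.extend([value] * count)
        return out

    # A scanline with long runs compresses well: 18 samples become 3 pairs
    scanline = [255] * 10 + [0] * 5 + [255] * 3
    assert rle_encode(scanline) == [(255, 10), (0, 5), (255, 3)]
    assert rle_decode(rle_encode(scanline)) == scanline
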


Other properties

The best image quality at a given compression rate (or bit rate) is the main goal of image compression. However, there are other important properties of image compression schemes:

Scalability generally refers to a quality reduction achieved by manipulation of the bitstream or file (without decompression and re-compression). Other names for scalability are ''progressive coding'' or ''embedded bitstreams''. Despite its contrary nature, scalability may also be found in lossless codecs, usually in the form of coarse-to-fine pixel scans. Scalability is especially useful for previewing images while downloading them (e.g., in a web browser) or for providing variable-quality access to, e.g., databases. There are several types of scalability:
* Quality progressive or layer progressive: the bitstream successively refines the reconstructed image.
* Resolution progressive: first encode a lower image resolution, then encode the difference to higher resolutions.
* Component progressive: first encode the grey-scale version, then add full color.

Region of interest coding: certain parts of the image are encoded with higher quality than others. This may be combined with scalability (encode these parts first, others later).

Meta information: compressed data may contain information about the image which may be used to categorize, search, or browse images. Such information may include color and texture statistics, small preview images, and author or copyright information.

Processing power: compression algorithms require different amounts of processing power to encode and decode. Some high-compression algorithms require high processing power.

The quality of a compression method is often measured by the peak signal-to-noise ratio (PSNR). It measures the amount of noise introduced through a lossy compression of the image; however, the subjective judgment of the viewer is also regarded as an important measure, perhaps the most important measure.
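
For 8-bit images (peak sample value 255), PSNR is computed as 10 · log10(255² / MSE), where MSE is the mean squared error between the original and compressed images. A minimal Python sketch follows; the function name and arguments are illustrative assumptions.

    import numpy as np

    def psnr(original, compressed, peak=255.0):
        """Peak signal-to-noise ratio in decibels between two same-sized images."""
        diff = original.astype(np.float64) - compressed.astype(np.float64)
        mse = np.mean(diff ** 2)  # mean squared error
        if mse == 0:
            return float("inf")  # identical images: no noise introduced
        return 10.0 * np.log10(peak ** 2 / mse)
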


History

Entropy coding started in the 1940s with the introduction of Shannon–Fano coding, the basis for Huffman coding, which was developed in 1950. Transform coding dates back to the late 1960s, with the introduction of fast Fourier transform (FFT) coding in 1968 and the Hadamard transform in 1969 (William K. Pratt, Julius Kane, Harry C. Andrews: "Hadamard transform image coding", Proceedings of the IEEE 57.1 (1969), pp. 58–68).
An important development in image data compression was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed, T. Natarajan and K. R. Rao in 1973. JPEG was introduced by the Joint Photographic Experts Group (JPEG) in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format. JPEG was largely responsible for the wide proliferation of digital images and digital photos, with several billion JPEG images produced every day as of 2015.

Lempel–Ziv–Welch (LZW) is a lossless compression algorithm developed by Abraham Lempel, Jacob Ziv and Terry Welch in 1984. It is used in the GIF format, introduced in 1987. DEFLATE, a lossless compression algorithm developed by Phil Katz and specified in 1996, is used in the Portable Network Graphics (PNG) format.

The JPEG 2000 standard was developed from 1997 to 2000 by a JPEG committee chaired by Touradj Ebrahimi (later the JPEG president). In contrast to the DCT algorithm used by the original JPEG format, JPEG 2000 instead uses discrete wavelet transform (DWT) algorithms. It uses the CDF 9/7 wavelet transform (developed by Ingrid Daubechies in 1992) for its lossy compression algorithm, and the Le Gall–Tabatabai (LGT) 5/3 wavelet transform (developed by Didier Le Gall and Ali J. Tabatabai in 1988) for its lossless compression algorithm. JPEG 2000 technology, which includes the Motion JPEG 2000 extension, was selected as the video coding standard for digital cinema in 2004.



External links


* Image compression lecture from MIT OpenCourseWare
* A study about image compression, with basics and a comparison of compression methods such as JPEG 2000, JPEG and JPEG XR / HD Photo
* Data Compression Basics – includes a comparison of the PNG, JPEG and JPEG 2000 formats
* FAQ: What is the state of the art in lossless image compression? – from the news:comp.compression newsgroup
* IPRG – an open group related to image processing research resources