Global Motion Compensation
''Global motion compensation'' (''GMC'') is a motion compensation technique used in video compression to reduce the bitrate required to encode video. It is most commonly used in MPEG-4 ASP, such as with the DivX and Xvid codecs.

Operation

Global motion compensation describes the motion in a scene based on a single affine transform instruction. The reference frame is panned, rotated and zoomed in accordance with GMC warp points to create a prediction of how the following frame will look. Since this operation works on individual pixels (rather than blocks), it is capable of creating predictions that are not possible using block-based approaches. Each macroblock in such a frame can be compensated using global motion (no further motion information is then signalled) or, alternatively, local motion (as if GMC were off). This choice, while costing an additional bit per macroblock, can improve prediction quality and therefore reduce the residual. Because ...
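
As a rough sketch of the warping step only (not MPEG-4 ASP bitstream syntax), the following Python/NumPy snippet builds a per-pixel prediction of the next frame by applying one affine transform to the reference frame; the 2×3 matrix M is a hypothetical stand-in for motion derived from the GMC warp points.

    import numpy as np

    def affine_predict(reference, matrix):
        # Predict a frame by warping `reference` with a 2x3 affine matrix.
        # Every output pixel (x, y) is fetched from the affine-mapped source
        # location, with nearest-neighbour rounding for brevity; real codecs
        # use sub-pixel interpolation. Out-of-frame pixels are left at 0.
        h, w = reference.shape
        ys, xs = np.mgrid[0:h, 0:w]
        coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
        sx, sy = matrix @ coords                 # mapped source coordinates
        sx = np.rint(sx).astype(int)
        sy = np.rint(sy).astype(int)
        valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
        pred = np.zeros(h * w, dtype=reference.dtype)
        pred[valid] = reference[sy[valid], sx[valid]]
        return pred.reshape(h, w)

    # Hypothetical global motion: a slight zoom plus a small pan.
    M = np.array([[1.02, 0.00, -3.0],
                  [0.00, 1.02, -2.0]])
    reference = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
    prediction = affine_predict(reference, M)

Each macroblock could then either take its prediction from this globally warped frame or fall back to an ordinary local motion vector, as described above.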

Motion Compensation
Motion compensation, in computing, is an algorithmic technique used to predict a frame in a video, given the previous and/or future frames, by accounting for motion of the camera and/or objects in the video. It is employed in the encoding of video data for video compression, for example in the generation of MPEG-2 files. Motion compensation describes a picture in terms of the transformation of a reference picture to the current picture. The reference picture may be previous in time or even from the future. When images can be accurately synthesized from previously transmitted/stored images, the compression efficiency can be improved. Motion compensation is one of the two key video compression techniques used in video coding standards, along with the discrete cosine transform (DCT). Most video coding standards, such as the H.26x and MPEG formats, typically use motion-compensated DCT hybrid coding, known as block motion compensation (BMC) or motion-compensated DCT (MC DCT). Funct ...
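
A minimal sketch of the block-based variant (block motion compensation) described above, assuming grayscale NumPy frames and a hypothetical per-block motion field: each 16×16 block of the prediction is a displaced copy of a region in the reference frame.

    import numpy as np

    BLOCK = 16  # classic macroblock size for block motion compensation

    def predict_frame(reference, motion_field):
        # motion_field[by, bx] holds a (dy, dx) vector for the block whose
        # top-left corner is at (by * BLOCK, bx * BLOCK). For brevity the
        # vectors are assumed to stay inside the reference frame.
        h, w = reference.shape
        prediction = np.empty_like(reference)
        for by in range(h // BLOCK):
            for bx in range(w // BLOCK):
                dy, dx = motion_field[by, bx]
                top, left = by * BLOCK + dy, bx * BLOCK + dx
                src = reference[top:top + BLOCK, left:left + BLOCK]
                prediction[by * BLOCK:(by + 1) * BLOCK,
                           bx * BLOCK:(bx + 1) * BLOCK] = src
        return prediction

    # Zero motion everywhere simply copies the reference frame.
    ref = np.zeros((144, 176), dtype=np.uint8)
    mv = np.zeros((144 // BLOCK, 176 // BLOCK, 2), dtype=int)
    assert np.array_equal(predict_frame(ref, mv), ref)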

Residual Frame
In video compression algorithms, a residual frame is formed by subtracting the reference frame from the desired frame. This difference is known as the error or residual frame. The residual frame normally has less information entropy, due to nearby video frames having similarities, and therefore requires fewer bits to compress. An encoder will use various algorithms, such as motion estimation, to construct a frame that describes the differences. This allows a decoder to use the reference frame plus the differences to construct the desired frame.

See also
* Motion compensation, an algorithmic technique used to predict a frame in a video, given the previous and/or future frames, by accounting for motion of the camera and/or objects in the video ...
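
A minimal sketch of the encode/decode round trip, assuming 8-bit grayscale frames held in NumPy arrays: the residual is the signed difference between the desired frame and its prediction, and the decoder adds it back to recover the frame.

    import numpy as np

    def make_residual(desired, prediction):
        # Encoder side: signed difference between the frame and its prediction.
        return desired.astype(np.int16) - prediction.astype(np.int16)

    def reconstruct(prediction, residual):
        # Decoder side: prediction plus residual recovers the desired frame.
        return np.clip(prediction.astype(np.int16) + residual, 0, 255).astype(np.uint8)

    # When neighbouring frames are similar the residual is mostly near zero,
    # which is what lets it be coded with fewer bits after transform and
    # quantisation.
    desired = np.random.randint(0, 256, (144, 176), dtype=np.uint8)
    prediction = desired.copy()          # perfect prediction, for the demo only
    residual = make_residual(desired, prediction)
    assert np.array_equal(reconstruct(prediction, residual), desired)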

MPEG
The Moving Picture Experts Group (MPEG) is an alliance of working groups established jointly by ISO and IEC that sets standards for media coding, including compression coding of audio, video, graphics, and genomic data; and transmission and file formats for various applications (John Watkinson, ''The MPEG Handbook'', p. 1). Together with JPEG, MPEG is organized under ISO/IEC JTC 1/SC 29 – ''Coding of audio, picture, multimedia and hypermedia information'' (ISO/IEC Joint Technical Committee 1, Subcommittee 29). MPEG formats are used in various multimedia systems. The best-known older MPEG media formats typically use MPEG-1, MPEG-2, and MPEG-4 AVC media coding and MPEG-2 systems transport streams and program streams. Newer systems typically use the MPEG base media file format and dynamic streaming (a.k.a. MPEG-DASH).

History

MPEG was established in 1988 on the initiative of Dr. Hiroshi Yasuda (NTT) and Dr. Leonardo Chiariglione (CSELT). Chiariglione was the group's ...

Video Codecs
The following is a list of compression formats and related codecs.

Audio compression formats

Non-compression
* Linear pulse-code modulation (LPCM, generally only described as PCM) is the format for uncompressed audio in media files and it is also the standard for CD-DA; note that in computers, LPCM is usually stored in container formats such as WAV, AIFF, or AU, or as raw audio format, although not technically necessary.
** FFmpeg
* Pulse-density modulation (PDM)
** Direct Stream Digital (DSD) is standard for Super Audio CD
*** foobar2000 Super Audio CD Decoder (based on MPEG-4 DST reference decoder)
*** FFmpeg (based on dsd2pcm)
* Pulse-amplitude modulation (PAM)

Lossless compression
* Actively used
** Most popular
*** Free Lossless Audio Codec (FLAC)
**** libFLAC
**** FFmpeg
*** Apple Lossless Audio Codec (ALAC)
**** Apple QuickTime
**** libalac
**** FFmpeg
**** Apple Music
*** Monkey's Audio (APE)
**** Monkey's Audio SDK
**** FFmpeg (decoder only)
*** OptimFROG (OFR ...

I-frame
In the field of video compression, a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly around the amount of data compression. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P and B. They differ in the following characteristics:
* I‑frames are the least compressible but don't require other video frames to decode.
* P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
* B‑frames can use both previous and forward frames for data reference to get the highest amount of data compression.

Summary

Three types of ''pictures'' (or frames) are used in video compression: I, P, and B frames. An I‑frame (Intra-coded picture) is a complete image, like a JPG or BMP image file. A P‑frame (Predicted picture) holds only the changes in the image from the ...
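
As a minimal sketch (ignoring motion compensation and any real bitstream syntax), the decode dependencies of the three picture types might look like this, with the B‑frame prediction taken here as a simple average of its two references:

    import numpy as np

    def decode_i(intra_picture):
        # I-frame: a self-contained picture, decoded with no reference.
        return intra_picture

    def decode_p(reference, residual):
        # P-frame: prediction from one earlier frame plus a residual.
        return np.clip(reference.astype(np.int16) + residual, 0, 255).astype(np.uint8)

    def decode_b(past, future, residual):
        # B-frame: bi-directional prediction (a plain average here) plus a residual.
        prediction = (past.astype(np.int16) + future.astype(np.int16)) // 2
        return np.clip(prediction + residual, 0, 255).astype(np.uint8)

Because a B‑frame needs a later reference, a group of pictures is typically decoded out of display order: the I‑frame first, then the P‑frame it anchors, and finally the B‑frames between them.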

B-frame
In the field of video compression, a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly around the amount of data compression. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P and B. They differ in the following characteristics:
* I‑frames are the least compressible but don't require other video frames to decode.
* P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
* B‑frames can use both previous and forward frames for data reference to get the highest amount of data compression.

Summary

Three types of ''pictures'' (or frames) are used in video compression: I, P, and B frames. An I‑frame (Intra-coded picture) is a complete image, like a JPG. (JPEG is a commonly used method of lossy compression for digital images, particularly ...

P-frame
In the field of video compression, a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly around the amount of data compression. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P and B. They differ in the following characteristics:
* I‑frames are the least compressible but don't require other video frames to decode.
* P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
* B‑frames can use both previous and forward frames for data reference to get the highest amount of data compression.

Summary

Three types of ''pictures'' (or frames) are used in video compression: I, P, and B frames. An I‑frame (Intra-coded picture) is a complete image, like a JPG or BMP image file. A P‑frame (Predicted picture) holds only the changes in the image from the ...

Macroblock
The macroblock is a processing unit in image and video compression formats based on linear block transforms, typically the discrete cosine transform (DCT). A macroblock typically consists of 16×16 samples, and is further subdivided into transform blocks, and may be further subdivided into prediction blocks. Formats which are based on macroblocks include JPEG, where they are called MCU blocks, H.261, MPEG-1 Part 2, H.262/MPEG-2 Part 2, H.263, MPEG-4 Part 2, and H.264/MPEG-4 AVC. In H.265/HEVC, the macroblock as a basic processing unit has been replaced by the coding tree unit.

Technical details

Transform blocks

A macroblock is divided into transform blocks, which serve as input to the linear block transform, e.g. the DCT. In H.261, the first video codec to use macroblocks, transform blocks have a fixed size of 8×8 samples. In the YCbCr color space with 4:2:0 chroma subsampling, a 16×16 macroblock consists of 16×16 luma (Y) samples and 8×8 chroma (Cb and Cr) samples. T ...
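
As a sketch of the layout described above (4:2:0 sampling with 8×8 transform blocks, as in H.261/MPEG-style codecs), the snippet below splits one 16×16 macroblock into its six 8×8 transform blocks: four luma blocks plus one Cb and one Cr block.

    import numpy as np

    def macroblock_to_transform_blocks(y16, cb8, cr8):
        # Split one 4:2:0 macroblock into its six 8x8 transform blocks.
        #   y16: 16x16 luma samples
        #   cb8: 8x8 Cb chroma samples (subsampled 2:1 in both directions)
        #   cr8: 8x8 Cr chroma samples
        luma_blocks = [y16[r:r + 8, c:c + 8] for r in (0, 8) for c in (0, 8)]
        return luma_blocks + [cb8, cr8]   # 4 Y + 1 Cb + 1 Cr = 6 blocks

    # Example macroblock filled with dummy samples.
    blocks = macroblock_to_transform_blocks(
        np.zeros((16, 16), dtype=np.uint8),
        np.zeros((8, 8), dtype=np.uint8),
        np.zeros((8, 8), dtype=np.uint8),
    )
    assert len(blocks) == 6 and all(b.shape == (8, 8) for b in blocks)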

Video Compression
In information theory, data compression, source coding, or bit-rate reduction is the process of encoding information using fewer bits than the original representation. Any particular compression is either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by removing unnecessary or less important information. Typically, a device that performs data compression is referred to as an encoder, and one that performs the reversal of the process (decompression) as a decoder. The process of reducing the size of a data file is often referred to as data compression. In the context of data transmission, it is called source coding: encoding done at the source of the data before it is stored or transmitted. Source coding should not be confused with channel coding, for error detection and correction, or line coding, the means for mapping data onto a signal. ...
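
As a toy illustration of lossless removal of statistical redundancy (not any standard format), the sketch below run-length encodes a byte string and decodes it back bit-exactly; a lossy scheme, by contrast, would discard information that cannot be recovered.

    def rle_encode(data: bytes) -> list[tuple[int, int]]:
        # Collapse runs of identical bytes into (value, run_length) pairs.
        runs = []
        for b in data:
            if runs and runs[-1][0] == b:
                runs[-1] = (b, runs[-1][1] + 1)
            else:
                runs.append((b, 1))
        return runs

    def rle_decode(runs: list[tuple[int, int]]) -> bytes:
        # Expand the pairs back into the original byte string.
        return b"".join(bytes([value]) * length for value, length in runs)

    sample = b"aaaaaabbbcaaaa"
    assert rle_decode(rle_encode(sample)) == sample   # lossless round trip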

Affine Transform
In Euclidean geometry, an affine transformation or affinity (from the Latin ''affinis'', "connected with") is a geometric transformation that preserves lines and parallelism, but not necessarily Euclidean distances and angles. More generally, an affine transformation is an automorphism of an affine space (Euclidean spaces are specific affine spaces), that is, a function which maps an affine space onto itself while preserving both the dimension of any affine subspaces (meaning that it sends points to points, lines to lines, planes to planes, and so on) and the ratios of the lengths of parallel line segments. Consequently, sets of parallel affine subspaces remain parallel after an affine transformation. An affine transformation does not necessarily preserve angles between lines or distances between points, though it does preserve ratios of distances between points lying on a straight line. If ''X'' is the point set of an affine space, then every affine transformation on ''X'' can be ...
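
In coordinates, the definition above can be written compactly; a sketch in standard notation, with the linear part taken invertible so that dimension and parallelism are preserved:

    % An affine transformation is a linear part followed by a translation:
    \[
        f(\mathbf{x}) = A\,\mathbf{x} + \mathbf{b}, \qquad \det A \neq 0 ,
    \]
    % so parallel lines stay parallel and ratios of lengths along a line are
    % preserved, while angles and distances generally are not. The GMC warp in
    % the first entry above is such a map of the plane, with A a 2x2 matrix
    % and b a 2-vector.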

Codec
A codec is a device or computer program that encodes or decodes a data stream or signal. ''Codec'' is a portmanteau of coder/decoder. In electronic communications, an endec is a device that acts as both an encoder and a decoder on a signal or data stream, and hence is a type of codec. ''Endec'' is a portmanteau of encoder/decoder. A coder or encoder encodes a data stream or a signal for transmission or storage, possibly in encrypted form, and the decoder function reverses the encoding for playback or editing. Codecs are used in videoconferencing, streaming media, and video editing applications.

History

In the mid-20th century, a codec was a device that coded analog signals into digital form using pulse-code modulation (PCM). Later, the name was also applied to software for converting between digital signal formats, including companding functions.

Examples

An audio codec converts analog audio signals into digital signals for transmission or encodes them for storage. A r ...