Deinterlace

Deinterlacing is the process of converting interlaced video into a non-interlaced or progressive form. Interlaced video signals are commonly found in analog television, digital television (HDTV) when in the 1080i format, some DVD titles, and a smaller number of Blu-ray discs. An interlaced video frame consists of two fields captured in sequence: the first containing all the odd lines of the image, and the second all the even lines. Analog television employed this technique because it allowed for less transmission bandwidth while keeping a high frame rate for smoother, more life-like motion. A non-interlaced (or progressive scan) signal that uses the same bandwidth only updates the display half as often, which was found to produce a perceived flicker or stutter.

CRT-based displays were able to display interlaced video correctly thanks to their fully analog nature, blending the alternating lines seamlessly. Since the early 2000s, however, televisions and computer monitors have become almost entirely digital, with displays composed of discrete pixels; on such displays the interlacing becomes noticeable and can appear as a distracting visual defect, which the deinterlacing process should try to minimize. Deinterlacing is therefore a necessary process and is built into most modern DVD players, Blu-ray players, LCD/LED televisions, digital projectors, TV set-top boxes, professional broadcast equipment, and computer video players and editors, though with varying levels of quality. Deinterlacing has been researched for decades and employs complex processing algorithms; even so, consistent results have been very hard to achieve.


Background

Both video and photographic film capture a series of frames (still images) in rapid succession; however, television systems read the captured image by serially scanning the image sensor by lines (rows). In analog television, each frame is divided into two consecutive fields, one containing all the even lines, the other the odd lines. The fields are captured in succession at a rate twice that of the nominal frame rate. For instance, the PAL and SECAM systems have a rate of 25 frames/s, or 50 fields/s, while the NTSC system delivers 29.97 frames/s, or 59.94 fields/s. This process of dividing frames into half-resolution fields at double the frame rate is known as ''interlacing''.

Since the interlaced signal contains the two fields of a video frame shot at two different times, it enhances motion perception for the viewer and reduces flicker by taking advantage of the persistence of vision effect. The result is an effective doubling of temporal resolution compared with non-interlaced footage (for frame rates equal to field rates). However, an interlaced signal requires a display that is natively capable of showing the individual fields in sequential order, and only traditional CRT-based TV sets can do so, owing to their electronic scanning and lack of a fixed pixel grid. Most modern displays, such as LCD, DLP and plasma displays, cannot operate in interlaced mode, because they are fixed-resolution devices that only support progressive scanning. To display an interlaced signal on such displays, the two interlaced fields must be converted to one progressive frame, a process known as ''de-interlacing''. When the two fields taken at different points in time are recombined into a full frame displayed at once, visual defects called ''interlace artifacts'' or ''combing'' appear on moving objects in the image. A good deinterlacing algorithm should avoid interlacing artifacts as much as possible without sacrificing image quality in the process, which is hard to achieve consistently. Several techniques exist that extrapolate the missing picture information, but they fall into the category of intelligent frame creation and require complex algorithms and substantial processing power.

Deinterlacing techniques require complex processing and can therefore introduce a delay into the video feed. While not generally noticeable, this can cause the display of older video games to lag behind controller input. Many TVs thus have a "game mode" in which minimal processing is done in order to maximize speed at the expense of image quality. Deinterlacing is only partly responsible for such lag; scaling also involves complex algorithms that take milliseconds to run.
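To make the field structure described above concrete, the following is a minimal illustrative sketch (an assumption for this article, not taken from any particular implementation) of how a progressive frame relates to its two fields. NumPy and the function name split_into_fields are choices made only for this example.

```python
import numpy as np

def split_into_fields(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Return (top_field, bottom_field): the even- and odd-numbered lines of a frame."""
    top_field = frame[0::2]      # lines 0, 2, 4, ... (half the vertical resolution)
    bottom_field = frame[1::2]   # lines 1, 3, 5, ...
    return top_field, bottom_field

# In real interlaced footage the two fields are captured ~1/50 s or ~1/60 s apart,
# so they come from different instants rather than from one progressive frame.
frame = np.random.randint(0, 256, size=(480, 720), dtype=np.uint8)
top, bottom = split_into_fields(frame)
assert top.shape == (240, 720) and bottom.shape == (240, 720)
```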


Progressive source material

Some interlaced video may have been originally created from progressive footage, and the deinterlacing process should take this into account. Typical movie material is shot on 24 frames/s film. Converting film to interlaced video typically uses a process called telecine, whereby each frame is converted to multiple fields. In some cases, each film frame can be represented by exactly two progressive segmented frames (PsF), and in this format no complex deinterlacing algorithm is required because each field contains a part of the very same progressive frame. However, to match a 50-field interlaced PAL/SECAM or 59.94/60-field interlaced NTSC signal, frame-rate conversion is necessary using various "pulldown" techniques. Most advanced TV sets can restore the original 24 frames/s signal using an inverse telecine process. Another option is to speed up 24-frame film by 4% (to 25 frames/s) for PAL/SECAM conversion; this method is still widely used for DVDs, as well as television broadcasts (SD and HD) in PAL markets. DVDs can either encode movies using one of these methods or store the original 24 frames/s progressive video and use MPEG-2 decoder tags to instruct the video player on how to convert it to the interlaced format. Most movies on Blu-ray discs preserve the original non-interlaced 24 frames/s film rate and allow output in the progressive 1080p24 format directly to display devices, with no conversion necessary. Some 1080i HDV camcorders also offer a PsF mode with cinema-like frame rates of 24 or 25 frames/s. TV production crews can also use special film cameras which operate at 25 or 30 frames/s; such material does not need frame-rate conversion for broadcasting in the intended video system format.
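The classic 2:3 ("3:2") pulldown cadence mentioned above can be illustrated with a short sketch. This is a hedged, simplified example: the field labels and the function name pulldown_2_3 are hypothetical, and real telecine hardware also handles color, timecode and cadence breaks.

```python
def pulldown_2_3(film_frames):
    """Yield (frame, field_parity) pairs in the classic 2:3 cadence."""
    cadence = [2, 3, 2, 3]  # how many fields each successive film frame occupies
    parity = ("top", "bottom")
    i = 0
    for frame, count in zip(film_frames, cadence * (len(film_frames) // 4 + 1)):
        for _ in range(count):
            yield frame, parity[i % 2]
            i += 1

# Four film frames A, B, C, D become ten fields:
# (A,top) (A,bottom) (B,top) (B,bottom) (B,top) (C,bottom) (C,top) (D,bottom) (D,top) (D,bottom)
fields = list(pulldown_2_3(["A", "B", "C", "D"]))
print(fields)
```

An inverse telecine step would detect this cadence and drop the repeated fields to recover the original four progressive frames.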


Deinterlacing methods

Deinterlacing requires the display to buffer one or more fields and recombine them into full frames. In theory this would be as simple as capturing one field and combining it with the next field to be received, producing a single frame. However, the originally recorded signal was produced from two fields captured at different points in time, so without special processing any motion across the fields usually results in a "combing" effect where alternate lines are slightly displaced from each other. There are various methods to deinterlace video, each producing different problems or artifacts of its own; some methods produce much cleaner results than others. Most deinterlacing techniques fall under three broad groups:
# Field combination deinterlacing, which takes the even and odd fields and combines them into one frame. This halves the perceived frame rate (the temporal resolution), so 50i or 60i is converted to 25p or 30p.
# Field extension deinterlacing, which takes each field (with only half the lines) and extends it to fill the entire frame. This halves the vertical resolution of the image but maintains the original field rate (50i or 60i is converted to 50p or 60p).
# Motion compensation deinterlacing, which uses more advanced algorithms to detect motion across fields, switching techniques when necessary. This produces the best quality result but requires the most processing power.
Modern deinterlacing systems therefore buffer several fields and use techniques like edge detection in an attempt to find the motion between the fields. This is then used to interpolate the missing lines from the original field, reducing the combing effect; a simple edge-directed interpolation is sketched after this list.
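As a hedged illustration of edge-directed interpolation of a missing line, the sketch below implements a basic edge-based line averaging (ELA) rule: each missing pixel is taken from whichever of three directions (two diagonals and vertical) shows the most similar neighbours, so edges stay smooth instead of jagged. The function name and the assumption of 1-D grayscale line arrays are choices made for this example, not a reference implementation.

```python
import numpy as np

def ela_interpolate_line(above: np.ndarray, below: np.ndarray) -> np.ndarray:
    """Interpolate one missing line from the known lines above and below it."""
    out = np.empty_like(above)
    w = len(above)
    for x in range(w):
        xl, xr = max(x - 1, 0), min(x + 1, w - 1)
        # Candidate directions: diagonal "\", vertical "|", diagonal "/"
        candidates = [
            (abs(int(above[xl]) - int(below[xr])), (above[xl], below[xr])),
            (abs(int(above[x]) - int(below[x])), (above[x], below[x])),
            (abs(int(above[xr]) - int(below[xl])), (above[xr], below[xl])),
        ]
        _, (p, q) = min(candidates, key=lambda c: c[0])   # most similar pair wins
        out[x] = (int(p) + int(q)) // 2
    return out
```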


Field combination deinterlacing

These methods take the even and odd fields and combine them into one frame. They retain the full vertical resolution at the expense of temporal resolution (perceived frame rate): 50i/60i is converted to 24p/25p/30p, which may lose the smooth, fluid feel of the original. However, if the interlaced signal was originally produced from a lower frame-rate source such as film, then no information is lost and these methods may suffice. The two simplest methods are sketched in code after this list.
* Weaving is the simplest and most rudimentary method, performed by interleaving ("weaving") the consecutive fields together into a single frame. This method causes no problems when the image has not changed between fields, but any motion results in artifacts known as "combing", where the pixels in one field do not line up with the pixels in the other, forming a jagged edge.
* Blending is done by ''blending'', or ''averaging'', consecutive fields to be displayed as one frame. Combing is avoided because the images lie on top of each other, but this instead leaves an artifact known as ghosting. The image loses both vertical and temporal resolution. Although video produced with this technique only requires half the number of pixels vertically, it is often combined with a vertical resize so that the output has no numerical loss in vertical pixels; when interpolation is used, the result can be an even softer image. Blending also loses half the temporal resolution, since two motion fields are combined into one frame.
* Selective blending, also called ''smart blending'' or ''motion adaptive blending'', is a combination of weaving and blending. Since areas that have not changed from frame to frame need no processing, the frames are woven and only the areas that need it are blended. This retains the full vertical resolution and half the temporal resolution, and it has fewer artifacts than weaving or blending alone because of the selective combination of both techniques.
* Inverse telecine: telecine is used to convert a motion picture source at 24 frames per second to interlaced TV video in countries that use the NTSC video system at 30 frames per second. Countries that use PAL at 25 frames per second do not require telecine; motion picture sources are merely sped up by 4% to achieve the needed 25 frames per second. If telecine was used, it is possible to reverse the algorithm to obtain the original non-interlaced footage at its slower frame rate, provided the exact telecine pattern is known or can be guessed. Unlike most other deinterlacing methods, when it works, inverse telecine perfectly recovers the original progressive video stream.
* Telecine-style algorithms: if the interlaced footage was generated from progressive frames at a slower frame rate (e.g. "cartoon pulldown"), then the exact original frames can be recovered by copying the missing field from a matching previous/next frame. In cases where there is no match (e.g. brief cartoon sequences with an elevated frame rate), the filter falls back on another deinterlacing method such as blending or line doubling, so the worst case is occasional frames with ghosting or reduced resolution. By contrast, when more sophisticated motion-detection algorithms fail, they can introduce pixel artifacts that are unfaithful to the original material. For telecined video, decimation can be applied as a post-process to reduce the frame rate, and this combination is generally more robust than a simple inverse telecine, which fails when differently interlaced footage is spliced together.
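The following is a minimal sketch of weaving and blending under the assumption that fields are NumPy arrays of shape (H/2, W); the function names are chosen for this example and are not from any particular player or library.

```python
import numpy as np

def weave(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Interleave two fields into one full-height frame (combing appears on motion)."""
    h, w = top_field.shape
    frame = np.empty((h * 2, w), dtype=top_field.dtype)
    frame[0::2] = top_field
    frame[1::2] = bottom_field
    return frame

def blend(top_field: np.ndarray, bottom_field: np.ndarray) -> np.ndarray:
    """Average the two fields (no combing, but moving objects ghost), then resize back up."""
    avg = (top_field.astype(np.float32) + bottom_field.astype(np.float32)) / 2.0
    return np.repeat(avg, 2, axis=0).astype(top_field.dtype)
```

A selective-blending deinterlacer would compute a per-pixel difference between the fields and choose the weave result where the difference is small and the blend result where it is large.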


Field extension deinterlacing

These methods take each field (with only half the lines) and extend it to the entire screen to make a frame. This may halve the vertical resolution of the image but aims to maintain the original field rate (50i or 60i is converted to 50p or 60p); a minimal line-doubling sketch follows this list.
* Half-sizing displays each interlaced field on its own, resulting in a video with half the vertical resolution of the original, unscaled. While this method retains all original pixels and all temporal resolution, it is understandably not used for regular viewing because of its false aspect ratio. However, it can be useful for applying video filters that expect a non-interlaced frame, such as those exploiting information from neighbouring pixels (e.g. sharpening).
* Line doubling or "bobbing" takes the lines of each interlaced field (consisting of only even or odd lines) and doubles them, filling the entire frame. The result is video with a frame rate identical to the original field rate, but with each frame having half the vertical resolution, i.e. the resolution of the field it was made from. Line doubling prevents combing artifacts and maintains smooth motion, but it can cause a noticeable reduction in picture quality from the loss of vertical resolution, as well as visual anomalies whereby stationary objects appear to bob up and down as the odd and even lines alternate; these techniques are also called ''bob deinterlacing'' and ''linear deinterlacing'' for this reason. A variant of this method discards one field out of each frame, halving the temporal resolution. Line doubling is sometimes confused with deinterlacing in general, or with interpolation (image scaling), which uses spatial filtering to generate extra lines and hence reduce the visibility of pixelation on any type of display. The term 'line doubler' is used more frequently in high-end consumer electronics, while 'deinterlacing' is used more frequently in the computer and digital video arena.
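A rough sketch of "bob" line doubling is shown below, assuming 8-bit fields stored as NumPy arrays of shape (H/2, W); missing lines are linearly interpolated from the neighbouring known lines, and edge lines simply reuse the nearest known line. This is an assumption-laden illustration, not the behaviour of any specific device.

```python
import numpy as np

def bob(field: np.ndarray, parity_top: bool) -> np.ndarray:
    """Expand one uint8 field of shape (H/2, W) into a full (H, W) frame."""
    h, w = field.shape
    frame = np.empty((h * 2, w), dtype=field.dtype)
    if parity_top:
        frame[0::2] = field                                  # lines the field carries
        below = np.vstack([field[1:], field[-1:]])           # next known line (edge repeats)
        frame[1::2] = ((field.astype(np.uint16) + below.astype(np.uint16)) // 2).astype(field.dtype)
    else:
        frame[1::2] = field
        above = np.vstack([field[:1], field[:-1]])           # previous known line (edge repeats)
        frame[0::2] = ((field.astype(np.uint16) + above.astype(np.uint16)) // 2).astype(field.dtype)
    return frame
```

Running this on every field of a 50i clip yields a 50p clip in which stationary detail appears to bob slightly as the interpolated parity alternates, which is exactly the artifact described above.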


Motion compensation deinterlacing

More advanced deinterlacing algorithms combine the traditional field-combination methods (weaving and blending) and field-extension methods (bob or line doubling) to create a high-quality progressive video sequence. One of the basic hints to the direction and amount of motion is the direction and length of combing artifacts in the interlaced signal. The best algorithms also try to predict the direction and the amount of image motion between subsequent fields in order to blend the two fields together better, and may employ algorithms similar to the block motion compensation used in video compression. For example, if two fields showed a person's face moving to the left, weaving would create combing and blending would create ghosting. Advanced motion compensation would (ideally) recognize that the face in several fields is the same image merely moved to a different position, detect the direction and amount of that motion, and then reconstruct the full detail of the face in both output frames by combining the fields, shifting parts of each field along the detected direction by the detected amount. Deinterlacers that use this technique are often superior because they can use information from many fields rather than just one or two; however, they require powerful hardware to achieve this in real time. Motion compensation needs to be combined with scene-change detection (which has its own challenges), otherwise it will attempt to find motion between two completely different scenes. A poorly implemented motion compensation algorithm can interfere with natural motion and lead to visual artifacts that manifest as "jumping" parts in what should be a stationary or smoothly moving image. A heavily simplified motion-adaptive variant is sketched below.
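The sketch below is a heavily simplified motion-adaptive (not fully motion-compensated) deinterlacer, given as an assumption-laden illustration only: it estimates per-pixel motion by comparing two same-parity fields around the current moment, weaves where the scene appears static, and falls back to spatial interpolation where it appears to move. Real motion-compensated deinterlacers additionally estimate motion vectors and shift image content along them.

```python
import numpy as np

def motion_adaptive(prev_top: np.ndarray, cur_bottom: np.ndarray,
                    next_top: np.ndarray, threshold: int = 10) -> np.ndarray:
    """Build one progressive frame from the current bottom field plus the surrounding top fields."""
    h, w = cur_bottom.shape
    frame = np.empty((h * 2, w), dtype=cur_bottom.dtype)
    frame[1::2] = cur_bottom                                          # lines we actually have
    # Per-pixel motion estimate: compare the two same-parity (top) fields around this moment.
    moving = np.abs(prev_top.astype(np.int16) - next_top.astype(np.int16)) >= threshold
    # Static pixels: weave in the temporally adjacent top field unchanged.
    woven = prev_top
    # Moving pixels: purely spatial estimate from the current bottom field (vertical average).
    above = np.vstack([cur_bottom[:1], cur_bottom[:-1]])
    spatial = ((cur_bottom.astype(np.uint16) + above.astype(np.uint16)) // 2).astype(cur_bottom.dtype)
    frame[0::2] = np.where(moving, spatial, woven)
    return frame
```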


Quality measurement

Different deinterlacing methods have different quality and speed characteristics. The quality of a deinterlacing method is usually measured with the following approach:
# A set of progressive videos is composed.
# Each of these videos is interlaced.
# Each interlaced video is deinterlaced with the deinterlacing method under test.
# Each deinterlaced video is compared with the corresponding source video using an objective video quality metric, such as PSNR, SSIM or VMAF.
The main speed metric is frames per second (FPS): how many frames the deinterlacer can process per second. When reporting FPS, the frame resolution and the hardware used must be specified, because the speed of a deinterlacing method depends significantly on both. A minimal version of this evaluation loop is sketched below.
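The following is a compact sketch of the evaluation procedure above, using PSNR as the metric. The names interlace, deinterlace and progressive_frames are placeholders for whatever method and dataset are being tested; this is an illustration of the procedure, not a specific benchmark's code.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames of the same shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def evaluate(progressive_frames, interlace, deinterlace) -> float:
    """Average PSNR of a deinterlacer over a progressive source clip."""
    scores = []
    for frame in progressive_frames:
        fields = interlace(frame)              # step 2: interlace the progressive video
        restored = deinterlace(fields)         # step 3: deinterlace it back
        scores.append(psnr(frame, restored))   # step 4: compare with the original
    return sum(scores) / len(scores)
```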


Benchmarks


Deinterlacing Challenge 2019

This benchmark compared 8 different deinterlacing methods on a synthetic video. The video contains a moving three-dimensional Lissajous curve in order to make it challenging for modern deinterlacing methods. The authors used MSE and PSNR as objective metrics and also measured processing speed in FPS. For some methods only a visual comparison is given; for others, only an objective one.


MSU Deinterlacer Benchmark

This benchmark compared more than 20 methods on 40 video sequences with a total length of 834 frames. Its authors state that the main feature of the benchmark is a comprehensive comparison of methods with visual comparison tools, performance plots and parameter tuning, using PSNR and SSIM as objective metrics. The author of VapourSynth TDeintMod describes it as a bi-directional motion-adaptive deinterlacer. The NNEDI method uses a neural network to deinterlace video sequences. The FFmpeg Bob Weaver Deinterlacing Filter is part of the well-known framework for video and audio processing. VapourSynth EEDI3 stands for "enhanced edge directed interpolation 3"; its authors state that it works by finding the best non-decreasing warping between two lines according to a cost functional. The authors of the Real-Time Deep Video Deinterlacer use a deep CNN to obtain the best quality of output video.


Where deinterlacing is performed

Deinterlacing of an interlaced video signal can be done at various points in the TV production chain.


Progressive media

Deinterlacing is required for interlaced archive programs when the broadcast format or media format is progressive, as in EDTV 576p or HDTV 720p50 broadcasting, or mobile DVB-H broadcasting. There are two ways to achieve this:
* ''Production'' – The interlaced video material is converted to progressive scan during program production. This should typically yield the best possible quality, since videographers have access to expensive and powerful deinterlacing equipment and software, can deinterlace at the best possible quality, and can probably choose the optimal deinterlacing method for each frame manually.
* ''Broadcasting'' – Real-time deinterlacing hardware converts interlaced programs to progressive scan immediately prior to broadcasting. Since the processing time is constrained by the frame rate and no human input is available, the quality of conversion is most likely inferior to the production method; however, expensive, high-performance deinterlacing equipment may still yield good results when properly tuned.


Interlaced media

When the broadcast format or media format is interlaced, real-time deinterlacing should be performed by embedded circuitry in a set-top box, television, external video processor, DVD or DVR player, or TV tuner card. Since consumer electronics equipment is typically far cheaper, has considerably less processing power, and uses simpler algorithms than professional deinterlacing equipment, the quality of deinterlacing may vary broadly and typical results are often poor, even on high-end equipment. Using a computer for playback and/or processing potentially allows a broader choice of video players and editing software not limited to the quality of the embedded consumer electronics device, so at least theoretically higher deinterlacing quality is possible – especially if the user can pre-convert the interlaced video to progressive scan before playback using advanced and time-consuming deinterlacing algorithms (i.e. employing the "production" method). However, the quality of both free and commercial consumer-grade software may not be up to the level of professional software and equipment. Moreover, most users are not trained in video production; this often causes poor results, as many people do not know much about deinterlacing and are unaware that the frame rate is half the field rate. Many codecs and players do not even deinterlace by themselves and rely on the graphics card and video acceleration API to do proper deinterlacing.


Concerns over effectiveness

The European Broadcasting Union has argued against the use of interlaced video in production and broadcasting, recommending 720p at 50 frames per second as the current production format and working with the industry to introduce 1080p50 as a future-proof production standard that offers higher vertical resolution, better quality at lower bitrates, and easier conversion to other formats such as 720p50 and 1080i50. The main argument is that no matter how complex the deinterlacing algorithm may be, the artifacts in the interlaced signal cannot be completely eliminated, because some information is lost between frames. Yves Faroudja, the founder of Faroudja Labs and an Emmy Award winner for his achievements in deinterlacing technology, has stated that "interlace to progressive does not work" and advised against using interlaced signals.


See also

* Interlaced video
* Progressive segmented frame: a scheme designed to acquire, store, modify, and distribute progressive-scan video using interlaced equipment and media
* DCDi by Faroudja
* Display motion blur
* Refresh rate
* HDTV


References


External links


* 3:2 Pulldown and Deinterlacing (theprojectorpros.com)
* (planetmath.org)
* EBU document (with animation demonstrating interlace)
* Deinterlacing and film-to-video conversion with respect to DVDs
* Frequently asked questions about deinterlacing (100fps) – facts, solutions and examples of deinterlacing
* http://wiki.videolan.org/Deinterlace#Appendix:_Technical_summary