History
The metric is based on initial work by the group of Professor C.-C. Jay Kuo at the University of Southern California, which investigated the fusion of different video quality metrics using support vector machines (SVM). This led to the "FVQA (Fusion-based Video Quality Assessment) Index", which was shown to outperform existing image quality metrics on a subjective video quality database. The method was further developed in cooperation with Netflix, using different subjective video datasets, including a Netflix-owned dataset ("NFLX"). Subsequently renamed "Video Multimethod Assessment Fusion", it was announced on the ''Netflix TechBlog'' in June 2016, and version 0.3.1 of the reference implementation was made available under a permissive open-source license. In 2017, the metric was updated to support a custom model adapted for viewing on cellular phone screens, which generates higher quality scores for the same input material. In 2018, a model that predicts the quality of content up to 4K resolution was released. The datasets on which these models were trained have not been made available to the public. In 2021, a Technology and Engineering Emmy Award was awarded to Beamr, Netflix, the University of Southern California and other contributors for the development of open perceptual metrics for video encoding optimization.
Components
VMAF uses existing image quality metrics and other features to predict video quality:
* Visual Information Fidelity (VIF): considers information fidelity loss at four different spatial scales
* Detail Loss Metric (DLM): measures loss of details and impairments that distract viewer attention
* Mean Co-Located Pixel Difference (MCPD): measures the temporal difference between frames on the luminance component
These features are fused using an SVM-based regression to provide a single output score in the range of 0–100 per video frame, with 100 being quality identical to the reference video. These scores are then temporally pooled over the entire video sequence using the arithmetic mean to yield an overall quality rating; a minimal sketch of this fusion-and-pooling pipeline is shown below.
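The following Python code is an illustration of the fusion-and-pooling idea only: the feature values, training data and regressor here are synthetic stand-ins, whereas the actual VMAF model is a support vector regressor trained by Netflix on subjective rating datasets and shipped with the reference implementation.
<syntaxhighlight lang="python">
# Minimal sketch of VMAF-style score fusion, assuming synthetic data.
# Not the trained VMAF model; normalization and clipping details differ.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical per-frame elementary features for a 120-frame clip:
# four VIF scales, one DLM value and one motion (MCPD) value per frame.
frame_features = rng.random((120, 6))

# Stand-in training data; in VMAF proper, the regressor is trained on
# features paired with subjective scores from video quality databases.
train_x = rng.random((500, 6))
train_y = rng.uniform(0.0, 100.0, 500)

# Fuse the elementary features with support-vector regression.
model = SVR(kernel="rbf")
model.fit(train_x, train_y)

# One quality score per frame, kept in the 0-100 range.
per_frame_scores = np.clip(model.predict(frame_features), 0.0, 100.0)

# Temporal pooling over the sequence with the arithmetic mean.
print(f"Pooled score: {per_frame_scores.mean():.2f}")
</syntaxhighlight>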
Performance
An early version of VMAF was shown to outperform other image and video quality metrics, such as SSIM, PSNR-HVS and VQM-VFD, on three of four datasets in terms of prediction accuracy when compared to subjective ratings. Its performance was also analyzed in another paper, which found that VMAF did not perform better than SSIM and MS-SSIM on a video dataset. In 2017, engineers from RealNetworks reported good reproducibility of Netflix's performance findings. It has also been shown that VMAF scores can be artificially inflated by preprocessing the video, for example with color and contrast distortion, without an actual improvement in visual quality.
Software
A reference implementation ("libvmaf"), written in C and Python, is published as free software; its source code is available on GitHub. The metric can also be computed with FFmpeg builds that include libvmaf support, as sketched below.
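As an illustration, the following Python code invokes FFmpeg's libvmaf filter; it assumes an FFmpeg build configured with libvmaf, the file names are hypothetical, and the JSON keys reflect recent libvmaf versions and may differ in older releases.
<syntaxhighlight lang="python">
import json
import subprocess

# Compare a distorted clip against its reference; the first input to the
# libvmaf filter is the distorted video, the second the reference.
subprocess.run(
    [
        "ffmpeg", "-i", "distorted.mp4", "-i", "reference.mp4",
        "-lavfi", "libvmaf=log_fmt=json:log_path=vmaf.json",
        "-f", "null", "-",
    ],
    check=True,
)

with open("vmaf.json") as f:
    log = json.load(f)

# The log holds per-frame scores plus pooled summaries over the sequence.
print(log["pooled_metrics"]["vmaf"]["mean"])
</syntaxhighlight>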
See also
* Perceptual Evaluation of Video Quality (PEVQ)
* VQuad-HD
References
{{reflist|refs=
{{Cite journal |last1=Liu |first1=Tsung-Jung |last2=Lin |first2=Joe Yuchieh |last3=Lin |first3=Weisi |last4=Kuo |first4=C.-C. Jay |date=2013 |title=Visual quality assessment: recent developments, coding applications and future trends |journal=APSIPA Transactions on Signal and Information Processing |volume=2 |doi=10.1017/atsip.2013.5 |issn=2048-7703 |doi-access=free}}
{{Cite journal |last1=Lin |first1=Joe Yuchieh |last2=Liu |first2=T. J. |last3=Wu |first3=E. C. H. |last4=Kuo |first4=C. C. J. |date=December 2014 |title=A fusion-based video quality assessment (FVQA) index |journal=Signal and Information Processing Association Annual Summit and Conference (APSIPA), 2014 Asia-Pacific |pages=1–5 |doi=10.1109/apsipa.2014.7041705 |isbn=978-6-1636-1823-8 |s2cid=7742774}}
{{Cite journal |last1=Lin |first1=Joe Yuchieh |last2=Wu |first2=Chi-Hao |last3=Katsavounidis |first3=Ioannis |last4=Li |first4=Zhi |last5=Aaron |first5=Anne |last6=Kuo |first6=C.-C. Jay |date=June 2015 |title=EVQA: An ensemble-learning-based video quality assessment index |journal=Multimedia & Expo Workshops (ICMEW), 2015 IEEE International Conference on |pages=1–5 |doi=10.1109/ICMEW.2015.7169760 |isbn=978-1-4799-7079-7 |s2cid=6996075}}
{{Cite web |url=https://medium.com/netflix-techblog/toward-a-practical-perceptual-video-quality-metric-653f208b9652 |title=Toward A Practical Perceptual Video Quality Metric |author=Netflix Technology Blog |date=2016-06-06 |website=Netflix TechBlog |access-date=2017-07-15}}
{{Citation |title=vmaf: Perceptual video quality assessment based on multi-method fusion |date=2017-07-14 |url=https://github.com/Netflix/vmaf |publisher=Netflix, Inc. |access-date=2017-07-15}}
{{Cite journal |last1=Li |first1=S. |last2=Zhang |first2=F. |last3=Ma |first3=L. |last4=Ngan |first4=K. N. |date=October 2011 |title=Image Quality Assessment by Separately Evaluating Detail Losses and Additive Impairments |journal=IEEE Transactions on Multimedia |volume=13 |issue=5 |pages=935–949 |doi=10.1109/tmm.2011.2152382 |s2cid=8618041 |issn=1520-9210}}
{{cite arXiv |last1=Bampis |first1=Christos G. |last2=Bovik |first2=Alan C. |date=2017-03-02 |title=Learning to Predict Streaming Video QoE: Distortions, Rebuffering and Memory |eprint=1703.00633 |class=cs.MM}}
{{cite journal |last1=Rassool |first1=Reza |date=2017 |title=VMAF reproducibility: Validating a perceptual practical video quality metric |journal=2017 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB) |pages=1–2 |url=https://www.realnetworks.com/sites/default/files/vmaf_reproducibility_ieee.pdf |access-date=2017-11-30 |doi=10.1109/BMSB.2017.7986143 |isbn=978-1-5090-4937-0 |s2cid=5449498}}
{{cite arXiv |last1=Zvezdakova |first1=Anastasia |last2=Zvezdakov |first2=Sergey |last3=Kulikov |first3=Dmitriy |last4=Vatolin |first4=Dmitriy |date=2019-04-29 |title=Hacking VMAF with Video Color and Contrast Distortion |eprint=2107.04510}}
{{cite arXiv |last1=Siniukov |first1=Maksim |last2=Antsiferova |first2=Anastasia |last3=Kulikov |first3=Dmitriy |last4=Vatolin |first4=Dmitriy |title=Hacking VMAF and VMAF NEG: vulnerability to different preprocessing methods |eprint=2107.04510}}
}}
External links