TY - CONF
T1 - DEMI: Deep Video Quality Estimation Model using Perceptual Video Quality Dimensions
T2 - 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)
AU - Zadtootaghaj, Saman
AU - Barman, Nabajeet
AU - Rao, Rakesh
AU - Göring, Steve
AU - Martini, Maria
AU - Raake, Alexander
AU - Möller, Sebastian
N1 - This work was supported by the European Union's Horizon 2020 research and innovation programme [grant number 871793]. Published in: 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP), ISBN 9781728193236, ISSN 2163-3517. Organising Body: Institute of Electrical and Electronics Engineers.
PY - 2020/9/22
Y1 - 2020/9/22
AB - Existing works in the field of quality assessment focus separately on gaming and non-gaming content. Alongside traditional modeling approaches, deep-learning-based approaches have been used to develop quality models due to their high prediction accuracy. In this paper, we present a deep-learning-based quality estimation model covering both gaming and non-gaming videos. The model is developed in three phases. First, a convolutional neural network (CNN) is trained on an objective metric, which allows the CNN to learn video artifacts such as blurriness and blockiness. Next, the model is fine-tuned on a small image quality dataset using blockiness and blurriness ratings. Finally, a Random Forest pools frame-level predictions and temporal information of the videos to predict the overall video quality. The lightweight, low-complexity nature of the model makes it suitable for real-time applications covering both gaming and non-gaming content, while achieving performance similar to the existing state-of-the-art model NDNetGaming. The model implementation for testing is available on GitHub.
KW - Computer science and informatics
DO - 10.1109/MMSP48831.2020.9287080
M3 - Conference contribution
BT - 2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)
PB - Institute of Electrical and Electronics Engineers
SN - 9781728193236
ER -