An HVS-inspired video deinterlacer based on visual saliency
Abstract
Video deinterlacing is a technique wherein the interlaced video format is converted into the progressive-scan format required by modern display devices. In this paper, a spatial saliency-guided motion-compensated deinterlacing method is proposed that accounts for the properties of the Human Visual System (HVS): our algorithm classifies each field according to its texture and the viewer's region of interest, and adapts both the motion estimation and compensation and the saliency-guided interpolation to ensure high-quality frame reconstruction. Two different saliency models, namely the graph-based visual saliency (GBVS) model and the spectral residual visual saliency (SRVS) model, have been studied and compared in terms of visual quality and computational complexity. Experimental results on a wide variety of video test sequences show a significant improvement in reconstructed video quality with the GBVS-based method compared to classical motion-compensated and adaptive deinterlacing techniques, with gains of up to 4.5 dB in PSNR. Simulations also show that the SRVS-based deinterlacing process can yield significant reductions in complexity (up to a 25-fold decrease in computation time compared with the GBVS-based method) at the expense of a lower PSNR.
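To make the idea concrete, the sketch below illustrates saliency-guided deinterlacing in a minimal form; it is not the authors' algorithm. The saliency map follows the spectral residual formulation of Hou and Zhang (the SRVS variant), while the motion-compensated step is replaced here by a plain temporal average for brevity; the threshold, resize size, and filter sizes are illustrative assumptions.

```python
# Minimal sketch of saliency-guided deinterlacing (illustrative, not the paper's method).
# Assumptions: greyscale frames as float NumPy arrays; motion compensation replaced
# by a simple temporal average; spectral residual saliency per Hou & Zhang (2007).
import numpy as np
from scipy import ndimage


def spectral_residual_saliency(field: np.ndarray, size: int = 64) -> np.ndarray:
    """Spectral residual saliency map, normalised to [0, 1], at the field's resolution."""
    # Work on a small version of the field, as in the original formulation.
    small = ndimage.zoom(field, (size / field.shape[0], size / field.shape[1]), order=1)
    spectrum = np.fft.fft2(small)
    log_amp = np.log1p(np.abs(spectrum))                      # log amplitude (log1p avoids log(0))
    residual = log_amp - ndimage.uniform_filter(log_amp, 3)   # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = ndimage.gaussian_filter(sal, sigma=2.5)             # smooth the raw map
    sal = ndimage.zoom(sal, (field.shape[0] / size, field.shape[1] / size), order=1)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)


def deinterlace_field(curr_field, prev_frame, next_frame, top_field=True, thr=0.5):
    """Rebuild a full frame from one field: temporal interpolation in salient
    regions, vertical line averaging elsewhere (thr is an illustrative threshold)."""
    h, w = prev_frame.shape
    frame = np.zeros((h, w), dtype=np.float64)
    known = list(range(0, h, 2)) if top_field else list(range(1, h, 2))
    missing = list(range(1, h, 2)) if top_field else list(range(0, h, 2))
    frame[known] = curr_field                                  # copy the lines the field carries

    # Upscale the field-resolution saliency map to full frame height.
    sal = spectral_residual_saliency(curr_field)
    sal_full = ndimage.zoom(sal, (h / sal.shape[0], 1.0), order=1)

    for r in missing:
        above = frame[r - 1] if r - 1 >= 0 else frame[r + 1]
        below = frame[r + 1] if r + 1 < h else frame[r - 1]
        spatial = 0.5 * (above + below)                        # intra-field line average
        temporal = 0.5 * (prev_frame[r] + next_frame[r])       # stand-in for MC interpolation
        frame[r] = np.where(sal_full[r] > thr, temporal, spatial)
    return frame
```

In the full method described by the abstract, the temporal average above would be replaced by block-based motion estimation and compensation, with the field classification and saliency map steering how aggressively each region is interpolated.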