2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)

Abstract

In this paper, we propose a computational model of visual attention for stereoscopic video. Low-level visual features, including color, luminance, texture, and depth, are used to calculate feature contrast for the spatial saliency of stereoscopic video frames. In addition, the proposed model adopts motion features to compute temporal saliency: we extract the relative planar and depth motion for the temporal saliency calculation. The final saliency map is obtained by fusing the spatial and temporal saliency maps. Experimental results demonstrate the promising performance of the proposed method in saliency prediction for stereoscopic video.
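The pipeline described in the abstract — per-feature contrast for spatial saliency, motion differences for temporal saliency, then fusion — can be sketched as follows. This is a minimal illustration, not the paper's method: the actual contrast measure, motion estimation, and fusion rule are not specified in the abstract, so block-wise contrast, frame differencing as a motion proxy, and a hypothetical linear fusion weight `alpha` are all assumptions.

```python
import numpy as np

def feature_contrast(feature_map, patch=8):
    """Block-wise center-surround contrast: each patch's mean vs. the
    global mean. (Simplified stand-in for the paper's feature contrast.)"""
    h, w = feature_map.shape
    sal = np.zeros_like(feature_map, dtype=float)
    global_mean = feature_map.mean()
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = feature_map[y:y + patch, x:x + patch]
            sal[y:y + patch, x:x + patch] = abs(block.mean() - global_mean)
    return sal

def spatial_saliency(color, luminance, texture, depth):
    # Average the contrast maps of the four low-level features,
    # then normalize to [0, 1].
    maps = [feature_contrast(f) for f in (color, luminance, texture, depth)]
    s = sum(maps) / len(maps)
    return s / (s.max() + 1e-12)

def temporal_saliency(prev_luma, cur_luma, prev_depth, cur_depth):
    # Frame differencing as a crude proxy for relative planar motion
    # (luminance change) and depth motion (depth change).
    planar = np.abs(cur_luma - prev_luma)
    depth_motion = np.abs(cur_depth - prev_depth)
    t = planar + depth_motion
    return t / (t.max() + 1e-12)

def fuse(spatial, temporal, alpha=0.5):
    # Hypothetical linear fusion; the abstract does not state the rule.
    return alpha * spatial + (1.0 - alpha) * temporal
```

In practice the planar/depth motion terms would come from optical flow and depth-map flow rather than raw frame differences, but the overall structure — two saliency maps fused into one — is the same.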
