
Video Saliency Detection via Pairwise Interaction



Chinese Journal of Electronics

In this paper, we propose a novel video saliency detection method based on pairwise interaction learning. Unlike traditional video saliency detection methods, which mostly combine spatial and temporal features, we adopt the Least-Squares Conditional Random Field (LS-CRF) to capture the interaction information between regions within a frame and across video frames. Specifically, dual graph-connection models are built on the superpixel structure of each frame for the training and testing phases, respectively. To extract the essential scene structure from video sequences, the LS-CRF is introduced to learn the background texture, the object components, and the various relationships between foreground and background regions from the training set; each region is then assigned an inferred saliency value in the testing phase. Benefiting from the learned diverse relations among scene regions, the proposed approach achieves reliable results, especially on scenes containing multiple objects or highly complicated scenes. Further, we substitute weak saliency maps for pixel-wise annotations in the training phase to verify the extensibility and practicability of the proposed method. Extensive quantitative and qualitative experiments on various video sequences demonstrate that the proposed algorithm outperforms conventional saliency detection algorithms.
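The graph construction the abstract describes — regions within each frame connected spatially, and co-located regions connected temporally across consecutive frames — can be sketched as a toy example. This is only an illustration under loud assumptions: fixed grid blocks stand in for the paper's superpixels, the LS-CRF learning and inference steps are omitted entirely, and the function names (`grid_regions`, `pairwise_edges`) are ours, not the authors'.

```python
import numpy as np

def grid_regions(frame, block=8):
    """Partition a frame into block x block regions (a toy stand-in for
    superpixels) and return a mean-RGB feature per region."""
    h, w, _ = frame.shape
    feats = {}
    for i in range(h // block):
        for j in range(w // block):
            patch = frame[i * block:(i + 1) * block, j * block:(j + 1) * block]
            feats[(i, j)] = patch.reshape(-1, 3).mean(axis=0)
    return feats

def pairwise_edges(feats_t, feats_t1):
    """Build the two edge sets of a dual graph-connection model:
    spatial edges inside frame t, temporal edges to frame t+1."""
    spatial, temporal = [], []
    for (i, j) in feats_t:
        for (di, dj) in ((0, 1), (1, 0)):        # right/down neighbours
            if (i + di, j + dj) in feats_t:
                spatial.append(((i, j), (i + di, j + dj)))
        if (i, j) in feats_t1:                   # co-located region next frame
            temporal.append(((i, j), (i, j)))
    return spatial, temporal

# Two toy 16x16 RGB frames -> a 2x2 grid of regions each.
rng = np.random.default_rng(0)
f0, f1 = rng.random((16, 16, 3)), rng.random((16, 16, 3))
r0, r1 = grid_regions(f0), grid_regions(f1)
spatial, temporal = pairwise_edges(r0, r1)
print(len(r0), len(spatial), len(temporal))  # 4 regions, 4 spatial, 4 temporal edges
```

In the actual method, pairwise potentials over such spatial and temporal edges would be learned by the LS-CRF from annotated (or weakly annotated) training frames, and inference over the graph would assign each region its saliency value.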
