Spatio-Temporal Saliency Networks for Dynamic Saliency Prediction


Creative Commons License

Bak C., Koçak A., Erdem M. E., Erdem A.

IEEE TRANSACTIONS ON MULTIMEDIA, vol.20, no.7, pp.1688-1698, 2018 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 20 Issue: 7
  • Publication Date: 2018
  • DOI: 10.1109/tmm.2017.2777665
  • Journal Name: IEEE TRANSACTIONS ON MULTIMEDIA
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.1688-1698
  • Hacettepe University Affiliated: Yes

Abstract

Computational saliency models for still images have gained significant popularity in recent years. Saliency prediction from videos, on the other hand, has received relatively little interest from the community. Motivated by this, in this paper, we study the use of deep learning for dynamic saliency prediction and propose the so-called spatio-temporal saliency networks. The key to our models is the architecture of two-stream networks, in which we investigate different fusion mechanisms to integrate spatial and temporal information. We evaluate our models on the DIEM (Dynamic Images and Eye Movements) and UCF-Sports (University of Central Florida-Sports) datasets and present highly competitive results against the existing state-of-the-art models. We also carry out experiments on a number of still images from the MIT300 dataset by exploiting the optical flow maps predicted from these images. Our results show that considering inherent motion information in this way can be helpful for static saliency estimation.
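The core idea of a two-stream setup is to combine a spatial (appearance) saliency map with a temporal (motion) saliency map into a single prediction. The following is only a minimal illustrative sketch of late fusion with fixed rules (element-wise averaging or maximum); in the paper the fusion is learned inside the network architecture, and the function name and normalization step here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_saliency(spatial, temporal, method="average"):
    """Fuse per-pixel spatial and temporal saliency maps.

    Toy late-fusion rules standing in for the learned fusion
    mechanisms studied in the paper (illustrative only).
    """
    if spatial.shape != temporal.shape:
        raise ValueError("saliency maps must share the same shape")
    if method == "average":
        fused = (spatial + temporal) / 2.0
    elif method == "max":
        fused = np.maximum(spatial, temporal)
    else:
        raise ValueError(f"unknown fusion method: {method}")
    # Rescale to [0, 1] so the result reads as a valid saliency map.
    rng = fused.max() - fused.min()
    return (fused - fused.min()) / rng if rng > 0 else fused

# Example: a region salient in either stream stays salient after max fusion.
spatial = np.array([[0.1, 0.8], [0.4, 0.2]])
temporal = np.array([[0.9, 0.2], [0.4, 0.6]])
print(fuse_saliency(spatial, temporal, "max"))
```

Element-wise max fusion keeps a pixel salient if either stream marks it so, while averaging favors agreement between the streams; the paper's contribution is comparing such integration strategies when they are learned end-to-end rather than fixed.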