Learning multi-scale features for foreground segmentation



Lim L. A., Keles H.

PATTERN ANALYSIS AND APPLICATIONS, vol. 23, no. 3, pp. 1369-1380, 2020 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 23 Issue: 3
  • Publication Date: 2020
  • DOI: 10.1007/s10044-019-00845-9
  • Journal Name: PATTERN ANALYSIS AND APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Computer & Applied Sciences, Index Islamicus, zbMATH
  • Page Numbers: pp. 1369-1380
  • Keywords: Foreground segmentation, Convolutional neural networks, Feature pooling module, Background subtraction, Video surveillance, BACKGROUND SUBTRACTION, NEURAL-NETWORKS
  • Hacettepe University Affiliated: No

Abstract

Foreground segmentation algorithms aim to segment moving objects from the background in a robust way under various challenging scenarios. Encoder-decoder-type deep neural networks that have recently been used in this domain achieve impressive segmentation results. In this work, we propose a variation of our formerly proposed method, FgSegNet (Lim and Keles 2018), that can be trained end-to-end using only a few training examples. The proposed method extends the feature pooling module of FgSegNet by introducing feature fusion inside this module; the resulting module extracts multi-scale features within images, yielding a feature pooling that is robust against camera motion and alleviating the need for multi-scale inputs to the network. Sample visualizations highlight the regions in the images on which the model focuses; it can be seen that these regions are also the most semantically relevant. Our method outperforms all existing state-of-the-art methods on the CDnet2014 dataset, achieving an average overall F-measure of 0.9847. We also evaluate the effectiveness of our method on the SBI2015 and UCSD Background Subtraction datasets. The source code of the proposed method is made publicly available.
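As a rough illustration of the idea described in the abstract, the sketch below shows a feature pooling module in PyTorch in which parallel dilated convolutions extract features at multiple scales and each branch is fused with the output of the previous branch before all branches are pooled. This is a minimal, assumption-laden reading of the abstract, not the authors' published implementation: the class name, channel sizes, dilation rates, and fusion scheme are all illustrative.

```python
import torch
import torch.nn as nn

class MultiScaleFeaturePooling(nn.Module):
    """Illustrative feature pooling module with intra-module feature fusion.

    Parallel 3x3 convolutions with increasing dilation rates capture
    multi-scale context. After the first branch, each branch consumes the
    input features concatenated with the previous branch's output (the
    "fusion" step), and all branch outputs are concatenated and projected
    back to `channels`. Dilation rates and channel sizes are assumptions,
    not the paper's exact configuration.
    """

    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            self.branches.append(
                nn.Sequential(
                    nn.Conv2d(in_ch, channels, kernel_size=3,
                              padding=d, dilation=d),
                    nn.ReLU(inplace=True),
                )
            )
            # Subsequent branches see the input plus the previous output.
            in_ch = channels * 2
        self.project = nn.Sequential(
            nn.Conv2d(channels * len(dilations), channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = []
        prev = None
        for branch in self.branches:
            inp = x if prev is None else torch.cat([x, prev], dim=1)
            prev = branch(inp)
            outputs.append(prev)
        return self.project(torch.cat(outputs, dim=1))


if __name__ == "__main__":
    # Hypothetical encoder features: 64 channels at reduced resolution.
    feats = torch.randn(1, 64, 60, 80)
    fpm = MultiScaleFeaturePooling(channels=64)
    print(fpm(feats).shape)  # torch.Size([1, 64, 60, 80])
```

The design choice this sketch tries to capture is that, with fusion, later branches with larger dilation rates operate on features already refined at smaller scales, which the abstract credits for robustness to camera motion without feeding multi-scale inputs to the network.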