Learning multi-scale features for foreground segmentation


Lim L. A., Keles H.

PATTERN ANALYSIS AND APPLICATIONS, vol.23, no.3, pp.1369-1380, 2020 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 23 Issue: 3
  • Publication Date: 2020
  • Doi Number: 10.1007/s10044-019-00845-9
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Applied Science & Technology Source, Computer & Applied Sciences, Index Islamicus, zbMATH
  • Page Numbers: pp.1369-1380
  • Keywords: Foreground segmentation, Convolutional neural networks, Feature pooling module, Background subtraction, Video surveillance
  • Hacettepe University Affiliated: No


Foreground segmentation algorithms aim to segment moving objects from the background robustly under various challenging scenarios. Encoder-decoder-type deep neural networks used in this domain have recently produced impressive segmentation results. In this work, we propose a variation of our formerly proposed method (Anonymous 2018) that can be trained end-to-end using only a few training examples. The proposed method extends the feature pooling module of FgSegNet by introducing feature fusion inside this module; the resulting module extracts multi-scale features within images, yielding feature pooling that is robust against camera motion and alleviating the need for multi-scale inputs to the network. Sample visualizations highlight the image regions on which the model focuses most; these regions are also the most semantically relevant. Our method outperforms all existing state-of-the-art methods on the CDnet2014 dataset, with an average overall F-measure of 0.9847. We also evaluate the effectiveness of our method on the SBI2015 and UCSD Background Subtraction datasets. The source code of the proposed method is made available at.
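As a rough illustration of the multi-scale idea behind such a feature pooling module (not the authors' implementation), one can run the same small kernel at several dilation rates over a feature map and fuse the branch outputs; the 3x3 averaging kernel, the dilation rates, and the mean-fusion rule below are assumptions made for this sketch only:

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """'Same'-padded 2D convolution of a single-channel map x with a
    kernel whose taps are spread apart by the given dilation rate."""
    kh, kw = kernel.shape
    pad = rate * (kh // 2)               # zero padding that preserves shape
    xp = np.pad(x, pad)
    H, W = x.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(kh):                  # accumulate one shifted slice per tap
        for j in range(kw):
            out += kernel[i, j] * xp[i * rate:i * rate + H,
                                     j * rate:j * rate + W]
    return out

def multi_scale_pooling(x, rates=(1, 4, 8)):
    """Hypothetical feature pooling: apply the same kernel at several
    dilation rates (several receptive-field sizes on one input scale)
    and fuse the branches by averaging."""
    kernel = np.full((3, 3), 1.0 / 9.0)  # fixed smoothing kernel for the sketch
    branches = [dilated_conv2d(x, kernel, r) for r in rates]
    return np.mean(branches, axis=0)
```

Because the dilation rates enlarge the receptive field without downsampling, a single input resolution suffices, which mirrors the paper's point that multi-scale inputs to the network can be avoided.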