Action Recognition and Localization by Hierarchical Space-Time Segments


Ma S., Zhang J., Ikizler-Cinbis N., Sclaroff S.

IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1-8 December 2013, pp. 2744-2751

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/iccv.2013.341
  • City: Sydney
  • Country: Australia
  • Page Numbers: pp. 2744-2751
  • Affiliated with Hacettepe University: Yes

Abstract

We propose Hierarchical Space-Time Segments as a new representation for action recognition and localization. This representation has a two-level hierarchy. The first level comprises the root space-time segments that may contain a human body. The second level comprises multi-grained space-time segments that contain parts of the root. We present an unsupervised method to generate this representation from video, which extracts both static and non-static relevant space-time segments while preserving their hierarchical and temporal relationships. Using a simple linear SVM on the resulting bag of hierarchical space-time segments representation, we attain action recognition performance better than or comparable to the state of the art on two challenging benchmark datasets, while also producing good action localization results.
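To illustrate the structure described in the abstract, the sketch below shows one plausible way to represent the two-level hierarchy (root segments with multi-grained part segments) and to classify a video via a bag-of-segments histogram and a linear SVM. This is not the authors' code; the class and function names (SpaceTimeSegment, build_bag), the codebook size, and the descriptor dimensionality are all illustrative assumptions.

```python
# Minimal sketch, assuming a precomputed codebook of segment descriptors.
from dataclasses import dataclass, field
from typing import List
import numpy as np
from sklearn.svm import LinearSVC

@dataclass
class SpaceTimeSegment:
    descriptor: np.ndarray  # appearance/motion descriptor of this segment (assumed 64-D here)
    children: List["SpaceTimeSegment"] = field(default_factory=list)  # multi-grained part segments

def build_bag(roots: List[SpaceTimeSegment], codebook: np.ndarray) -> np.ndarray:
    """Quantize every root and part segment against the codebook and
    accumulate a normalized histogram (a bag of hierarchical segments)."""
    hist = np.zeros(len(codebook))
    stack = list(roots)
    while stack:
        seg = stack.pop()
        word = np.argmin(np.linalg.norm(codebook - seg.descriptor, axis=1))
        hist[word] += 1
        stack.extend(seg.children)
    return hist / max(hist.sum(), 1.0)

# Toy usage with random data: one video = a list of root segments.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(100, 64))  # assumed codebook (100 words, 64-D descriptors)

def random_video(n_roots: int = 5) -> List[SpaceTimeSegment]:
    return [SpaceTimeSegment(rng.normal(size=64),
                             [SpaceTimeSegment(rng.normal(size=64)) for _ in range(3)])
            for _ in range(n_roots)]

X = np.stack([build_bag(random_video(), codebook) for _ in range(20)])
y = rng.integers(0, 2, size=20)          # toy action labels
clf = LinearSVC(C=1.0).fit(X, y)         # simple linear SVM, as mentioned in the abstract
print(clf.predict(X[:3]))
```

In the paper's pipeline the descriptors and codebook would come from the unsupervised segment extraction step rather than random data; the sketch only shows how the hierarchy can be flattened into a single histogram feature per video.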