Learning actionness from action/background discrimination


Yalcinkaya Simsek O., Russakovsky O., DUYGULU ŞAHİN P.

SIGNAL IMAGE AND VIDEO PROCESSING, vol.17, no.4, pp.1599-1606, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 17 Issue: 4
  • Publication Date: 2023
  • DOI: 10.1007/s11760-022-02369-y
  • Journal Name: SIGNAL IMAGE AND VIDEO PROCESSING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, zbMATH
  • Page Numbers: pp.1599-1606
  • Keywords: Actionness, Action localization, Action segmentation, Video representation
  • Hacettepe University Affiliated: Yes

Abstract

Localizing actions in instructional web videos is a complex problem because background scenes are unrelated to the task described in the video. Incorrect prediction of action step labels can be reduced by separating background from action clips. Yet, discriminating actions from backgrounds is challenging because the same activity can be performed in various styles. In this study, we aim to improve action localization results by learning the actionness of video clips, which indicates the likelihood that a clip contains an action. We present a method that learns an actionness score for each video clip and uses it to post-process baseline clip-to-step-label assignment scores. We further propose using an auxiliary representation, formed from the baseline clip-to-step-label assignment scores, to reinforce the discrimination of video clips. Experiments on the CrossTask and COIN datasets show that our actionness score improves the performance of both action step localization and action segmentation.
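The post-processing idea described above can be sketched as follows. This is only an illustrative assumption of how a per-clip actionness score might re-weight baseline clip-to-step assignment scores; the combination rule (element-wise multiplication), the array shapes, and all variable names are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T, K = 6, 4                      # hypothetical: T clips, K step labels
assign = rng.random((T, K))      # baseline clip-to-step-label assignment scores
actionness = rng.random(T)       # learned actionness score per clip, in [0, 1)

# Assumed combination rule: scale each clip's step scores by its
# actionness, suppressing predictions for likely-background clips.
refined = assign * actionness[:, None]

# Predicted step label per clip after re-weighting.
pred = refined.argmax(axis=1)
print(pred.shape)  # prints (6,)
```

With scores in [0, 1), multiplicative re-weighting can only lower a clip's step scores, so clips with low actionness contribute less to the final step localization.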