Modulating Bottom-Up and Top-Down Visual Processing via Language-Conditional Filters



Kesen I., Can O. A., Erdem E., Erdem A., Yuret D.

IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Louisiana, United States, 18 - 24 June 2022, pp. 4609-4619

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/cvprw56347.2022.00507
  • City of Publication: Louisiana
  • Country of Publication: United States
  • Page Numbers: pp. 4609-4619
  • Hacettepe University Affiliated: Yes

Abstract

How to best integrate linguistic and perceptual processing in multi-modal tasks that involve language and vision is an important open problem. In this work, we argue that the common practice of using language in a top-down manner, to direct visual attention over high-level visual features, may not be optimal. We hypothesize that using language to also condition the bottom-up processing from pixels to high-level features can benefit overall performance. To support our claim, we propose a U-Net-based model and perform experiments on two language-vision dense-prediction tasks: referring expression segmentation and language-guided image colorization. We compare results where either one or both of the top-down and bottom-up visual branches are conditioned on language. Our experiments reveal that using language to control the filters for bottom-up visual processing, in addition to top-down attention, leads to better results on both tasks and achieves competitive performance. Our linguistic analysis suggests that bottom-up conditioning improves segmentation of objects especially when the input text refers to low-level visual concepts.
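The core idea of language-conditional bottom-up filtering can be sketched as generating convolution kernels from a text embedding, so that even low-level feature extraction depends on the expression. The snippet below is a minimal NumPy illustration of this general mechanism, not the paper's actual architecture; all dimensions and the linear hypernetwork mapping are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: language embedding dim, input/output channels, kernel size.
lang_dim, c_in, c_out, k = 16, 4, 8, 3

# Pooled language embedding (e.g. from a text encoder) for one expression.
lang = rng.standard_normal(lang_dim)

# A linear "hypernetwork" maps the language embedding to convolution weights,
# so the bottom-up filters themselves are conditioned on the text.
W = rng.standard_normal((c_out * c_in * k * k, lang_dim)) * 0.05
filters = (W @ lang).reshape(c_out, c_in, k, k)

def conv2d_valid(x, f):
    """Naive 'valid' 2-D convolution: x is (c_in, H, W), f is (c_out, c_in, k, k)."""
    n_out, n_in, kk, _ = f.shape
    H, Wd = x.shape[1] - kk + 1, x.shape[2] - kk + 1
    out = np.zeros((n_out, H, Wd))
    for o in range(n_out):
        for i in range(H):
            for j in range(Wd):
                out[o, i, j] = np.sum(x[:, i:i+kk, j:j+kk] * f[o])
    return out

feat = rng.standard_normal((c_in, 8, 8))    # low-level visual features (pixels side)
out = conv2d_valid(feat, filters)           # language-modulated bottom-up features
print(out.shape)  # (8, 6, 6)
```

Different expressions yield different `filters`, which is what distinguishes this from purely top-down attention applied after a language-agnostic visual backbone.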