Gaze-directed and saliency-guided approaches of stereo camera control in interactive virtual reality


Cebeci B., Askin M. B., Çapın T. K., Çelikcan U.

Computers and Graphics (Pergamon), vol. 118, pp. 23-32, 2024 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 118
  • Publication Date: 2024
  • DOI: 10.1016/j.cag.2023.10.012
  • Journal Name: Computers and Graphics (Pergamon)
  • Indexed in: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Communication Abstracts, Compendex, Computer & Applied Sciences, INSPEC, Metadex, Civil Engineering Abstracts
  • Pages: pp. 23-32
  • Keywords: Depth perception, Head mounted displays, Stereoscopic rendering, Virtual reality, Visual comfort
  • Hacettepe University Affiliated: Yes

Abstract

Despite remarkable advances in virtual reality (VR) technologies, serious challenges remain in making extended VR sessions with head-mounted displays (HMDs) thoroughly comfortable. 3D stereo imagery can cause discomfort and eye fatigue when poor stereo camera settings result in extreme disparities and vergence-accommodation conflicts. The default stereoscopic parameters of consumer HMDs produce images with shallow depth to circumvent these issues. In this work, we propose a methodology that applies the gaze-directed and visual saliency-guided paradigms to automatic stereo camera control in real-time interactive VR, building on the fundamentals of stereo grading. We evaluate the two approaches at different levels of interaction, first through a user study and then through a performance benchmark. The results show that the gaze-directed approach outperforms the saliency-guided approach in the virtual environments (VEs) tested, and that both methods convey a better overall sense of depth than the default HMD setting without hindering visual comfort. Both approaches also lead to a significant overall enhancement of the VR experience in the more interactive VE.
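
The abstract only summarizes the approach, so the sketch below is illustrative rather than the paper's actual algorithm. It shows one common way such a per-frame stereo grading step could look: the zero-parallax (convergence) plane is placed at the currently gazed depth, and the interaxial separation is scaled down until the nearest and farthest visible depths stay within an angular-disparity comfort budget. The function name, comfort thresholds, and the small-angle disparity formula are assumptions for illustration, not values or code taken from the paper.

```python
import numpy as np

# Assumed comfort limits, in degrees of angular disparity relative to the
# convergence plane (illustrative values only).
MAX_CROSSED_DISPARITY_DEG = 1.0    # points nearer than the convergence plane
MAX_UNCROSSED_DISPARITY_DEG = 1.0  # points farther than the convergence plane


def grade_stereo_camera(gaze_depth_m, near_depth_m, far_depth_m,
                        viewer_ipd_m=0.063):
    """Return (interaxial_m, convergence_m) for one rendered frame.

    Gaze-directed grading step (hypothetical): put the convergence plane at
    the gazed depth, then shrink the interaxial separation so the scene's
    depth extremes stay inside the assumed disparity budget.
    """
    convergence_m = float(np.clip(gaze_depth_m, near_depth_m, far_depth_m))

    def angular_disparity_deg(depth_m, interaxial_m):
        # Small-angle approximation: vergence angle ~ interaxial / depth,
        # so disparity relative to the convergence plane is the difference.
        return np.degrees(interaxial_m * abs(1.0 / depth_m - 1.0 / convergence_m))

    interaxial_m = viewer_ipd_m
    near_d = angular_disparity_deg(near_depth_m, interaxial_m)
    far_d = angular_disparity_deg(far_depth_m, interaxial_m)

    # Disparity scales linearly with interaxial, so a single scale factor
    # brings both extremes back inside the budget.
    scale = min(1.0,
                MAX_CROSSED_DISPARITY_DEG / near_d if near_d > 0 else 1.0,
                MAX_UNCROSSED_DISPARITY_DEG / far_d if far_d > 0 else 1.0)
    return interaxial_m * scale, convergence_m
```

In a real-time system the gaze depth would typically be low-pass filtered across frames so the convergence plane does not jump abruptly, and a saliency-guided variant would replace the eye-tracked gaze sample with the depth of the most salient region of the rendered frame.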