Analyzing Gaze Behaviors in Interactive VR Scenes
- Others:
- Biologically plausible Integrative mOdels of the Visual system : towards synergIstic Solutions for visually-Impaired people and artificial visiON (BIOVISION) ; Inria Sophia Antipolis - Méditerranée (CRISAM) ; Institut National de Recherche en Informatique et en Automatique (Inria)
- Laboratoire Motricité Humaine Expertise Sport Santé (LAMHESS) ; Université Nice Sophia Antipolis (1965 - 2019) (UNS)-Université de Toulon (UTLN)-Université Côte d'Azur (UCA)
- Université Côte d'Azur (UCA)
- Institut de la Vision ; Institut National de la Santé et de la Recherche Médicale (INSERM)-Sorbonne Université (SU)-Centre National de la Recherche Scientifique (CNRS)
- Cognition Behaviour Technology (CobTek) ; Université Nice Sophia Antipolis (1965 - 2019) (UNS)-Centre Hospitalier Universitaire de Nice (CHU Nice)-Institut Claude Pompidou [Nice] (ICP - Nice)-Université Côte d'Azur (UCA)
- Institut Claude Pompidou [Nice] (ICP - Nice)
- ANR-21-CE33-0001,CREATTIVE3D,Création de contextes 3D portés par l'attention pour la basse vision(2021)
- ANR-15-IDEX-0001,UCA JEDI,Idex UCA JEDI(2015)
Description
Gaze is an excellent metric for understanding human attention. However, research on identifying gaze behaviors (for example saccades, fixations, and smooth pursuits) in large (i.e. more than one meter viewing distance), interactive 3D scenes viewed through virtual reality headsets is still in its early stages. The understanding of gaze behaviors is affected by the equipment, the user, the types of scenarios targeted, etc. There is currently little to no consensus on how to select gaze behavior identification methods, or on the impact this choice has on the validation of research hypotheses about human attention. This work investigates the impact of gaze behavior identification approaches on the analysis of human gaze data by re-implementing six state-of-the-art identification algorithms for VR. To underline the potential of the system, we designed a 3D scene with various animated and interactive 2D and 3D stimuli, with which we collected eye tracking data from 20 participants in a user study. We then provide disaggregated analyses of metrics from the literature across methods and stimuli. From what we observe with the current state of the analysis, participants tend to have longer fixations on dynamic 3D stimuli than on static ones, while some algorithms, depending on their nature, fail to detect fixations on specific stimuli for some participants. We also underline differences in fixation patterns across participants, as their fixations differ in quantity and duration across the algorithms.
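To illustrate the kind of identification method the description refers to, below is a minimal velocity-threshold (I-VT style) sketch in Python. It is not one of the six algorithms re-implemented in the paper; the function name, the 30 deg/s threshold, and the assumption of normalized head-relative gaze direction vectors are illustrative assumptions only.

```python
import numpy as np

def identify_fixations_ivt(gaze_dirs, timestamps, velocity_threshold_deg=30.0):
    """Label each gaze sample as fixation (True) or saccade (False)
    using a simple angular velocity threshold (I-VT).

    gaze_dirs: (N, 3) array of unit gaze direction vectors from the headset.
    timestamps: (N,) array of sample times in seconds.
    velocity_threshold_deg: angular velocity cutoff in deg/s
        (30 deg/s is a commonly cited default, not a value from this paper).
    """
    gaze_dirs = np.asarray(gaze_dirs, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)

    # Angle between consecutive gaze directions, in degrees.
    dots = np.clip(np.einsum("ij,ij->i", gaze_dirs[:-1], gaze_dirs[1:]), -1.0, 1.0)
    angles_deg = np.degrees(np.arccos(dots))

    # Angular velocity for each consecutive sample pair (deg/s).
    dt = np.diff(timestamps)
    velocities = angles_deg / np.maximum(dt, 1e-9)

    # Samples below the threshold are treated as belonging to a fixation.
    return np.concatenate([[True], velocities < velocity_threshold_deg])
```

Fixation events would then be obtained by grouping consecutive fixation-labelled samples and applying a minimum duration; this is the kind of step where the identification algorithms compared in the paper differ in nature and, as the analysis shows, in the fixations they detect per stimulus and participant.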
Additional details
- URL
- https://inria.hal.science/hal-04208129
- URN
- urn:oai:HAL:hal-04208129v1
- Origin repository
- UNICA