Exploring and Modeling the Effects of Eye-Tracking Accuracy and Precision on Gaze-Based Steering in Virtual Environments

Xuning Hu, Yichuan Zhang, Yushi Wei, Liangyuting Zhang, Yue Li, Wolfgang Stuerzlinger, Hai-Ning Liang

Abstract:

Recent advances in eye-tracking technology have positioned gaze as an efficient and intuitive input method for Virtual Reality (VR), offering a natural and immersive user experience. As a result, gaze input is now leveraged for fundamental interaction tasks such as selection, manipulation, crossing, and steering. Although several studies have modeled user steering performance across various path characteristics and input methods, our understanding of gaze-based steering in VR remains limited. This gap persists because the unique qualities of eye movements (rapid, continuous motions) and the variability of eye tracking make findings from other input modalities nontransferable to a gaze-based context, underscoring the need for a dedicated investigation of gaze-based steering behaviors and performance. To bridge this gap, we present two user studies that explore and model gaze-based steering. In the first study, we collected user behavior data across various path characteristics and eye-tracking conditions. Based on these data, we propose four refined models that extend the classic Steering Law to predict users' movement time in gaze-based steering tasks, explicitly incorporating the impact of tracking quality. The best-performing model achieves an adjusted R^2 of 0.956, corresponding to a 16% improvement in movement time prediction. This model also yields a substantial reduction in AIC (from 1550 to 1132) and BIC (from 1555 to 1142), indicating improved model quality and a better balance between goodness of fit and model complexity. Data from a second study with varied settings, such as a different eye-tracking sampling rate, further demonstrate the robustness and predictive power of our models. Finally, we present scenarios and applications that show how our models can inform the design of enhanced gaze-based interactions in VR systems.
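For context, the classic Steering Law that the abstract's refined models extend predicts movement time as MT = a + b * (A / W) for a straight path of length A and constant width W. A minimal sketch, assuming a constant-width path; the coefficients and the function name below are illustrative placeholders, not the paper's fitted values or its tracking-quality extension:

```python
def steering_law_mt(A, W, a=0.2, b=0.1):
    """Classic Steering Law: MT = a + b * (A / W).

    A: path length, W: constant path width (same units);
    a, b: empirically fitted constants (hypothetical values here).
    Returns predicted movement time in seconds.
    """
    return a + b * (A / W)

# Example: a 30 cm path that is 2 cm wide
print(steering_law_mt(30.0, 2.0))  # -> 1.7 with these illustrative constants
```

The paper's models additionally incorporate eye-tracking accuracy and precision terms, which this sketch omits.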

Date of publication: Oct - 2025