Gaze-based interaction is a common input method in virtual reality (VR). Eye movements such as fixations and saccades exhibit dynamics that differ from those of other input methods. Previous studies of selection tasks showed that, unlike the mouse, the human gaze is insensitive to target distance and does not fully exploit target width, owing to the characteristics of saccades and microsaccades. However, whether these findings extend to steering tasks remains unexplored. Because steering tasks are widely used in VR for menu adjustment and object manipulation, this study examines whether the findings from selection tasks apply to them. We also model and compare variants of the Steering Law that incorporate eye-movement characteristics, using data on movement time, average speed, and re-entry count, and analyze how path width and length affect performance. We propose three candidate models that incorporate gaze characteristics and achieve a superior fit (R² > 0.964) compared with the original Steering Law, improving time-prediction accuracy, AIC, and BIC by 7%, 26%, and 10%, respectively. These models offer valuable insights for game and interface designers implementing gaze-based controls in VR environments.
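For context, the baseline against which the candidate models are compared is the original Steering Law of Accot and Zhai, which predicts movement time from path length and width. A standard formulation is sketched below; the constants a and b are empirically fitted and the specific model variants proposed in this work are not reproduced here.

% Original Steering Law (Accot & Zhai): movement time T grows linearly
% with the index of difficulty ID = A / W, where A is the path length,
% W is the path width, and a, b are empirically fitted constants.
\begin{equation}
  T = a + b \cdot \frac{A}{W}
\end{equation}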