How do eye movements help plan and guide actions in the natural world?
How does gaze support a sequence of actions?
Eye movements in the natural environment have primarily been studied for over-learned, habitual everyday activities (tea-making, sandwich-making, hand-washing) that come with a fixed sequence of associated actions. In this study, we were interested in how humans plan and execute actions for tasks that do not have an inherent action sequence. To that end, we asked subjects to sort objects by their features on a life-size shelf in a virtual environment while we recorded their eye and body movements. We characterized the general properties of gaze behavior while acting under natural conditions and provide a data-driven method for analyzing the different action-oriented functions of gaze. The results show that, without a predefined action sequence, humans prefer to plan only their immediate actions: eye movements first search for the target object to act on, then guide the hand towards it and monitor the action until it is completed. This simple, just-in-time strategy suggests that humans prefer locally sub-optimal behavior over planning ahead, which would impose a sustained cognitive load.
Paper
Code
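As a hedged illustration of how such action-oriented functions of gaze might be labeled in a data-driven way, the sketch below assigns fixations on the target object to a search, guide, or monitor phase based on their timing relative to grasp events. The column names, timings, and the three-way split are illustrative assumptions, not the exact pipeline used in the paper.

```python
# Illustrative sketch: labeling fixations on the target object by their
# action-oriented function (search, guide, monitor) from their timing
# relative to grasp onset/offset. All names and thresholds are assumptions.
import pandas as pd

def label_fixation(fix, grasp_onset, grasp_offset):
    """Assign a coarse functional label to a single fixation on the target object."""
    if fix["end"] < grasp_onset:                        # ends before the reach starts
        return "search"
    elif fix["start"] < grasp_onset <= fix["end"]:      # spans the reach onset
        return "guide"
    elif grasp_onset <= fix["start"] <= grasp_offset:   # during the manipulation
        return "monitor"
    return "other"

# Toy fixation data (seconds, relative to trial start).
fixations = pd.DataFrame({
    "start": [0.2, 1.1, 1.6],
    "end":   [0.9, 1.5, 2.4],
})
fixations["function"] = [
    label_fixation(row, grasp_onset=1.3, grasp_offset=2.5)
    for _, row in fixations.iterrows()
]
print(fixations)
```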
What are the spatial biases in gaze behavior while interacting with tools?
Here we investigated active inference processes revealed by eye movements during interactions with familiar and novel tools at two levels of interaction realism. We presented participants with 3D tool models that were either familiar or unfamiliar and oriented either congruently or incongruently with their handedness, and asked them to either lift or use the tools. Importantly, we used the same experimental design in two setups: in the first experiment, participants interacted via a VR controller; in the second, they performed the task with an interaction setup that allowed differentiated hand and finger movements. We analyzed the odds of fixations on the different tool parts, and their eccentricity relative to those parts, before action initiation.
Paper
Code
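To make the odds analysis concrete, here is a minimal sketch comparing the odds of fixating one tool part (e.g., the handle) against another before action initiation across two conditions. The counts and condition names are made up, and the paper's actual statistical modeling may differ; this only illustrates what "odds of fixations towards the tool parts" means.

```python
# Illustrative sketch: odds of fixating the handle vs. the effector before
# action initiation, compared across two (made-up) conditions.
import numpy as np

# Pre-movement fixation counts: [on handle, on effector] (placeholder values).
familiar_tool   = np.array([42, 18])
unfamiliar_tool = np.array([25, 35])

def odds(counts):
    """Odds of fixating the handle relative to the effector."""
    return counts[0] / counts[1]

odds_ratio = odds(familiar_tool) / odds(unfamiliar_tool)
print(f"odds (familiar):   {odds(familiar_tool):.2f}")
print(f"odds (unfamiliar): {odds(unfamiliar_tool):.2f}")
print(f"odds ratio:        {odds_ratio:.2f}")
```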
Can gaze behavior identify the task performed by the user?
Here, we used simple gaze features, such as the proportion of fixations on regions of interest, to classify which pick-and-place task the user was performing. We used SVMs with leave-one-subject-out cross-validation to predict the performed task. Our results show that even simple gaze features provide a robust signal and can successfully decode a user's task.
Paper
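A minimal sketch of this kind of decoding pipeline, assuming scikit-learn and placeholder gaze features; the actual features, labels, and SVM settings come from the experiment and may differ.

```python
# Illustrative sketch: SVM on simple gaze features (e.g., fixation proportions
# per region of interest) with leave-one-subject-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features = 120, 6              # e.g., fixation proportions on 6 ROIs
X = rng.random((n_trials, n_features))     # gaze feature matrix (trials x features)
y = rng.integers(0, 2, n_trials)           # task label per trial (placeholder)
subjects = np.repeat(np.arange(12), 10)    # subject ID per trial (the "group")

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"mean accuracy across held-out subjects: {scores.mean():.2f}")
```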
How do the eyes and hands coordinate to plan actions?
Studies of eye-hand coordination have primarily been conducted with sedentary tasks that do not require full-body movements. These studies have shown that eye fixations precede manual action by roughly one second. However, we do not yet know how the eyes and hands coordinate in a larger spatial context, where action locations vary and movements must be coordinated across different rotational planes.
Preprint
Code
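As a simple illustration of the eye-hand latency measure mentioned above, the sketch below computes how far the eyes lead the hand to the action target from per-trial timestamps. The timestamps and event definitions are made up for illustration.

```python
# Illustrative sketch: per-trial eye lead time (how long the first fixation on
# the target precedes the hand's arrival). Values are placeholders.
import numpy as np

fixation_onset = np.array([2.10, 3.45, 5.02, 6.80])  # first fixation on target (s)
hand_arrival   = np.array([3.05, 4.60, 5.95, 7.75])  # hand reaches the target (s)

eye_lead = hand_arrival - fixation_onset  # positive = eyes lead the hand
print(f"median eye lead time: {np.median(eye_lead):.2f} s")
```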
What is the neural basis of anticipatory gaze behavior?
Here, we use Generalized Eigendecomposition (GED) to identify the neural sources involved in the active inference processes that underlie anticipatory gaze behavior.
In Prep
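A minimal sketch of how generalized eigendecomposition can serve as a source-separation step, assuming a "signal" covariance (e.g., an EEG window preceding anticipatory fixations) contrasted against a reference covariance. The data, windows, and regularization below are placeholders, not the actual analysis pipeline.

```python
# Illustrative sketch: generalized eigendecomposition (GED) contrasting a
# signal covariance S against a reference covariance R. Random data stand in
# for preprocessed EEG epochs.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 5000
signal_eeg = rng.standard_normal((n_channels, n_samples))     # e.g., pre-fixation window
reference_eeg = rng.standard_normal((n_channels, n_samples))  # e.g., baseline window

S = np.cov(signal_eeg)     # covariance of the condition of interest
R = np.cov(reference_eeg)  # reference covariance
R += 1e-6 * np.trace(R) / n_channels * np.eye(n_channels)  # shrinkage for stability

# Solve S w = lambda R w; eigenvectors with the largest eigenvalues are
# spatial filters that maximize signal-to-reference variance.
evals, evecs = eigh(S, R)
order = np.argsort(evals)[::-1]
top_filter = evecs[:, order[0]]        # best spatial filter
component = top_filter @ signal_eeg    # component time course
pattern = S @ top_filter               # forward model (topography) for inspection
print(evals[order][:5])
```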
Collaborative projects
Combining EEG and eye-tracking in VR.
Is human-human collaboration different from human-robot collaboration?
How do humans perceive autonomous vehicles?