Projects

How do eye movements plan and guide actions in the natural world?

How does gaze support a sequence of actions?

Eye movements in the natural environment have primarily been studied for overlearned, habitual everyday activities (tea-making, sandwich-making, hand-washing) that have a fixed sequence of actions associated with them. These studies typically categorize eye movements by the low-level action plans they serve: locating the object of interest, directing the body or hand to the object, and monitoring action execution. In this study, we were interested in generalizing these task-oriented eye movements to novel tasks that have no inherent action sequence. To that end, we asked subjects to sort objects by their features on a life-size shelf in a virtual environment while we recorded their eye and body movements.
Preprint Code

What are the spatial biases in gaze behavior while interacting with tools?

Here, we investigated active inference processes revealed by eye movements during interactions with familiar and novel tools at two levels of interaction realism. We presented participants with 3D tool models that were either familiar or unfamiliar and oriented congruently or incongruently with their handedness, and asked them to interact with the tools by lifting or using them. Importantly, we used the same experimental design in two setups: in the first experiment, participants interacted with a VR controller; in the second, they performed the task with an interaction setup that allowed differentiated hand and finger movements. We analyzed the differences in the odds of fixating the tool parts and in fixation eccentricity relative to those parts before action initiation.
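A rough sketch of this kind of analysis (not the published pipeline; see the linked paper and code) is a logistic regression on per-fixation data, where the outcome is whether a pre-movement fixation landed on a given tool part and exponentiated coefficients give odds ratios. All column names, factor levels, and the simulated effect below are hypothetical:

```python
# Illustrative only: odds of fixating the tool's effector before action initiation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical pre-movement fixations pooled over participants

# Simulated per-fixation data: factors crossed at random, with fixations landing
# on the effector more often for familiar tools (a purely illustrative effect).
familiarity = rng.choice(["familiar", "novel"], size=n)
orientation = rng.choice(["congruent", "incongruent"], size=n)
task = rng.choice(["lift", "use"], size=n)
p_effector = np.where(familiarity == "familiar", 0.6, 0.4)
fixations = pd.DataFrame({
    "on_effector": rng.binomial(1, p_effector),  # 1 = fixation on the effector
    "familiarity": familiarity,
    "orientation": orientation,
    "task": task,
})

# Logistic regression of fixation location on the experimental factors;
# exponentiated coefficients are odds ratios for fixating the effector.
fit = smf.logit("on_effector ~ familiarity + orientation + task", data=fixations).fit(disp=0)
print(np.exp(fit.params))
```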
Paper Code

Can gaze behavior identify the task performed by the user?

Here, we used simple gaze features, such as the proportion of fixations on regions of interest, to classify a simple pick-and-place task performed by the user. We used SVMs with leave-one-subject-out cross-validation to predict the task performed. Our results show that even simple gaze features provide a robust signal and can successfully decode a user's task.
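A minimal sketch of this decoding scheme (not the original analysis code), using simulated gaze features, a linear SVM, and leave-one-subject-out cross-validation; feature counts, labels, and data shapes are hypothetical:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_features = 10, 20, 4  # e.g., fixation proportions on 4 ROIs

X = rng.random((n_subjects * trials_per_subject, n_features))  # gaze features per trial
y = rng.integers(0, 2, size=len(X))                            # task label per trial
groups = np.repeat(np.arange(n_subjects), trials_per_subject)  # subject ID per trial

# Train on all-but-one subject, test on the held-out subject, repeat for every subject.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean held-out-subject accuracy: {scores.mean():.2f}")
```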
Paper

How do eyes and hand coordinate to plan actions?

Studies of eye-hand coordination have primarily been conducted for sedentary tasks that do not require full-body movements. These studies have shown that eye fixations precede manual action by ~1 second. However, we do not yet know how the eyes and hands coordinate in a larger spatial context, where action locations vary and require coordination in different rotational planes.
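As a toy illustration of that lead time, assuming hypothetical fixation-onset and hand-arrival timestamps for a few action targets:

```python
# Eye-hand lead time: interval between the first fixation on an action target
# and the hand's arrival at that target. Timestamps (in seconds) are made up.
import numpy as np

fixation_onsets = np.array([1.20, 3.05, 5.40, 7.10])  # first fixation on each target
hand_arrivals   = np.array([2.15, 4.10, 6.35, 8.25])  # hand reaches the same target

lead_times = hand_arrivals - fixation_onsets              # positive = eyes lead the hand
print(f"median eye lead: {np.median(lead_times):.2f} s")  # ~1 s in seated tasks
```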
Preprint Code

What is the neural basis of anticipatory gaze behavior?

Here, we use generalized eigendecomposition (GED) to identify the neural sources involved in the active inference processes that give rise to anticipatory gaze behavior.
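As a sketch of the generic GED recipe on simulated data (not this project's pipeline): contrast the covariance of a condition of interest against a reference covariance and solve the generalized eigenvalue problem; channel counts and the injected "source" are made up:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 5000

baseline = rng.standard_normal((n_channels, n_samples))            # reference epochs
signal = rng.standard_normal((n_channels, n_samples))              # condition of interest
signal[5] += 2.0 * np.sin(np.linspace(0, 200 * np.pi, n_samples))  # boost one "source"

S = np.cov(signal)    # covariance of the condition of interest
R = np.cov(baseline)  # covariance of the reference condition

# Solve S w = lambda R w; eigenvectors with the largest eigenvalues define spatial
# filters that maximally separate the two conditions.
eigvals, eigvecs = eigh(S, R)
order = np.argsort(eigvals)[::-1]
top_filter = eigvecs[:, order[0]]  # best spatial filter (channel weights)
component = top_filter @ signal    # component time course for further analysis
print("top eigenvalue (signal/reference power ratio):", round(eigvals[order[0]], 2))
```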
In Prep

Collaborative projects

Is human-human collaboration different from human-robot collaboration?


Paper Code

How do humans perceive autonomous vehicles?


Paper

How does embodied spatial exploration affect navigation abilities?


Paper Code