Dual-View Gaze Estimation

Dual-view gaze estimation aims to improve the accuracy and robustness of gaze tracking by using information from two cameras instead of one. Current research focuses on methods that effectively fuse features from the two views, often employing convolutional neural networks and transformers, while addressing challenges such as camera calibration and the need for extensive multi-view training data. Unsupervised adaptation techniques and rotation-constrained feature fusion are emerging as promising approaches for generalizing across different camera setups and head poses. This research area holds significant potential for human-computer interaction applications, particularly those requiring accurate gaze tracking in diverse, uncontrolled environments.
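To make the rotation constraint concrete: the same gaze direction, expressed in each camera's coordinate frame, is related by the known extrinsic rotation between the cameras. A minimal sketch of output-level fusion under this constraint is shown below — the function name `fuse_dual_view_gaze` and the simple average are illustrative assumptions, not the method of any particular paper (published approaches typically fuse learned features rather than final gaze vectors):

```python
import numpy as np

def fuse_dual_view_gaze(g1, g2, R_2to1):
    """Fuse per-view gaze direction estimates using the known
    camera-2 -> camera-1 rotation (illustrative sketch only).

    g1: unit gaze vector predicted in camera-1 coordinates.
    g2: unit gaze vector predicted in camera-2 coordinates.
    R_2to1: 3x3 rotation mapping camera-2 coordinates to camera-1.
    Returns a fused unit gaze vector in camera-1 coordinates.
    """
    g2_in_1 = R_2to1 @ g2   # express the view-2 estimate in frame 1
    fused = g1 + g2_in_1    # average the two estimates (up to scale)
    return fused / np.linalg.norm(fused)

# Example: camera 2 is rotated 30 degrees about the y-axis
# relative to camera 1 (a hypothetical calibration).
theta = np.deg2rad(30.0)
R_2to1 = np.array([
    [ np.cos(theta), 0.0, np.sin(theta)],
    [ 0.0,           1.0, 0.0          ],
    [-np.sin(theta), 0.0, np.cos(theta)],
])

g_true = np.array([0.0, 0.0, -1.0])  # ground-truth gaze in frame 1
g1 = g_true                          # view-1 estimate (noise-free here)
g2 = R_2to1.T @ g_true               # the same gaze seen from camera 2
print(fuse_dual_view_gaze(g1, g2, R_2to1))  # close to [0, 0, -1]
```

With noisy per-view predictions, rotating both estimates into a common frame before averaging is what lets the second view reduce error rather than add inconsistency; without the rotation, the two vectors would disagree even when both are correct.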

Papers