Paper ID: 2203.04752
Human Gaze Guided Attention for Surgical Activity Recognition
Abdishakour Awale, Duygu Sarikaya
Modeling and automatically recognizing surgical activities are fundamental steps toward automation in surgery and play important roles in providing timely feedback to surgeons. Accurately recognizing surgical activities in video poses a challenging problem that requires an effective means of learning both spatial and temporal dynamics. Human gaze and visual saliency carry important information about visual attention and can be used to extract more relevant features that better reflect these spatial and temporal dynamics. In this study, we propose to use human gaze with a spatio-temporal attention mechanism for activity recognition in surgical videos. Our model builds on an I3D-based architecture: it learns spatio-temporal features using 3D convolutions and learns an attention map using human gaze as supervision. We evaluate our model on the Suturing task of JIGSAWS, a publicly available surgical video understanding dataset. To our knowledge, we are the first to use human gaze for surgical activity recognition. Our results and ablation studies support the contribution of using human gaze to guide attention: our model outperforms state-of-the-art models with an accuracy of 85.4%.
Submitted: Mar 9, 2022
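
The abstract does not detail the exact attention formulation, so the following is a minimal PyTorch sketch of the general idea it describes: a 3D-convolutional backbone produces spatio-temporal features, a small head predicts a per-frame spatial attention map, and that map is supervised with a human-gaze saliency distribution alongside the activity classification loss. Everything here is an illustrative assumption, not the paper's implementation: the backbone is a single Conv3d block standing in for I3D, and the names (`GazeGuidedAttention`, `gaze_supervision_loss`), the KL-divergence loss, and the 0.5 loss weight are all hypothetical choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GazeGuidedAttention(nn.Module):
    """Hypothetical gaze-supervised spatial attention over 3D-CNN features.

    A single Conv3d block stands in for the I3D backbone; all names and
    hyperparameters are illustrative, not the paper's architecture.
    """

    def __init__(self, in_channels=3, feat_channels=64, num_classes=10):
        super().__init__()
        # Stand-in for an I3D backbone: one 3D convolution block.
        self.backbone = nn.Sequential(
            nn.Conv3d(in_channels, feat_channels, kernel_size=3, padding=1),
            nn.BatchNorm3d(feat_channels),
            nn.ReLU(inplace=True),
        )
        # 1x1x1 conv predicts a per-frame spatial attention logit map.
        self.attn_head = nn.Conv3d(feat_channels, 1, kernel_size=1)
        self.classifier = nn.Linear(feat_channels, num_classes)

    def forward(self, video):
        # video: (B, C, T, H, W)
        feats = self.backbone(video)          # (B, F, T, H, W)
        attn_logits = self.attn_head(feats)   # (B, 1, T, H, W)
        b, _, t, h, w = attn_logits.shape
        # Softmax over the spatial locations of each frame.
        attn = F.softmax(attn_logits.view(b, 1, t, h * w), dim=-1)
        attn = attn.view(b, 1, t, h, w)
        # Attention-weighted spatial pooling, then temporal averaging.
        pooled = (feats * attn).sum(dim=(3, 4)).mean(dim=2)  # (B, F)
        return self.classifier(pooled), attn


def gaze_supervision_loss(attn, gaze_map, eps=1e-8):
    """KL divergence between a gaze saliency map and the predicted attention.

    gaze_map: (B, 1, T, H, W), assumed normalized to sum to 1 per frame.
    The choice of KL divergence here is an assumption for illustration.
    """
    b, _, t, h, w = attn.shape
    p = gaze_map.view(b, t, h * w).clamp_min(eps)
    q = attn.view(b, t, h * w).clamp_min(eps)
    return (p * (p.log() - q.log())).sum(-1).mean()


# Usage: joint training with classification + gaze-guided attention losses.
model = GazeGuidedAttention()
video = torch.randn(2, 3, 16, 56, 56)             # two 16-frame clips
gaze = torch.rand(2, 1, 16, 56, 56)
gaze = gaze / gaze.sum(dim=(3, 4), keepdim=True)  # per-frame distribution
labels = torch.randint(0, 10, (2,))

logits, attn = model(video)
loss = F.cross_entropy(logits, labels) + 0.5 * gaze_supervision_loss(attn, gaze)
loss.backward()
```

In this sketch the gaze map acts purely as a training signal: at inference time only the video is needed, and the learned attention re-weights the backbone features toward regions a surgeon would plausibly fixate on.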