Surgical Scene Analysis
Surgical scene analysis develops computer vision systems that understand and reconstruct the three-dimensional environment of surgical procedures, with the primary aims of improving surgical precision, safety, and training. Current research relies heavily on neural radiance fields (NeRFs) and Gaussian splatting, often combined with optical flow analysis and deep learning-based point cloud matching, to achieve real-time 3D reconstruction of deformable tissues and instrument tracking, even under challenging conditions such as occlusion. These advances support robotic surgery, surgical planning, and training simulation by providing more accurate and efficient representations of the surgical field. In parallel, research is actively exploring methods that improve robustness to domain shift and reduce the need for extensive manual annotation of training data.
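To make the reconstruction step concrete, the sketch below shows the volume-rendering compositing that NeRF-style methods use to turn per-sample densities and colors along a camera ray into a rendered pixel; Gaussian splatting performs an analogous alpha-compositing over projected Gaussians. This is a minimal illustrative example, not the method of any particular surgical system: the function name composite_ray and the toy inputs are assumptions made here for clarity.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """NeRF-style volume rendering along one camera ray.

    densities: (N,) non-negative volume densities at N sampled points
    colors:    (N, 3) RGB radiance predicted at each sample
    deltas:    (N,) distances between consecutive samples
    Returns the composited RGB value and the per-sample weights.
    """
    # Opacity of each sample from its density and interval length
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Contribution of each sample to the final pixel color
    weights = alphas * transmittance
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy usage: 64 samples along a ray through a hypothetical tissue volume
n = 64
rng = np.random.default_rng(0)
rgb, w = composite_ray(
    rng.uniform(0.0, 5.0, n),        # densities
    rng.uniform(0.0, 1.0, (n, 3)),   # colors
    np.full(n, 0.02),                # sample spacing
)
print("pixel color:", rgb, "accumulated opacity:", w.sum())
```

The per-sample weights produced here are also what downstream tasks such as depth estimation and deformable-tissue tracking typically reuse, since their expectation over sample positions gives a depth estimate along the ray.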