Mixed Reality
Mixed reality (MR) blends real and virtual environments with the aim of enhancing human-computer interaction and improving task performance across diverse fields. Current research focuses on developing robust MR interfaces for applications such as robotic surgery, manufacturing, and training, often employing computer vision techniques (e.g., 3D pose estimation, object tracking) and machine learning models (e.g., neural networks, large language models) to achieve real-time interaction and accurate environmental understanding. This interdisciplinary work is improving efficiency, safety, and accessibility in tasks ranging from complex surgical procedures to collaborative robot control. The release of open-source platforms and datasets is further accelerating progress and fostering collaboration within the research community.
Papers
SurgeoNet: Realtime 3D Pose Estimation of Articulated Surgical Instruments from Stereo Images using a Synthetically-trained Network
Ahmed Tawfik Aboukhadra, Nadia Robertini, Jameel Malik, Ahmed Elhayek, Gerd Reis, Didier Stricker
StraightTrack: Towards Mixed Reality Navigation System for Percutaneous K-wire Insertion
Han Zhang, Benjamin D. Killeen, Yu-Chun Ku, Lalithkumar Seenivasan, Yuxuan Zhao, Mingxu Liu, Yue Yang, Suxi Gu, Alejandro Martin-Gomez, Russell H. Taylor, Greg Osgood, Mathias Unberath