Paper ID: 2310.01735

Learning Expected Appearances for Intraoperative Registration during Neurosurgery

Nazim Haouchine, Reuben Dorent, Parikshit Juvekar, Erickson Torio, William M. Wells, Tina Kapur, Alexandra J. Golby, Sarah Frisken

We present a novel method for intraoperative patient-to-image registration by learning Expected Appearances. Our method uses preoperative imaging to synthesize patient-specific expected views through a surgical microscope for a predicted range of transformations. It then estimates the camera pose by minimizing the dissimilarity between the intraoperative 2D view through the optical microscope and the synthesized expected appearance. In contrast to conventional methods, our approach transfers the processing tasks to the preoperative stage, thereby reducing the impact of the low-resolution, distorted, and noisy intraoperative images that often degrade registration accuracy. We applied our method in the context of neuronavigation during brain surgery. We evaluated our approach on synthetic data and on retrospective data from six clinical cases. Our method outperformed state-of-the-art methods and achieved accuracies that met current clinical standards.
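To make the registration step concrete, the following is a minimal, hypothetical sketch of the idea described above: choosing the transformation whose preoperatively synthesized "Expected Appearance" best matches the intraoperative microscope view. The function names (`synthesize_expected_appearance`, `candidate_poses`) and the use of normalized cross-correlation as the (dis)similarity measure are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only; not the authors' implementation.
# Assumes the expected appearances can be rendered (or precomputed) per candidate pose.
import numpy as np


def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two equally sized grayscale images (higher is better)."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())


def estimate_pose(intraop_view, candidate_poses, synthesize_expected_appearance):
    """Return the candidate transformation whose synthesized Expected Appearance
    is most similar to the intraoperative view (i.e., minimal dissimilarity)."""
    best_pose, best_score = None, -np.inf
    for pose in candidate_poses:
        expected = synthesize_expected_appearance(pose)  # generated preoperatively
        score = normalized_cross_correlation(intraop_view, expected)
        if score > best_score:
            best_pose, best_score = pose, score
    return best_pose
```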

Submitted: Oct 3, 2023