Paper ID: 2206.06813

Learning towards Synchronous Network Memorizability and Generalizability for Continual Segmentation across Multiple Sites

Jingyang Zhang, Peng Xue, Ran Gu, Yuning Gu, Mianxin Liu, Yongsheng Pan, Zhiming Cui, Jiawei Huang, Lei Ma, Dinggang Shen

In clinical practice, a segmentation network is often required to learn continually from a sequential data stream acquired at multiple sites, rather than from a consolidated dataset, due to storage costs and privacy restrictions. However, during continual learning, existing methods are usually limited in either memorizability on previous sites or generalizability to unseen sites. This paper tackles the challenging problem of Synchronous Memorizability and Generalizability (SMG), aiming to simultaneously improve performance on both previous and unseen sites, by proposing a novel SMG-Learning framework. First, we propose a Synchronous Gradient Alignment (SGA) objective, which not only promotes memorizability by enforcing coordinated optimization on a small exemplar set from previous sites (the replay buffer), but also enhances generalizability by encouraging site-invariance under simulated domain shift. Second, to simplify optimization of the SGA objective, we design a Dual-Meta algorithm that approximates it as dual meta-objectives, avoiding expensive computational overhead. Third, for efficient rehearsal, we configure the replay buffer by additionally considering inter-site diversity to reduce redundancy. Experiments on prostate MRI data sequentially acquired from six institutes demonstrate that our method achieves higher memorizability and generalizability than state-of-the-art methods. Code is available at https://github.com/jingyzhang/SMG-Learning.
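The abstract's Dual-Meta idea, approximating gradient alignment with meta-objectives evaluated after a virtual update, can be illustrated with a minimal first-order sketch. Everything below is an assumption-laden illustration, not the authors' implementation (see the linked repository for that): the function name dual_meta_step, the inner learning rate, the loss weights, and the way the domain-shifted batch is produced are all hypothetical, and cross-entropy stands in for whatever segmentation loss the paper actually uses.

```python
import copy
import torch
import torch.nn.functional as F

def dual_meta_step(model, optimizer, cur_batch, replay_batch, shifted_batch,
                   inner_lr=1e-3, lam_mem=1.0, lam_gen=1.0):
    """One hypothetical SMG-style training step (first-order sketch).

    cur_batch:     (images, labels) from the current site
    replay_batch:  (images, labels) from the replay buffer of previous sites
    shifted_batch: (images, labels) under a simulated domain shift
    """
    x_cur, y_cur = cur_batch
    x_rep, y_rep = replay_batch
    x_shf, y_shf = shifted_batch

    optimizer.zero_grad()

    # Base segmentation loss on the current-site batch.
    base_loss = F.cross_entropy(model(x_cur), y_cur)
    base_loss.backward()

    # Virtual inner update: one SGD step on a detached copy of the network,
    # using only the current-site loss.
    fast = copy.deepcopy(model)
    inner_loss = F.cross_entropy(fast(x_cur), y_cur)
    inner_grads = torch.autograd.grad(inner_loss, fast.parameters())
    with torch.no_grad():
        for p, g in zip(fast.parameters(), inner_grads):
            p -= inner_lr * g

    # Dual meta-objectives evaluated with the virtually updated weights:
    #   (i)  replay buffer from previous sites  -> memorizability
    #   (ii) domain-shifted current batch       -> generalizability
    meta_loss = (lam_mem * F.cross_entropy(fast(x_rep), y_rep)
                 + lam_gen * F.cross_entropy(fast(x_shf), y_shf))
    meta_grads = torch.autograd.grad(meta_loss, fast.parameters())

    # First-order approximation: fold the meta-gradients into the live
    # model's gradients before the optimizer step.
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            if p.grad is None:
                p.grad = g.clone()
            else:
                p.grad += g
    optimizer.step()
    return (base_loss + meta_loss).item()
```

In this reading, improving the post-update losses on the replay buffer and on the shifted batch implicitly rewards agreement between their gradients and the current-site gradient, which is the gradient-alignment effect the SGA objective describes; the exact formulation and the higher-order terms are detailed in the paper itself.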

Submitted: Jun 14, 2022