Multi-Center
Multi-center research focuses on developing and validating machine learning models that perform reliably across diverse datasets from multiple medical institutions, addressing data heterogeneity and privacy constraints. Current efforts concentrate on improving model generalizability through techniques such as federated learning, transfer learning (including "learning without forgetting"), and data augmentation, often employing architectures such as U-Nets, CycleGANs, and various transformer-based models. This work is crucial for advancing the clinical applicability of AI in healthcare, enabling robust and reliable diagnostic and prognostic tools that can be deployed across settings without compromising patient data privacy.
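To make the federated learning idea concrete, here is a minimal sketch of one communication round of federated averaging (FedAvg): each simulated site updates a shared linear model on its own private data, and a central server aggregates the updates weighted by local cohort size. All names (`fedavg`, `local_sgd_step`) and the synthetic three-hospital setup are illustrative assumptions, not taken from the papers listed below.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client parameters (the FedAvg aggregation rule).

    client_weights: list of 1-D parameter arrays, one per client.
    client_sizes:   number of local training samples per client.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)  # shape: (n_clients, n_params)
    # Each site's contribution is proportional to its local dataset size.
    return (sizes[:, None] * stacked).sum(axis=0) / sizes.sum()

def local_sgd_step(w, X, y, lr=0.1):
    """One gradient step of linear least squares on a site's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

# One round across three simulated hospitals with differing cohort sizes;
# raw patient data never leaves a site, only model parameters are shared.
rng = np.random.default_rng(0)
true_w = np.array([1.5, -0.5])
clients = []
for n in (40, 60, 100):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    clients.append((X, y))

w_global = np.zeros(2)
local_updates = [local_sgd_step(w_global, X, y) for X, y in clients]
w_global = fedavg(local_updates, [len(y) for _, y in clients])
print(w_global)
```

In practice many such rounds are run, and the local step would be several epochs of training on a full clinical model; the aggregation rule, however, stays this simple.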
Papers
Advancing Italian Biomedical Information Extraction with Transformers-based Models: Methodological Insights and Multicenter Practical Application
Claudio Crema, Tommaso Mario Buonocore, Silvia Fostinelli, Enea Parimbelli, Federico Verde, Cira Fundarò, Marina Manera, Matteo Cotta Ramusino, Marco Capelli, Alfredo Costa, Giuliano Binetti, Riccardo Bellazzi, Alberto Redolfi
Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML
Robin van de Water, Hendrik Schmidt, Paul Elbers, Patrick Thoral, Bert Arnrich, Patrick Rockenschaub