Paper ID: 2401.02044
Multi-modal vision-language model for generalizable annotation-free pathological lesions localization and clinical diagnosis
Hao Yang, Hong-Yu Zhou, Zhihuan Li, Yuanxu Gao, Cheng Li, Weijian Huang, Jiarun Liu, Hairong Zheng, Kang Zhang, Shanshan Wang
Automatically identifying pathologies from medical images aids the understanding of the emergence and progression of diseases, and such an ability is crucial in clinical diagnostics. However, existing deep learning models heavily rely on expert annotations and lack generalization capabilities in open clinical environments. In this study, we present a generalizable vision-language pre-training model for Annotation-Free pathological lesion Localization (AFLoc). The core strength of AFLoc lies in its extensive multi-level semantic structure-based contrastive learning, which comprehensively aligns multi-granularity medical concepts from reports with abundant image features, adapting to diverse expressions of pathologies and to unseen pathologies without relying on expert image annotations. We demonstrate proof of concept on chest X-ray (CXR) images, with extensive experimental validation across 4 distinct external datasets encompassing 11 types of chest pathologies. The results demonstrate that AFLoc surpasses state-of-the-art methods in pathological lesion localization and disease classification, and even outperforms the human benchmark in locating 5 different pathologies. Additionally, we further verify its generalization ability by applying it to retinal fundus images. Our approach showcases the versatility of AFLoc and underscores its suitability for clinical diagnosis in complex clinical environments.
Submitted: Jan 4, 2024
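To make the core idea of multi-level image-report contrastive alignment concrete, below is a minimal PyTorch sketch, not the authors' implementation. It shows only two of the possible granularities (a global report-level term and a word-level term with attention pooling over image patches); the function names, the attention-pooling form, and the temperature value are illustrative assumptions, and AFLoc's actual multi-level semantic structure may differ.

```python
# Minimal sketch of multi-level image-report contrastive alignment.
# Assumptions (not from the paper): two alignment levels, InfoNCE with
# temperature 0.07, and scaled dot-product attention for word pooling.
import torch
import torch.nn.functional as F


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss between two batches of embeddings (B, D).

    Matched pairs share a row index; all other rows in the batch act as negatives.
    """
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                  # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def word_level_alignment(img_patches: torch.Tensor,
                         word_tokens: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """Align word embeddings with attention-pooled local image features.

    img_patches: (B, P, D) local image (patch) features.
    word_tokens: (B, W, D) word embeddings from the paired report.
    Each word attends over the image patches to form a word-conditioned
    visual feature; words are then mean-pooled for a batch-level loss.
    """
    attn = torch.softmax(word_tokens @ img_patches.transpose(1, 2) /
                         word_tokens.size(-1) ** 0.5, dim=-1)   # (B, W, P)
    word_visual = attn @ img_patches                            # (B, W, D)
    return info_nce(word_visual.mean(dim=1), word_tokens.mean(dim=1), temperature)


def multi_level_loss(img_global: torch.Tensor, img_patches: torch.Tensor,
                     report_emb: torch.Tensor, word_tokens: torch.Tensor) -> torch.Tensor:
    """Sum a report-level (global) and a word-level (local) contrastive term."""
    return (info_nce(img_global, report_emb) +
            word_level_alignment(img_patches, word_tokens))


if __name__ == "__main__":
    # Random features standing in for encoder outputs of a batch of 8 studies.
    B, P, W, D = 8, 49, 16, 128
    loss = multi_level_loss(torch.randn(B, D), torch.randn(B, P, D),
                            torch.randn(B, D), torch.randn(B, W, D))
    print(loss.item())
```

Because the word-to-patch attention map localizes each textual concept in the image, a model trained this way can highlight lesion regions at inference time from text prompts alone, which is what enables annotation-free localization.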