Multi-Label Chest X-Ray Classification
Multi-label chest X-ray classification aims to automatically detect multiple diseases simultaneously from a single X-ray image, improving diagnostic efficiency and accuracy. Current research focuses on refining deep learning models, including convolutional neural networks (CNNs), vision transformers (ViTs), and hybrid architectures, often employing ensemble methods or incorporating techniques like knowledge distillation to enhance performance and address computational constraints. Addressing biases in model predictions across different patient subgroups and handling noisy or incomplete labels are also significant areas of investigation, with a strong emphasis on improving both accuracy and fairness. These advancements hold considerable promise for assisting radiologists and improving patient care by providing faster, more accurate, and potentially more equitable diagnoses.
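As context for the papers listed below, here is a minimal sketch of the common multi-label setup described above: a shared image backbone that produces one independent binary output per finding, trained with per-class binary cross-entropy. This is not taken from any of the listed papers; the backbone choice (torchvision's resnet18), the label count of 14, and all names here are illustrative assumptions.

```python
# Minimal multi-label chest X-ray classifier sketch (illustrative assumptions:
# resnet18 backbone, 14 findings as in ChestX-ray14-style label sets).
import torch
import torch.nn as nn
from torchvision import models

NUM_FINDINGS = 14  # assumed number of disease labels, for illustration only

class MultiLabelCXRModel(nn.Module):
    def __init__(self, num_findings: int = NUM_FINDINGS):
        super().__init__()
        self.backbone = models.resnet18(weights=None)  # a ViT or hybrid could be swapped in
        in_features = self.backbone.fc.in_features
        # Replace the single-label head with one logit per finding.
        self.backbone.fc = nn.Linear(in_features, num_findings)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logits; apply sigmoid at inference time

model = MultiLabelCXRModel()
criterion = nn.BCEWithLogitsLoss()  # independent binary decision per disease

# One dummy training step on random data, just to show the shapes involved.
images = torch.randn(4, 3, 224, 224)                       # batch of 4 RGB-converted X-rays
targets = torch.randint(0, 2, (4, NUM_FINDINGS)).float()   # multi-hot label vectors
logits = model(images)
loss = criterion(logits, targets)
loss.backward()

# At inference, each finding is thresholded independently.
probs = torch.sigmoid(logits)
preds = (probs > 0.5).int()
```

Using per-class sigmoid outputs with binary cross-entropy, rather than a softmax over mutually exclusive classes, is what makes the problem multi-label: each disease is detected independently, so a single image can carry several positive findings at once.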
Papers
SynthEnsemble: A Fusion of CNN, Vision Transformer, and Hybrid Models for Multi-Label Chest X-Ray Classification
S. M. Nabil Ashraf, Md. Adyelullahil Mamun, Hasnat Md. Abdullah, Md. Golam Rabiul Alam
LT-ViT: A Vision Transformer for multi-label Chest X-ray classification
Umar Marikkar, Sara Atito, Muhammad Awais, Adam Mahdi