Medical Image Datasets
Medical image datasets are crucial for training and evaluating machine learning models used in disease diagnosis and treatment planning. Current research focuses on challenges such as data scarcity and class imbalance, addressed through data augmentation (including GANs and image-to-image translation), test-time training, and federated learning, with the goal of improving model performance and generalization across diverse patient populations and imaging modalities. Convolutional neural networks (CNNs), transformers, and large multimodal models are the prominent architectures, often combined with techniques that enhance interpretability and mitigate bias. These advances promise to improve the accuracy, efficiency, and accessibility of medical image analysis, and ultimately patient care.
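As a concrete illustration of two of the techniques mentioned above, conventional data augmentation and rebalanced sampling for an imbalanced dataset, here is a minimal PyTorch sketch. The dataset path, folder layout, and transform parameters are hypothetical placeholders and are not drawn from any of the papers listed below.

```python
# Minimal sketch: simple augmentation plus inverse-frequency sampling for an
# imbalanced medical image classification dataset (hypothetical folder layout).
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

# Light augmentations that preserve diagnostic content (flips, small rotations,
# mild intensity jitter); GAN-based augmentation would extend or replace these.
train_transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

# Hypothetical layout: data/train/<class_name>/<image>.png, one folder per class.
train_set = datasets.ImageFolder("data/train", transform=train_transform)

# Inverse-frequency weights so minority classes are drawn as often as majority ones.
class_counts = torch.bincount(torch.tensor(train_set.targets))
sample_weights = (1.0 / class_counts.float())[train_set.targets]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(train_set), replacement=True)

train_loader = DataLoader(train_set, batch_size=32, sampler=sampler)
```

In practice the sampler and the augmentation pipeline would be tuned per modality; the point of the sketch is only to show how scarcity and imbalance are typically handled at the data-loading stage before heavier techniques such as federated learning or test-time training come into play.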
Papers
Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models
An Yan, Yu Wang, Yiwu Zhong, Zexue He, Petros Karypis, Zihan Wang, Chengyu Dong, Amilcare Gentili, Chun-Nan Hsu, Jingbo Shang, Julian McAuley
Comprehensive Multimodal Segmentation in Medical Imaging: Combining YOLOv8 with SAM and HQ-SAM Models
Sumit Pandey, Kuan-Fu Chen, Erik B. Dam
Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach
Sanaz Karimijafarbigloo, Reza Azad, Dorit Merhof
Hybrid Representation-Enhanced Sampling for Bayesian Active Learning in Musculoskeletal Segmentation of Lower Extremities
Ganping Li, Yoshito Otake, Mazen Soufi, Masashi Taniguchi, Masahide Yagi, Noriaki Ichihashi, Keisuke Uemura, Masaki Takao, Nobuhiko Sugano, Yoshinobu Sato