Concept Bottleneck Model
Concept Bottleneck Models (CBMs) aim to enhance the interpretability of deep learning models by incorporating human-understandable concepts into the prediction process, thereby bridging the gap between complex model outputs and human comprehension. Current research focuses on improving CBM accuracy, addressing security vulnerabilities like backdoor attacks, and developing methods for automated concept discovery and selection, often leveraging vision-language models like CLIP. This work is significant because it strives to create more trustworthy and reliable AI systems, particularly in high-stakes domains like medicine, where understanding model decisions is crucial for both performance and user acceptance.
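The core CBM idea described above (inputs are first mapped to human-named concepts, and the label is predicted only from those concepts, which also allows test-time human intervention) can be sketched minimally as follows. This is an illustrative toy, not any listed paper's method: the dimensions, the concept names, and the random linear maps standing in for trained networks are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 16 input features, 4 named concepts, 3 classes.
N_FEATURES, N_CONCEPTS, N_CLASSES = 16, 4, 3
CONCEPT_NAMES = ["has_wing", "has_beak", "is_striped", "is_aquatic"]  # illustrative only

# Concept predictor g: x -> concept logits (a random linear map stands in
# for a trained network in this sketch).
W_g = rng.normal(size=(N_FEATURES, N_CONCEPTS))
# Label predictor f: concepts -> class logits; it sees ONLY the concepts.
W_f = rng.normal(size=(N_CONCEPTS, N_CLASSES))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, interventions=None):
    """Forward pass through the concept bottleneck.

    interventions: optional {concept_name: value in [0, 1]} mapping that lets
    a human overwrite predicted concepts at test time, a hallmark of CBMs.
    """
    c = sigmoid(x @ W_g)  # predicted concept activations in [0, 1]
    if interventions:
        c = c.copy()
        for name, value in interventions.items():
            c[..., CONCEPT_NAMES.index(name)] = value
    logits = c @ W_f      # the label depends only on the concepts
    return c, logits

x = rng.normal(size=(N_FEATURES,))
concepts, logits = predict(x)
print(concepts.shape, logits.shape)  # (4,) (3,)

# A domain expert can correct one concept and observe the downstream effect:
fixed_concepts, fixed_logits = predict(x, interventions={"has_wing": 1.0})
```

Because every signal reaching the label predictor passes through the named concept layer, each prediction can be explained, and corrected, in terms of those concepts, which is what makes the architecture attractive in high-stakes settings such as medicine.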
Papers
Stochastic Concept Bottleneck Models
Moritz Vandenhirtz, Sonia Laguna, Ričards Marcinkevičs, Julia E. Vogt
Evidential Concept Embedding Models: Towards Reliable Concept Explanations for Skin Disease Diagnosis
Yibo Gao, Zheyao Gao, Xin Gao, Yuanye Liu, Bomin Wang, Xiahai Zhuang
Semi-supervised Concept Bottleneck Models
Lijie Hu, Tianhao Huang, Huanyi Xie, Chenyang Ren, Zhengyu Hu, Lu Yu, Di Wang
A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis
Yue Yang, Mona Gandhi, Yufei Wang, Yifan Wu, Michael S. Yao, Chris Callison-Burch, James C. Gee, Mark Yatskar
LARS-VSA: A Vector Symbolic Architecture For Learning with Abstract Rules
Mohamed Mejri, Chandramouli Amarnath, Abhijit Chatterjee