Concept Bottleneck Model
Concept Bottleneck Models (CBMs) aim to enhance the interpretability of deep learning models by incorporating human-understandable concepts into the prediction process, thereby bridging the gap between complex model outputs and human comprehension. Current research focuses on improving CBM accuracy, addressing security vulnerabilities like backdoor attacks, and developing methods for automated concept discovery and selection, often leveraging vision-language models like CLIP. This work is significant because it strives to create more trustworthy and reliable AI systems, particularly in high-stakes domains like medicine, where understanding model decisions is crucial for both performance and user acceptance.
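As a rough illustration of the idea, a CBM routes the input through an intermediate layer of human-understandable concept predictions, and the final label is computed only from those concepts. The sketch below is a minimal, hedged example in PyTorch; the class name, layer sizes, and loss weighting are illustrative assumptions, not the implementation from any particular paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptBottleneckModel(nn.Module):
    """Minimal sketch: input -> concept predictions -> label prediction."""

    def __init__(self, input_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Backbone maps raw features to logits for human-interpretable concepts.
        self.concept_predictor = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_concepts),
        )
        # The label head sees ONLY the concepts -- this is the "bottleneck".
        self.label_predictor = nn.Linear(num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_predictor(x)
        # Sigmoid turns each concept logit into a probability that the
        # corresponding human-understandable attribute is present.
        concepts = torch.sigmoid(concept_logits)
        label_logits = self.label_predictor(concepts)
        return concept_logits, label_logits


def joint_loss(concept_logits, label_logits, concept_targets, label_targets,
               concept_weight: float = 1.0):
    """Joint training objective: supervise both the concepts and the label."""
    concept_loss = F.binary_cross_entropy_with_logits(concept_logits,
                                                      concept_targets)
    label_loss = F.cross_entropy(label_logits, label_targets)
    return label_loss + concept_weight * concept_loss
```

Because the label head depends only on the predicted concepts, a user can inspect which concepts drove a decision and, at test time, intervene by correcting a mispredicted concept and observing how the final prediction changes.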