Concept Bottleneck Model
Concept Bottleneck Models (CBMs) aim to make deep learning models more interpretable by inserting human-understandable concepts into the prediction pipeline: the model first predicts a set of named concepts (e.g., wing color or beak shape in a bird classifier) and then derives its final label solely from those concept predictions. Current research focuses on improving CBM accuracy, addressing security vulnerabilities such as backdoor attacks, and developing methods for automated concept discovery and selection, often by leveraging vision-language models like CLIP. This work matters because it aims to produce more trustworthy and reliable AI systems, particularly in high-stakes domains such as medicine, where understanding model decisions is crucial for both performance and user acceptance.
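To make the bottleneck concrete, here is a minimal PyTorch sketch of the standard two-stage CBM design. It is illustrative rather than a reproduction of any specific paper's code: the class name ConceptBottleneckModel, the layer sizes, and the loss weight `lam` are assumptions, and it presumes dense feature inputs with binary concept annotations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckModel(nn.Module):
    """Two-stage CBM: x -> concepts -> label.

    The label head sees only the concept predictions, so every
    decision can be traced through the human-named bottleneck.
    """

    def __init__(self, input_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # g: input features -> concept logits (the interpretable bottleneck)
        self.concept_net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_concepts),
        )
        # f: concepts -> task label; deliberately has no access to x
        self.label_net = nn.Linear(num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        # Per-concept probabilities a human can inspect or override
        concepts = torch.sigmoid(self.concept_net(x))
        label_logits = self.label_net(concepts)
        return concepts, label_logits


def joint_cbm_loss(concepts, label_logits, concept_targets, label_targets, lam=0.5):
    """Joint training objective: task loss plus a weighted concept loss.

    `lam` trades off label accuracy against concept fidelity; 0.5 is
    an arbitrary placeholder, not a recommended value.
    """
    task_loss = F.cross_entropy(label_logits, label_targets)
    concept_loss = F.binary_cross_entropy(concepts, concept_targets.float())
    return task_loss + lam * concept_loss
```

A key property of this design is that the label head sees only the concept probabilities, so a practitioner can intervene at test time by overwriting entries of `concepts` with corrected values before the final prediction. Label-free variants of this idea sidestep concept annotations by scoring concepts with CLIP image-text similarity in place of a trained concept predictor.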