Concept Prediction
Concept prediction research aims to build models that not only make accurate predictions but also provide human-understandable explanations for their decisions, typically by routing predictions through intermediate "concept" representations. The canonical example is the Concept Bottleneck Model (CBM), which first predicts a set of human-interpretable concepts from the input and then predicts the final label from those concepts alone, so that practitioners can inspect the concepts and intervene on them at test time. Current work focuses on improving the faithfulness and interpretability of the predicted concepts, often leveraging techniques such as vision-language guidance, semi-supervised learning, and probabilistic modeling to address problems like information leakage (where the bottleneck encodes label information beyond the stated concepts) and reliance on spurious correlations. This research is significant because it strengthens the trustworthiness and transparency of machine learning models, leading to more reliable and explainable AI systems across applications.
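To make the bottleneck idea concrete, here is a minimal sketch of a CBM in PyTorch. It is an illustrative toy under assumed dimensions and synthetic data, not the implementation from any particular paper; the names `ConceptBottleneckModel`, `concept_encoder`, and `label_predictor` are hypothetical.

```python
# Minimal Concept Bottleneck Model sketch (illustrative assumptions throughout:
# module names, layer sizes, and the toy batch are not from any specific paper).
import torch
import torch.nn as nn

class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # Encoder maps raw inputs to concept logits (the "bottleneck").
        self.concept_encoder = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_concepts),
        )
        # The label predictor sees only the predicted concepts, so the final
        # decision can be explained (and intervened on) in concept terms.
        self.label_predictor = nn.Linear(n_concepts, n_classes)

    def forward(self, x):
        concept_logits = self.concept_encoder(x)
        # Sigmoid yields per-concept probabilities; passing only these through
        # the bottleneck limits how much extra information can "leak" past
        # the stated concepts into the label predictor.
        concepts = torch.sigmoid(concept_logits)
        label_logits = self.label_predictor(concepts)
        return concept_logits, label_logits

# Joint training: supervise both the concept predictions and the final label.
model = ConceptBottleneckModel(input_dim=32, n_concepts=8, n_classes=3)
x = torch.randn(16, 32)                   # toy input batch
c = torch.randint(0, 2, (16, 8)).float()  # binary concept annotations
y = torch.randint(0, 3, (16,))            # class labels

concept_logits, label_logits = model(x)
loss = (nn.functional.binary_cross_entropy_with_logits(concept_logits, c)
        + nn.functional.cross_entropy(label_logits, y))
loss.backward()
```

Because the label predictor depends only on the concepts, a practitioner can overwrite a mispredicted concept at test time and observe how the label changes, which is the core appeal of the bottleneck design and the property that leakage undermines.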