Concept Identification
Concept identification focuses on how machines can represent, learn, and reason with abstract concepts, mirroring human cognitive abilities. Current research emphasizes methods for improving concept representation in machine learning models such as diffusion models, large language models, and graph neural networks, often incorporating techniques like concept bottleneck models and hierarchical multi-armed bandits to improve both performance and interpretability. This work addresses central challenges in explainable AI, increasing the trustworthiness of AI systems in applications ranging from malware detection to medical image analysis and autonomous systems, with the ultimate goal of building AI that is robust, reliable, and understandable to humans.
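As a rough illustration of one technique named above, the sketch below shows the basic structure of a concept bottleneck model: the input is first mapped to human-interpretable concept scores, and the final label is predicted from those scores alone, so every prediction can be inspected at the concept level. The class name, layer sizes, and dimensions are illustrative assumptions, not drawn from any of the listed papers.

```python
# A minimal sketch of a concept bottleneck model (illustrative only;
# all names and dimensions are assumptions, not from the papers below).
import torch
import torch.nn as nn


class ConceptBottleneckModel(nn.Module):
    def __init__(self, input_dim: int, num_concepts: int, num_classes: int):
        super().__init__()
        # Maps raw input features to human-interpretable concept scores.
        self.concept_predictor = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_concepts),
        )
        # The label is predicted only from the concept scores, which is
        # what makes the bottleneck interpretable.
        self.label_predictor = nn.Linear(num_concepts, num_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_predictor(x)
        concepts = torch.sigmoid(concept_logits)  # concept activations in [0, 1]
        labels = self.label_predictor(concepts)
        return concepts, labels


if __name__ == "__main__":
    model = ConceptBottleneckModel(input_dim=512, num_concepts=10, num_classes=3)
    x = torch.randn(4, 512)              # batch of 4 feature vectors
    concepts, labels = model(x)
    print(concepts.shape, labels.shape)  # torch.Size([4, 10]) torch.Size([4, 3])
```

In practice such models are trained with a concept-supervision loss on the intermediate scores in addition to the usual label loss, which is what lets a human check or intervene on the concepts a prediction relies on.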
Papers
DimonGen: Diversified Generative Commonsense Reasoning for Explaining Concept Relationships
Chenzhengyi Liu, Jie Huang, Kerui Zhu, Kevin Chen-Chuan Chang
Does CLIP Bind Concepts? Probing Compositionality in Large Image Models
Martha Lewis, Nihal V. Nayak, Peilin Yu, Qinan Yu, Jack Merullo, Stephen H. Bach, Ellie Pavlick