Opaque Machine Learning
Opaque machine learning, characterized by the difficulty of understanding how models arrive at their predictions, raises challenges for trustworthiness, fairness, and accountability. Current research focuses on methods for explaining model decisions, including feature attribution aggregation, which combines multiple attribution methods to produce more consistent explanations, and counterfactual explanations, which identify input changes that would alter a decision, in some settings even without access to the training data. These efforts aim to enhance transparency and build trust in AI systems across domains from medicine to autonomous vehicles by providing insight into model behavior and mitigating potential biases or risks. The ultimate goal is to bridge the gap between powerful predictive models and human understanding, fostering responsible AI development and deployment.
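To make the first idea concrete, below is a minimal sketch of feature attribution aggregation. It is illustrative only and not taken from any of the papers listed here: a toy logistic model with made-up weights stands in for a black-box predictor, two simple attribution methods (gradient × input and single-feature occlusion, hypothetical stand-ins for methods such as Integrated Gradients or SHAP) are computed, rank-normalized, and averaged into a single explanation.

```python
import numpy as np

# Toy differentiable model: logistic regression with fixed, made-up weights.
# Stands in for any black-box scoring function.
W = np.array([0.8, -1.2, 0.3, 2.0])
B = -0.5

def model(x: np.ndarray) -> float:
    """Predicted probability for a single input vector x."""
    return 1.0 / (1.0 + np.exp(-(x @ W + B)))

def gradient_x_input(x: np.ndarray) -> np.ndarray:
    """Attribution method 1: gradient * input (analytic for this toy model)."""
    p = model(x)
    grad = p * (1.0 - p) * W  # d(model)/dx for logistic regression
    return grad * x

def occlusion(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Attribution method 2: drop in output when each feature is occluded."""
    base = model(x)
    scores = np.empty_like(x)
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline
        scores[i] = base - model(x_occ)
    return scores

def rank_normalize(a: np.ndarray) -> np.ndarray:
    """Map attribution magnitudes to [0, 1] by rank so methods are comparable."""
    order = np.argsort(np.abs(a))
    ranks = np.empty(len(a), dtype=float)
    ranks[order] = np.arange(len(a))
    return ranks / (len(a) - 1)

def aggregate(x: np.ndarray) -> np.ndarray:
    """Aggregated explanation: mean of rank-normalized attributions."""
    methods = [gradient_x_input(x), occlusion(x)]
    return np.mean([rank_normalize(a) for a in methods], axis=0)

x = np.array([1.0, 0.5, -2.0, 0.1])
print("gradient*input:", gradient_x_input(x))
print("occlusion:     ", occlusion(x))
print("aggregated:    ", aggregate(x))
```

Rank normalization is used here because different attribution methods produce scores on different scales; reducing each to a feature ranking before averaging lets the methods vote on which features matter most, which is one simple way aggregation can improve explanation consistency.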
Papers
Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith, Simone Stumpf
Label-Only Model Inversion Attacks via Knowledge Transfer
Ngoc-Bao Nguyen, Keshigeyan Chandrasegaran, Milad Abdollahzadeh, Ngai-Man Cheung