Explainable Systems
Explainable systems aim to make the decision-making processes of artificial intelligence models transparent and understandable, fostering trust and facilitating effective human-AI collaboration. Current research emphasizes developing methods that provide clear, actionable explanations, often using techniques like attribution-based methods and competitive learning algorithms, alongside interactive interfaces that allow users to refine and understand model outputs. This focus on explainability is crucial for deploying AI in high-stakes domains like healthcare and robotics, where understanding the reasoning behind AI decisions is paramount for safety, reliability, and user acceptance.
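One of the techniques mentioned above, attribution-based explanation, assigns each input feature a score reflecting its contribution to the model's output. A minimal sketch of one such method, gradient-times-input with finite-difference gradients, is shown below; the model, its weights, and the function names are purely illustrative and not drawn from any particular paper.

```python
import math

def model(x):
    # Hypothetical scoring model: fixed linear weights through tanh.
    # The weights are illustrative, not taken from any cited work.
    w = [0.8, -0.5, 0.3]
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)))

def gradient_x_input(f, x, eps=1e-5):
    """Gradient-times-input attribution; gradients are approximated
    by central finite differences, so any scalar-valued model works."""
    attributions = []
    for i in range(len(x)):
        hi = list(x); hi[i] += eps
        lo = list(x); lo[i] -= eps
        grad_i = (f(hi) - f(lo)) / (2 * eps)  # df/dx_i
        attributions.append(grad_i * x[i])
    return attributions

x = [1.0, 2.0, 0.5]
attr = gradient_x_input(model, x)
# Features with larger |attribution| contributed more to the score;
# the sign indicates whether the feature pushed the output up or down.
```

In practice, libraries such as Captum or SHAP compute these scores via automatic differentiation or sampling rather than finite differences, but the interpretation of the resulting per-feature scores is the same.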