Transparent AI
Transparent AI (TAI) focuses on developing and deploying artificial intelligence systems whose decision-making processes humans can understand and interpret, thereby fostering trust and accountability. Current research emphasizes post-hoc explanation techniques such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), alongside modular, user-centered explainability tools and frameworks that also address fairness and privacy concerns, often within specific application domains such as healthcare. The ultimate goal is to improve the reliability and societal acceptance of AI systems by providing insight into their internal workings and enabling effective model risk management.
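Both LIME and SHAP produce post-hoc, per-prediction feature attributions for an otherwise opaque model. The minimal sketch below applies LIME to a tabular classifier; the dataset, model, and parameter choices are illustrative assumptions, not drawn from any particular paper in this collection.

```python
# Minimal sketch of a post-hoc LIME explanation (assumed setup: scikit-learn's
# breast-cancer dataset and a random forest; any tabular classifier exposing
# predict_proba would work the same way).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# The "black box" whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# LIME fits a sparse linear surrogate around one instance by perturbing its
# features and weighting the perturbed samples by proximity to that instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)

# Top local feature contributions (sign indicates direction of influence).
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.4f}")
```

SHAP follows an analogous workflow (for example, shap.TreeExplainer for tree ensembles), but attributes each prediction to features using Shapley values rather than a local surrogate model.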