Transparent AI

Transparent AI (TAI) focuses on developing and deploying artificial intelligence systems whose decision-making processes humans can understand and interpret, thereby fostering trust and accountability. Current research emphasizes post-hoc explanation techniques such as Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), alongside modular, user-centered explainability tools and frameworks that also address fairness and privacy, often within specific application domains such as healthcare. The ultimate goal is to improve the reliability and societal acceptance of AI systems by exposing how they arrive at their outputs and by enabling effective model risk management.
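
As a concrete illustration of a model-agnostic, local explanation in the spirit of LIME, the sketch below perturbs a single instance, queries a black-box classifier, and fits a proximity-weighted linear surrogate whose coefficients serve as per-feature attributions. This is a minimal sketch, not the LIME library itself; the function and variable names (explain_locally, black_box, kernel_width) are hypothetical, and it assumes NumPy and scikit-learn are available.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque "black-box" model on a standard tabular dataset.
data = load_breast_cancer()
X, y = data.data, data.target
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def explain_locally(model, x, background, n_samples=2000, kernel_width=None, seed=0):
    """LIME-style local surrogate: fit a proximity-weighted linear model around x
    and return its coefficients as per-feature attributions (illustrative only)."""
    rng = np.random.default_rng(seed)
    scale = background.std(axis=0) + 1e-12
    if kernel_width is None:
        kernel_width = 0.75 * np.sqrt(x.shape[0])  # heuristic width, similar to LIME's default
    # Perturb the instance with Gaussian noise scaled to each feature's spread.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Query the black-box model for the probability of the positive class.
    p = model.predict_proba(Z)[:, 1]
    # Weight perturbed samples by their proximity to x (RBF kernel on scaled distance).
    d = np.linalg.norm((Z - x) / scale, axis=1)
    w = np.exp(-(d ** 2) / (kernel_width ** 2))
    # The weighted linear surrogate's coefficients approximate local feature importance.
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_

weights = explain_locally(black_box, X[0], background=X)
for i in np.argsort(np.abs(weights))[::-1][:5]:
    print(f"{data.feature_names[i]}: {weights[i]:+.4f}")
```

In practice, the lime and shap packages implement these ideas with more careful sampling and feature representations, and SHAP additionally grounds its attributions in Shapley values from cooperative game theory.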

Papers