Explainable Detection
Explainable detection focuses on developing machine learning models that not only accurately identify events or patterns (e.g., AI-generated speech, traffic anomalies, medical conditions, online sexism) but also provide understandable justifications for their predictions. Current research emphasizes using model architectures like part-prototype neural networks and transformer-based models, along with explanation methods such as Shapley values and LIME, to achieve both high accuracy and interpretability. This field is crucial for building trust in AI systems and enabling responsible use in high-stakes applications like healthcare and online safety, where understanding the reasoning behind a model's decision is paramount.
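To make the attribution methods mentioned above concrete, here is a minimal sketch of exact Shapley-value computation for a small detector. All names (`shapley_values`, `score`, the weights) are illustrative, not from any cited paper; in practice one would use a library such as SHAP, which approximates these sums for larger feature sets.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values by enumerating all feature coalitions.

    value_fn(S) returns the model's output when only the features in
    set S are "present". Feasible only for small n_features, since the
    number of coalitions grows as 2**n_features.
    """
    phi = [0.0] * n_features
    for i in range(n_features):
        others = [j for j in range(n_features) if j != i]
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                s = len(S)
                # Shapley weight: s! * (n - s - 1)! / n!
                weight = (factorial(s) * factorial(n_features - s - 1)
                          / factorial(n_features))
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Hypothetical additive "detector score": for an additive model the
# Shapley value of feature i is exactly weights[i] * x[i].
weights = [0.5, -0.2, 0.8]
x = [1.0, 2.0, 1.5]

def score(S):
    return sum(weights[j] * x[j] for j in S)

phi = shapley_values(score, 3)
print(phi)  # each phi[i] == weights[i] * x[i] for this additive model
```

The additive model is chosen deliberately: its known closed-form attributions make it easy to check that the enumeration is correct before applying the same routine to an opaque detector.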