Post Hoc
Post-hoc analysis in machine learning refers to techniques applied *after* a model is trained to improve its performance, interpretability, or robustness. Current research focuses on explaining model predictions with local attribution methods such as LIME and SHAP, improving out-of-distribution detection by combining existing detectors, and mitigating the data-privacy risks that explanations themselves can introduce. By increasing transparency and exposing potential biases or vulnerabilities, these advances are crucial for building trustworthy and reliable AI systems, particularly in high-stakes domains such as healthcare and autonomous systems.
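The core idea behind post-hoc local attribution methods like LIME can be sketched in a few lines: perturb the input around the point of interest, query the already-trained black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as per-feature attributions. The snippet below is a minimal illustration under assumed conditions (the `black_box` function is a hypothetical stand-in for any trained model's scoring output; it is not from any of the papers listed here):

```python
import numpy as np

def black_box(x):
    # Hypothetical "trained model": a nonlinear scoring function standing in
    # for a real classifier's probability output.
    logit = 3.0 * x[..., 0] - 2.0 * x[..., 1] + 0.5 * x[..., 0] * x[..., 1]
    return 1.0 / (1.0 + np.exp(-logit))

def local_attribution(model, x, n_samples=2000, sigma=0.3, seed=0):
    """LIME-style post-hoc attribution: fit a locally weighted linear
    surrogate around x; its coefficients approximate feature importance."""
    rng = np.random.default_rng(seed)
    # Sample perturbations in a Gaussian neighborhood of the input.
    Z = x + rng.normal(scale=sigma, size=(n_samples, x.shape[0]))
    y = model(Z)
    # Proximity kernel: perturbations closer to x receive more weight.
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma**2))
    # Weighted least squares on centered features, with an intercept column.
    A = np.hstack([Z - x, np.ones((n_samples, 1))])
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return coef[:-1]  # per-feature attributions (intercept dropped)

attr = local_attribution(black_box, np.array([0.5, -0.5]))
print(attr)
```

With this toy model, the surrogate recovers the local behavior of the score: a positive attribution for the first feature and a negative one for the second, matching the sign of the model's local gradient at that point. Production implementations (the `lime` and `shap` packages) add feature binarization, regularized surrogates, and principled sampling, but the post-hoc structure is the same: the trained model is only ever queried, never modified.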
Papers
ChaosMining: A Benchmark to Evaluate Post-Hoc Local Attribution Methods in Low SNR Environments
Ge Shi, Ziwen Kan, Jason Smucny, Ian Davidson
Not Eliminate but Aggregate: Post-Hoc Control over Mixture-of-Experts to Address Shortcut Shifts in Natural Language Understanding
Ukyo Honda, Tatsushi Oka, Peinan Zhang, Masato Mita