Common Pitfalls
Research into common pitfalls across machine learning applications reveals recurring challenges in model development, evaluation, and deployment. Current efforts focus on improving model robustness against adversarial attacks, addressing biases and limitations in data and evaluation metrics (e.g., in multilingual ASR, medical imaging, and federated learning), and developing more reliable uncertainty quantification methods. Addressing these pitfalls is crucial for the trustworthiness and generalizability of machine learning models across diverse domains, and ultimately for building applications that hold up in practice.
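One well-known evaluation pitfall in this area, and the kind of issue the SoK paper below systematizes, is comparing black-box attacks without fixing a query budget: an attack that eventually succeeds on every input can look strictly stronger than one that succeeds quickly but less often, even though the latter may be more dangerous under realistic constraints. The sketch below illustrates this with synthetic per-example query counts; the attack distributions and the success_rate helper are assumptions for illustration, not code or data from the paper.

```python
import numpy as np

# Hypothetical per-example results from two black-box attacks (synthetic data):
# queries_to_success[i] = number of queries the attack needed to find an
# adversarial example for input i (np.inf if it never succeeded).
rng = np.random.default_rng(0)
attack_a = rng.geometric(p=0.002, size=1000).astype(float)   # slow, but always succeeds eventually
attack_b = rng.geometric(p=0.01, size=1000).astype(float)    # fast when it works...
attack_b[rng.random(1000) < 0.4] = np.inf                    # ...but fails on ~40% of inputs

def success_rate(queries_to_success: np.ndarray, budget: float) -> float:
    """Fraction of inputs attacked successfully within `budget` queries."""
    return float(np.mean(queries_to_success <= budget))

# Pitfall: with an unbounded budget, attack A looks strictly better;
# at a realistic budget the ranking flips.
for budget in (100, 1000, np.inf):
    print(f"budget={budget}: A={success_rate(attack_a, budget):.2f}, "
          f"B={success_rate(attack_b, budget):.2f}")
```

Reporting success rate as a function of query budget, rather than a single unbounded number, avoids this ranking artifact; analogous budget- or constraint-aware reporting applies to the other evaluation settings mentioned above.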
Papers
SoK: Pitfalls in Evaluating Black-Box Attacks
Fnu Suya, Anshuman Suri, Tingwei Zhang, Jingtao Hong, Yuan Tian, David Evans
Understanding and Addressing the Pitfalls of Bisimulation-based Representations in Offline Reinforcement Learning
Hongyu Zang, Xin Li, Leiji Zhang, Yang Liu, Baigui Sun, Riashat Islam, Remi Tachet des Combes, Romain Laroche