Much Progress
Research on "much progress" spans diverse fields, focusing on improving the reliability and interpretability of machine learning models, particularly in robotics, natural language processing, and computer vision. Current efforts concentrate on developing robust evaluation metrics, addressing data imbalances, and refining model architectures like transformers and generative adversarial networks (GANs) to enhance performance and mitigate biases. This work is crucial for advancing the trustworthiness and practical applicability of AI across various domains, from autonomous systems to biomedical applications.
Papers
Are we making progress in unlearning? Findings from the first NeurIPS unlearning competition
Eleni Triantafillou, Peter Kairouz, Fabian Pedregosa, Jamie Hayes, Meghdad Kurmanji, Kairan Zhao, Vincent Dumoulin, Julio Jacques Junior, Ioannis Mitliagkas, Jun Wan, Lisheng Sun Hosoya, Sergio Escalera, Gintare Karolina Dziugaite, Peter Triantafillou, Isabelle Guyon
Dispelling the Mirage of Progress in Offline MARL through Standardised Baselines and Evaluation
Claude Formanek, Callum Rhys Tilbury, Louise Beyers, Jonathan Shock, Arnu Pretorius