High-Quality Distractors
High-quality distractors are crucial for robustly evaluating and improving machine learning models, particularly in vision and language tasks. Current research focuses on generating effective distractors for applications such as multiple-choice questions, visual object recognition, and reinforcement learning, often employing large language models (LLMs) and transformer-based architectures. The goal is to build more challenging and realistic benchmarks by generating diverse, plausible, and semantically meaningful distractors that expose model weaknesses and drive gains in robustness and generalization. These advances, in turn, improve the reliability and performance of AI systems across many domains.
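As a rough illustration of LLM-assisted distractor generation (in the spirit of span-masking approaches such as DisGeM, though not that paper's actual pipeline), the sketch below masks the answer span in a question stem and asks a masked language model for plausible single-token replacements. The model choice, filtering heuristic, and function names are illustrative assumptions, not any paper's method.

```python
# Minimal sketch of masked-LM distractor generation (illustrative only):
# mask the answer span in the stem and keep high-scoring completions
# that differ from the correct answer.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")


def generate_distractors(stem: str, answer: str, top_k: int = 10, n_distractors: int = 3):
    """Replace the answer span with the mask token and return plausible alternatives."""
    masked = stem.replace(answer, fill_mask.tokenizer.mask_token, 1)
    if fill_mask.tokenizer.mask_token not in masked:
        raise ValueError("Answer span not found in the question stem.")

    distractors = []
    for cand in fill_mask(masked, top_k=top_k):
        token = cand["token_str"].strip()
        # Discard the correct answer itself and duplicates.
        if token.lower() != answer.lower() and token not in distractors:
            distractors.append(token)
        if len(distractors) == n_distractors:
            break
    return distractors


if __name__ == "__main__":
    # Example cloze-style stem; single-token masking keeps the sketch simple,
    # whereas span-masking methods handle multi-token answers.
    print(generate_distractors("Paris is the capital of France.", answer="capital"))
```

In practice, candidate distractors would also be filtered for semantic plausibility and difficulty; this sketch only shows the masking-and-ranking core.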
Papers
DMC-VB: A Benchmark for Representation Learning for Control with Visual Distractors
Joseph Ortiz, Antoine Dedieu, Wolfgang Lehrach, Swaroop Guntupalli, Carter Wendelken, Ahmad Humayun, Guangyao Zhou, Sivaramakrishnan Swaminathan, Miguel Lázaro-Gredilla, Kevin Murphy
DisGeM: Distractor Generation for Multiple Choice Questions with Span Masking
Devrim Cavusoglu, Secil Sen, Ulas Sert
The Hard Positive Truth about Vision-Language Compositionality
Amita Kamath, Cheng-Yu Hsieh, Kai-Wei Chang, Ranjay Krishna
Exploring Automated Distractor Generation for Math Multiple-choice Questions via Large Language Models
Wanyong Feng, Jaewook Lee, Hunter McNichols, Alexander Scarlatos, Digory Smith, Simon Woodhead, Nancy Otero Ornelas, Andrew Lan
Understanding How CodeLLMs (Mis)Predict Types with Activation Steering
Francesca Lucchetti, Arjun Guha