Training Data
Training data is central to machine learning model development, and current research focuses on improving data quality and efficiency and on mitigating bias. Active areas include generating synthetic data to address scarcity or privacy concerns; developing algorithms that optimize data selection and usage, such as self-paced learning and active learning; and countering issues like data contamination and imbalance through techniques such as data augmentation, selective parameter merging, and novel loss functions. The quality and characteristics of training data strongly affect model performance, generalization, and robustness across applications ranging from natural language processing and image recognition to scientific computing and medical diagnosis.
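Of the data-selection methods mentioned above, active learning is perhaps the simplest to illustrate. The sketch below shows a minimal pool-based loop with uncertainty sampling in Python using scikit-learn; the synthetic dataset, logistic-regression learner, seed-set size, and labeling budget are illustrative assumptions, not details taken from any of the papers listed here.

# Minimal sketch of pool-based active learning with uncertainty sampling.
# Assumptions: a synthetic binary-classification pool, a logistic-regression
# learner, and a fixed labeling budget; none of this comes from the papers below.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
rng = np.random.default_rng(0)

labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
pool = [i for i in range(len(X)) if i not in labeled]
budget = 100  # total labels we are willing to acquire

model = LogisticRegression(max_iter=1000)
while len(labeled) < budget:
    model.fit(X[labeled], y[labeled])
    # Query the pool example the model is least certain about
    # (predicted probability closest to 0.5).
    probs = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)
    pool.remove(query)

print(f"Labeled {len(labeled)} examples; "
      f"accuracy on remaining pool: {model.score(X[pool], y[pool]):.3f}")

Margin- or entropy-based query rules can be swapped in for the probability-closest-to-0.5 criterion without changing the structure of the loop.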
Papers
On Pretraining Data Diversity for Self-Supervised Learning
Hasan Abed Al Kader Hammoud, Tuhin Das, Fabio Pizzati, Philip Torr, Adel Bibi, Bernard Ghanem
Consistent Diffusion Meets Tweedie: Training Exact Ambient Diffusion Models with Noisy Data
Giannis Daras, Alexandros G. Dimakis, Constantinos Daskalakis
Optimal Transport for Fairness: Archival Data Repair using Small Research Data Sets
Abigail Langbridge, Anthony Quinn, Robert Shorten