Training Data
Training data is crucial for machine learning model development, and current research focuses on improving data quality and efficiency and on mitigating bias. Active areas include generating synthetic data to address scarcity or privacy concerns, developing algorithms that optimize data selection and usage (e.g., self-paced learning, active learning), and addressing issues such as data contamination and imbalance through techniques like data augmentation, selective parameter merging, and novel loss functions. The quality and characteristics of training data strongly influence model performance, generalization, and robustness across applications ranging from natural language processing and image recognition to scientific computing and medical diagnosis.
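To make one of the techniques above concrete, the sketch below illustrates uncertainty-based active learning: given a model's predicted class probabilities over an unlabeled pool, it selects the samples the model is least certain about for annotation. This is a minimal, illustrative example only; the function names and the synthetic probabilities are hypothetical and are not drawn from any of the papers listed below.

```python
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Per-sample predictive entropy; higher means the model is less certain."""
    eps = 1e-12  # guard against log(0)
    return -(probs * np.log(probs + eps)).sum(axis=1)

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` most uncertain unlabeled samples."""
    scores = entropy(probs)
    # argsort is ascending, so take the last `budget` indices and reverse
    # to list the most uncertain samples first.
    return np.argsort(scores)[-budget:][::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical model outputs: class probabilities for 1000 unlabeled
    # samples over 5 classes (softmax of random logits, for illustration).
    logits = rng.normal(size=(1000, 5))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    picked = select_for_labeling(probs, budget=10)
    print("Samples to label next:", picked)
```

In a full active-learning loop, the selected indices would be sent to annotators, the newly labeled samples added to the training set, and the model retrained, repeating until the labeling budget is exhausted.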
Papers
API-BLEND: A Comprehensive Corpora for Training and Benchmarking API LLMs
Kinjal Basu, Ibrahim Abdelaziz, Subhajit Chaudhury, Soham Dan, Maxwell Crouse, Asim Munawar, Sadhana Kumaravel, Vinod Muthusamy, Pavan Kapanipathi, Luis A. Lastras
Machine Unlearning by Suppressing Sample Contribution
Xinwen Cheng, Zhehao Huang, Xiaolin Huang
Sampling-based Distributed Training with Message Passing Neural Network
Priyesh Kakka, Sheel Nidhan, Rishikesh Ranade, Jonathan F. MacArt
Balanced Data Sampling for Language Model Training with Clustering
Yunfan Shao, Linyang Li, Zhaoye Fei, Hang Yan, Dahua Lin, Xipeng Qiu
Take the Bull by the Horns: Hard Sample-Reweighted Continual Training Improves LLM Generalization
Xuxi Chen, Zhendong Wang, Daouda Sow, Junjie Yang, Tianlong Chen, Yingbin Liang, Mingyuan Zhou, Zhangyang Wang
Integrating kNN with Foundation Models for Adaptable and Privacy-Aware Image Classification
Sebastian Doerrich, Tobias Archut, Francesco Di Salvo, Christian Ledig
Amplifying Training Data Exposure through Fine-Tuning with Pseudo-Labeled Memberships
Myung Gyo Oh, Hong Eun Ahn, Leo Hyun Park, Taekyoung Kwon
The effect of Leaky ReLUs on the training and generalization of overparameterized networks
Yinglong Guo, Shaohan Li, Gilad Lerman