Training Data
Training data is central to machine learning model development; current research focuses on improving data quality and efficiency and on mitigating bias. Active areas include generating synthetic data to address scarcity or privacy concerns; developing algorithms that optimize data selection and usage (e.g., self-paced learning, active learning); and mitigating issues such as data contamination and class imbalance through data augmentation, selective parameter merging, and novel loss functions. The quality and characteristics of training data strongly affect model performance, generalization, and robustness across applications ranging from natural language processing and image recognition to scientific computing and medical diagnosis.
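To make one of the imbalance-mitigation techniques mentioned above concrete, the following is a minimal sketch of random oversampling, in which minority-class examples are duplicated until class counts match. The function name and interface are illustrative, not drawn from any of the listed papers; production pipelines typically apply richer augmentation (image transforms, text paraphrasing) rather than plain duplication.

```python
import random
from collections import Counter

def oversample_minority(samples, labels, seed=0):
    """Balance a labeled dataset by randomly duplicating examples of
    every non-majority class until all classes match the majority count.

    Toy illustration of imbalance mitigation; `oversample_minority` is a
    hypothetical helper, not an API from the papers listed below.
    """
    rng = random.Random(seed)
    counts = Counter(labels)
    majority_label, majority_count = counts.most_common(1)[0]
    out_samples, out_labels = list(samples), list(labels)
    for label, count in counts.items():
        if label == majority_label:
            continue
        # Pool of existing examples for this minority class.
        pool = [s for s, l in zip(samples, labels) if l == label]
        # Duplicate random members until this class reaches the majority count.
        for _ in range(majority_count - count):
            out_samples.append(rng.choice(pool))
            out_labels.append(label)
    return out_samples, out_labels
```

For example, a dataset with four examples of class 0 and one of class 1 comes back with four of each, leaving eight examples in total.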
Papers
Combining Public Human Activity Recognition Datasets to Mitigate Labeled Data Scarcity
Riccardo Presotto, Sannara Ek, Gabriele Civitarese, François Portet, Philippe Lalanda, Claudio Bettini
Minibatch training of neural network ensembles via trajectory sampling
Jamie F. Mair, Luke Causer, Juan P. Garrahan
FFCV: Accelerating Training by Removing Data Bottlenecks
Guillaume Leclerc, Andrew Ilyas, Logan Engstrom, Sung Min Park, Hadi Salman, Aleksander Madry
On the Validation of Gibbs Algorithms: Training Datasets, Test Datasets and their Aggregation
Samir M. Perlaza, Iñaki Esnaola, Gaetan Bisson, H. Vincent Poor
The STOIC2021 COVID-19 AI challenge: applying reusable training methodologies to private data
Luuk H. Boulogne, Julian Lorenz, Daniel Kienzle, Robin Schon, Katja Ludwig, Rainer Lienhart, Simon Jegou, Guang Li, Cong Chen, Qi Wang, Derik Shi, Mayug Maniparambil, Dominik Muller, Silvan Mertes, Niklas Schroter, Fabio Hellmann, Miriam Elia, Ine Dirks, Matias Nicolas Bossa, Abel Diaz Berenguer, Tanmoy Mukherjee, Jef Vandemeulebroucke, Hichem Sahli, Nikos Deligiannis, Panagiotis Gonidakis, Ngoc Dung Huynh, Imran Razzak, Reda Bouadjenek, Mario Verdicchio, Pasquale Borrelli, Marco Aiello, James A. Meakin, Alexander Lemm, Christoph Russ, Razvan Ionasec, Nikos Paragios, Bram van Ginneken, Marie-Pierre Revel Dubois
Stabilizing GANs' Training with Brownian Motion Controller
Tianjiao Luo, Ziyu Zhu, Jianfei Chen, Jun Zhu
Noise-Robust Loss Functions: Enhancing Bounded Losses for Large-Scale Noisy Data Learning
Max Staats, Matthias Thamm, Bernd Rosenow
A framework for dynamically training and adapting deep reinforcement learning models to different, low-compute, and continuously changing radiology deployment environments
Guangyao Zheng, Shuhao Lai, Vladimir Braverman, Michael A. Jacobs, Vishwa S. Parekh
Training and Fine-Tuning of Large Language Models with Turkish Datasets (Büyük dil modellerinin Türkçe verisetleri ile eğitilmesi ve ince ayarlanması)
A. Taha Arslan
Evaluating the Effectiveness of Natural Language Inference for Hate Speech Detection in Languages with Limited Labeled Data
Janis Goldzycher, Moritz Preisig, Chantal Amrhein, Gerold Schneider