Model Training
Model training research focuses on developing efficient, effective methods for building accurate and robust machine learning models. Current work emphasizes training efficiency through techniques such as low-precision computation, optimized memory management (e.g., activation recomputation and memory-aware scheduling), and efficient communication strategies for distributed and federated learning. These advances are crucial for scaling training to larger datasets and more complex architectures, with impact across fields from computer vision and natural language processing to healthcare and industrial applications.
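To make two of these techniques concrete, the sketch below combines low-precision (mixed-precision) computation with activation recomputation (gradient checkpointing) in PyTorch. It is a minimal illustration, not drawn from any of the listed papers; the model, dimensions, and hyperparameters are arbitrary assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    """Illustrative feed-forward block whose activations we recompute."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        return self.net(x)

class Model(nn.Module):
    def __init__(self, dim: int = 256, depth: int = 4):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(depth))
        self.head = nn.Linear(dim, 10)

    def forward(self, x):
        for block in self.blocks:
            # Activation recomputation: discard each block's activations in
            # the forward pass and recompute them during backward, trading
            # extra compute for lower peak memory.
            x = checkpoint(block, x, use_reentrant=False)
        return self.head(x)

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
model = Model().to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
# Loss scaling keeps small float16 gradients from underflowing.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

# Random stand-in data (batch of 32, feature dim 256, 10 classes).
x = torch.randn(32, 256, device=device)
y = torch.randint(0, 10, (32,), device=device)

for _ in range(3):  # a few illustrative steps
    opt.zero_grad(set_to_none=True)
    # Low-precision computation: matmuls run in float16 under autocast
    # (disabled automatically on CPU-only machines).
    with torch.autocast(device_type=device.type, dtype=torch.float16,
                        enabled=use_cuda):
        loss = nn.functional.cross_entropy(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(opt)
    scaler.update()
```

The two techniques compose cleanly: autocast controls the precision of each recomputed forward pass as well, so the checkpointed blocks see the same numerics in backward as in forward.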
Papers
Demystifying Workload Imbalances in Large Transformer Model Training over Variable-length Sequences
Haoyang Li, Fangcheng Fu, Sheng Lin, Hao Ge, Xuanyu Wang, Jiawen Niu, Jie Jiang, Bin Cui
Tazza: Shuffling Neural Network Parameters for Secure and Private Federated Learning
Kichang Lee, Jaeho Jin, JaeYeon Park, JeongGil Ko
Privacy Drift: Evolving Privacy Concerns in Incremental Learning
Sayyed Farid Ahamed, Soumya Banerjee, Sandip Roy, Aayush Kapoor, Marc Vucovich, Kevin Choi, Abdul Rahman, Edward Bowen, Sachin Shetty
Adaptive Optimization for Enhanced Efficiency in Large-Scale Language Model Training
Jiajing Chen, Bingying Liu, Xiaoxuan Liao, Jia Gao, Hongye Zheng, Yue Li