Parallel Training
Parallel training accelerates the computationally intensive process of training large machine learning models by distributing the workload across multiple processors or devices. Current research focuses on optimizing this process for a range of architectures, including large language models (LLMs) and convolutional neural networks (CNNs), through techniques such as data and model parallelism, together with strategies to reduce communication overhead and tolerate hardware failures. Efficient parallel training is crucial for advancing AI systems: it enables the development and deployment of larger, more powerful models for diverse applications while reducing training time and cost.
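To make the data-parallelism idea concrete, below is a minimal sketch using PyTorch's DistributedDataParallel: each process holds a full model replica on its own GPU, trains on a disjoint shard of the data, and gradients are all-reduced across ranks during the backward pass. The model, synthetic dataset, and hyperparameters are illustrative placeholders, not drawn from any of the papers listed here.

```python
# Minimal data-parallel training sketch with PyTorch DistributedDataParallel (DDP).
# Assumes a single node with one process per GPU; all names below are illustrative.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset, DistributedSampler


def train(rank: int, world_size: int):
    # Each process owns one GPU and one shard of the data (data parallelism).
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torch.nn.Linear(128, 10).cuda(rank)
    ddp_model = DDP(model, device_ids=[rank])  # gradients all-reduced across ranks
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=1e-2)
    loss_fn = torch.nn.CrossEntropyLoss()

    # Synthetic dataset; DistributedSampler gives each rank a disjoint shard.
    data = TensorDataset(torch.randn(1024, 128), torch.randint(0, 10, (1024,)))
    sampler = DistributedSampler(data, num_replicas=world_size, rank=rank)
    loader = DataLoader(data, batch_size=32, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for x, y in loader:
            x, y = x.cuda(rank), y.cuda(rank)
            optimizer.zero_grad()
            loss = loss_fn(ddp_model(x), y)
            loss.backward()  # DDP overlaps gradient communication with backprop
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```

Overlapping the gradient all-reduce with the backward pass, as DDP does automatically, is one of the simplest ways to hide the communication cost that the research below seeks to reduce further; model parallelism instead splits a single model's layers or tensors across devices when it no longer fits on one.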
Papers
Merge to Learn: Efficiently Adding Skills to Language Models with Model Merging
Jacob Morrison, Noah A. Smith, Hannaneh Hajishirzi, Pang Wei Koh, Jesse Dodge, Pradeep Dasigi
Rethinking Token Reduction for State Space Models
Zheng Zhan, Yushu Wu, Zhenglun Kong, Changdi Yang, Yifan Gong, Xuan Shen, Xue Lin, Pu Zhao, Yanzhi Wang
Online Frequency Scheduling by Learning Parallel Actions
Anastasios Giovanidis, Mathieu Leconte, Sabrine Aroua, Tor Kvernvik, David Sandberg
Boosting Large-scale Parallel Training Efficiency with C4: A Communication-Driven Approach
Jianbo Dong, Bin Luo, Jun Zhang, Pengcheng Zhang, Fei Feng, Yikai Zhu, Ang Liu, Zian Chen, Yi Shi, Hairong Jiao, Gang Lu, Yu Guan, Ennan Zhai, Wencong Xiao, Hanyu Zhao, Man Yuan, Siran Yang, Xiang Li, Jiamang Wang, Rui Men, Jianwei Zhang, Huang Zhong, Dennis Cai, Yuan Xie, Binzhang Fu