Model Merging
Model merging combines multiple pre-trained or fine-tuned neural networks, often large language models (LLMs) or other transformers, into a single, more capable model without retraining on the original datasets. Current research focuses on improving merging techniques: resolving parameter conflicts, handling diverse model architectures and scales efficiently, and exploring methods such as weight averaging, task arithmetic, and parameter competition balancing. The approach offers significant practical advantages, including reduced storage and computational costs, improved generalization, and the ability to integrate expertise from multiple sources, benefits that affect both the efficiency of model development and the performance of downstream applications.
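To make the two most common merging strategies named above concrete, here is a minimal sketch of weight averaging and task arithmetic. Model weights are represented as plain dicts mapping parameter names to lists of floats; real implementations operate on framework tensors (for example PyTorch state_dicts), and all model names and values below are purely illustrative.

```python
# Illustrative sketch: two simple model-merging strategies.
# Weights are dicts of parameter-name -> list of floats.

def weight_average(models):
    """Element-wise mean of several models' parameters."""
    n = len(models)
    return {
        k: [sum(m[k][i] for m in models) / n for i in range(len(models[0][k]))]
        for k in models[0]
    }

def task_arithmetic(base, finetuned, scale=1.0):
    """Add scaled task vectors (finetuned - base) to the base model."""
    merged = {}
    for k in base:
        # One task vector per fine-tuned model.
        vecs = [
            [f[k][i] - base[k][i] for i in range(len(base[k]))]
            for f in finetuned
        ]
        merged[k] = [
            base[k][i] + scale * sum(v[i] for v in vecs)
            for i in range(len(base[k]))
        ]
    return merged

# Toy example: two "fine-tuned" variants of a one-layer model.
base = {"w": [1.0, 2.0]}
ft_a = {"w": [1.5, 2.0]}  # task A shifted w[0]
ft_b = {"w": [1.0, 3.0]}  # task B shifted w[1]

avg = weight_average([ft_a, ft_b])                       # {"w": [1.25, 2.5]}
merged = task_arithmetic(base, [ft_a, ft_b], scale=1.0)  # {"w": [1.5, 3.0]}
```

Weight averaging treats all donors symmetrically, while task arithmetic keeps a designated base model and composes the parameter deltas contributed by each fine-tuned variant; the `scale` factor controls how strongly those deltas are applied, which is one knob for mitigating parameter conflicts.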
Papers
Foldable SuperNets: Scalable Merging of Transformers with Different Initializations and Tasks
Edan Kinderman, Itay Hubara, Haggai Maron, Daniel Soudry
Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models
Lucas Bandarkar, Benjamin Muller, Pritish Yuvraj, Rui Hou, Nayan Singhal, Hongjiang Lv, Bing Liu
SQL-GEN: Bridging the Dialect Gap for Text-to-SQL Via Synthetic Data And Model Merging
Mohammadreza Pourreza, Ruoxi Sun, Hailong Li, Lesly Miculicich, Tomas Pfister, Sercan O. Arik
Weight Scope Alignment: A Frustratingly Easy Method for Model Merging
Yichu Xu, Xin-Chun Li, Le Gan, De-Chuan Zhan
You Only Merge Once: Learning the Pareto Set of Preference-Aware Model Merging
Weiyu Chen, James Kwok