Paper ID: 2312.02923
MoSA: Mixture of Sparse Adapters for Visual Efficient Tuning
Qizhe Zhang, Bocheng Zou, Ruichuan An, Jiaming Liu, Shanghang Zhang
With the rapid growth in the scale of pre-trained foundation models, parameter-efficient fine-tuning techniques have gained significant attention, among which Adapter Tuning is the most widely used. Despite its efficiency, it still underperforms full fine-tuning, and improving its performance typically comes at the cost of additional parameters. Recent efforts have focused either on training multiple adapter experts to increase model capacity or on pruning adapters to achieve parameter efficiency. However, both approaches introduce more parameters than the original adapter and hence are not computationally efficient. Motivated by this, we propose Mixture of Sparse Adapters, or MoSA, a novel Adapter Tuning method that fully unleashes the potential of each parameter in the adapter. We first split the standard adapter into multiple non-overlapping modules, then stochastically activate them for sparse training, and finally merge them into a complete adapter after tuning. In this way, MoSA achieves significantly better performance than standard adapters without any additional computational or storage overhead. Furthermore, we propose a hierarchical sparse strategy to better leverage limited training data. Extensive experiments on a series of 27 visual tasks demonstrate that MoSA consistently outperforms other Adapter Tuning methods as well as other baselines by a large margin. Moreover, MoSA brings consistent improvements across various model scales, architectures, and different PEFT methods. Code will be released.
Submitted: Dec 5, 2023
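To make the split / sparse-activation / merge idea in the abstract concrete, here is a minimal PyTorch sketch. It is an illustrative assumption of how such an adapter could be written, not the authors' released implementation: the class name `SparseAdapter`, the random disjoint partition of weights, and the choice of activating one expert per training step are all assumptions for illustration; the authors' hierarchical sparse strategy and routing details are omitted.

```python
# Minimal sketch (assumptions noted above): a bottleneck adapter whose weights are
# partitioned into non-overlapping sparse "experts", one of which is stochastically
# activated during training; at inference the disjoint masks merge back into the
# full dense adapter.
import torch
import torch.nn as nn


class SparseAdapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64, num_experts: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        self.num_experts = num_experts
        # Assign every weight entry to exactly one expert -> non-overlapping masks.
        self.register_buffer("down_mask", self._partition(self.down.weight))
        self.register_buffer("up_mask", self._partition(self.up.weight))

    def _partition(self, weight: torch.Tensor) -> torch.Tensor:
        assignment = torch.randint(0, self.num_experts, weight.shape)
        return torch.stack(
            [(assignment == k).float() for k in range(self.num_experts)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Stochastically activate a single sparse expert for this step;
            # masking zeroes the gradients of all other entries.
            k = torch.randint(0, self.num_experts, (1,)).item()
            w_down = self.down.weight * self.down_mask[k]
            w_up = self.up.weight * self.up_mask[k]
        else:
            # Merge: the disjoint masks sum to all-ones, so the full weight
            # matrix is the merged adapter.
            w_down, w_up = self.down.weight, self.up.weight
        h = self.act(nn.functional.linear(x, w_down, self.down.bias))
        return x + nn.functional.linear(h, w_up, self.up.bias)


if __name__ == "__main__":
    adapter = SparseAdapter(dim=768)
    x = torch.randn(2, 197, 768)   # e.g. ViT token sequence
    adapter.train()
    print(adapter(x).shape)        # torch.Size([2, 197, 768])
    adapter.eval()
    print(adapter(x).shape)
```

Because the masks are disjoint, the sparsely trained experts occupy exactly the parameter budget of one standard adapter, which matches the abstract's claim of no additional computational or storage overhead.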