Paper ID: 2408.17280

Flexible and Effective Mixing of Large Language Models into a Mixture of Domain Experts

Rhui Dih Lee, Laura Wynter, Raghu Kiran Ganti

We present a toolkit for creating a low-cost Mixture-of-Domain-Experts (MOE) from trained models. The toolkit can be used to create a mixture from full models or from adapters. We perform extensive tests and offer guidance on defining the architecture of the resulting MOE using the toolkit. A public repository is available.
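To make the idea of mixing trained models into a Mixture-of-Domain-Experts concrete, the following is a minimal sketch (not the paper's toolkit, and all class and parameter names here are hypothetical) of one common construction: the feed-forward blocks of separately trained domain experts are collected into a single MoE layer behind a newly initialized router.

```python
# Minimal illustrative sketch: assembling a Mixture-of-Domain-Experts layer
# from the feed-forward blocks of separately trained expert models.
# This is NOT the paper's toolkit; names such as DomainExpertMoE are hypothetical.
import torch
import torch.nn as nn

class DomainExpertMoE(nn.Module):
    """Route each token to the feed-forward block of one of several domain experts."""

    def __init__(self, expert_ffns: list, hidden_size: int, top_k: int = 1):
        super().__init__()
        self.experts = nn.ModuleList(expert_ffns)  # FFNs lifted from trained expert models
        self.router = nn.Linear(hidden_size, len(expert_ffns), bias=False)  # new gate
        self.top_k = top_k

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        gate_probs = self.router(hidden_states).softmax(dim=-1)          # (B, T, n_experts)
        weights, indices = torch.topk(gate_probs, self.top_k, dim=-1)    # top-k experts per token
        output = torch.zeros_like(hidden_states)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e                           # tokens routed to expert e
                if mask.any():
                    output[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(hidden_states[mask])
        return output

# Usage example with two toy "domain experts" that share the same FFN shape.
hidden = 16
make_ffn = lambda: nn.Sequential(nn.Linear(hidden, 4 * hidden), nn.GELU(), nn.Linear(4 * hidden, hidden))
moe = DomainExpertMoE([make_ffn(), make_ffn()], hidden_size=hidden, top_k=1)
x = torch.randn(2, 5, hidden)
print(moe(x).shape)  # torch.Size([2, 5, 16])
```

In practice the expert FFNs would be taken from already trained models (or adapters merged into them), and the main architectural choices are how many experts to route per token and whether the router is trained or set heuristically; the paper's toolkit is concerned with exactly these kinds of decisions.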

Submitted: Aug 30, 2024