Mixture of Experts
Mixture-of-Experts (MoE) models aim to improve the efficiency and scalability of large language models and other architectures by using multiple specialized "expert" networks, each handling a subset of the input data, so that only a fraction of the model's parameters is active per input. Current research focuses on improving routing algorithms that efficiently assign inputs to experts, developing heterogeneous MoE architectures with experts of varying sizes and capabilities, and optimizing training methods to address challenges such as load imbalance and gradient conflicts. This approach holds significant promise for creating larger, more powerful models at reduced computational cost, with impact across fields from natural language processing and computer vision to robotics and scientific discovery.
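The routing idea above can be made concrete with a minimal sketch: a learned gate scores each token against every expert, and only the top-k experts actually process the token, with their outputs combined by the (renormalized) gate weights. This is an illustrative toy in NumPy, not any specific paper's method; the names `gate_w`, `experts`, and `top_k` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax over the expert dimension.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(tokens, gate_w, experts, top_k=2):
    """Route each token to its top-k experts and mix their outputs.

    Only top_k expert networks run per token, which is the source
    of MoE's compute savings relative to a dense layer of equal
    parameter count.
    """
    logits = tokens @ gate_w                        # (n_tokens, n_experts)
    probs = softmax(logits)
    topk = np.argsort(probs, axis=-1)[:, -top_k:]   # indices of the k best experts
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):
        weights = probs[t, topk[t]]
        weights = weights / weights.sum()           # renormalize over chosen experts
        for w, e in zip(weights, topk[t]):
            out[t] += w * experts[e](tokens[t])
    return out

# Toy setup: 4 experts, each a small linear map on 8-dim tokens.
d, n_experts = 8, 4
expert_ws = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in expert_ws]
gate_w = rng.normal(size=(d, n_experts))
tokens = rng.normal(size=(5, d))

y = moe_layer(tokens, gate_w, experts, top_k=2)
print(y.shape)  # (5, 8): one mixed output per token
```

In a trained model the gate and experts are learned jointly, and an auxiliary load-balancing loss typically discourages the gate from collapsing onto a few experts, which is the load-imbalance problem mentioned above.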
Papers
Interpretable Cascading Mixture-of-Experts for Urban Traffic Congestion Prediction
Wenzhao Jiang, Jindong Han, Hao Liu, Tao Tao, Naiqiang Tan, Hui Xiong
MoME: Mixture of Multimodal Experts for Cancer Survival Prediction
Conghao Xiong, Hao Chen, Hao Zheng, Dong Wei, Yefeng Zheng, Joseph J. Y. Sung, Irwin King
A Mixture-of-Experts Approach to Few-Shot Task Transfer in Open-Ended Text Worlds
Christopher Z. Cui, Xiangyu Peng, Mark O. Riedl
CuMo: Scaling Multimodal LLM with Co-Upcycled Mixture-of-Experts
Jiachen Li, Xinyao Wang, Sijie Zhu, Chia-Wen Kuo, Lu Xu, Fan Chen, Jitesh Jain, Humphrey Shi, Longyin Wen
EWMoE: An effective model for global weather forecasting with mixture-of-experts
Lihao Gan, Xin Man, Chenghong Zhang, Jie Shao