Mixture Component
Mixture component models combine multiple specialized sub-models (experts) so that different parts of a complex task are handled by the expert best suited to them. Current research focuses on developing novel architectures, such as mixtures of experts (MoE), and applying them to diverse fields including natural language processing, computer vision, and signal processing, often together with parameter-efficient techniques such as low-rank adaptation (LoRA). These advances matter because they allow larger, more capable models to be trained and served at manageable computational cost while improving generalization across heterogeneous datasets.
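As a concrete reference point for the MoE architectures mentioned above, the sketch below shows a mixture-of-experts layer with top-k gating, assuming PyTorch. The class name `MoELayer` and all hyperparameters are illustrative assumptions, not the implementation used in any of the papers listed here, and every expert is evaluated densely for readability where practical systems route tokens to experts sparsely.

```python
# Minimal, illustrative sketch of a mixture-of-experts (MoE) layer with top-k gating.
# Names and hyperparameters are hypothetical, not taken from the papers listed below.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    """A gate scores the experts per token; the top-k experts' outputs are combined."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = F.softmax(self.gate(x), dim=-1)                # (B, S, num_experts)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)   # (B, S, k)
        top_scores = top_scores / top_scores.sum(dim=-1, keepdim=True)  # renormalize

        out = torch.zeros_like(x)
        # For clarity every expert is evaluated on all tokens; practical MoE layers
        # dispatch only the tokens routed to each expert.
        expert_outs = [expert(x) for expert in self.experts]    # each (B, S, d_model)
        for slot in range(self.top_k):
            idx = top_idx[..., slot]                            # (B, S) chosen expert id
            weight = top_scores[..., slot].unsqueeze(-1)        # (B, S, 1) gate weight
            for e, expert_out in enumerate(expert_outs):
                mask = (idx == e).unsqueeze(-1).to(x.dtype)     # 1 where expert e is chosen
                out = out + mask * weight * expert_out
        return out


if __name__ == "__main__":
    layer = MoELayer(d_model=64, d_hidden=128)
    tokens = torch.randn(2, 10, 64)
    print(layer(tokens).shape)  # torch.Size([2, 10, 64])
```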
Papers
MixGCN: Scalable GCN Training by Mixture of Parallelism and Mixture of Accelerators
Cheng Wan, Runkao Tao, Zheng Du, Yang Katie Zhao, Yingyan Celine Lin
MoEE: Mixture of Emotion Experts for Audio-Driven Portrait Animation
Huaize Liu, Wenzhang Sun, Donglin Di, Shibo Sun, Jiahui Yang, Changqing Zou, Hujun Bao
MoVE-KD: Knowledge Distillation for VLMs with Mixture of Visual Encoders
Jiajun Cao, Yuan Zhang, Tao Huang, Ming Lu, Qizhe Zhang, Ruichuan An, Ningning MA, Shanghang Zhang
Partition of Unity Physics-Informed Neural Networks (POU-PINNs): An Unsupervised Framework for Physics-Informed Domain Decomposition and Mixtures of Experts
Arturo Rodriguez, Ashesh Chattopadhyay, Piyush Kumar, Luis F. Rodriguez, Vinod Kumar
RSUniVLM: A Unified Vision Language Model for Remote Sensing via Granularity-oriented Mixture of Experts
Xu Liu, Zhouhui Lian
Mixture of Hidden-Dimensions Transformer
Yilong Chen, Junyuan Shang, Zhengyu Zhang, Jiawei Sheng, Tingwen Liu, Shuohuan Wang, Yu Sun, Hua Wu, Haifeng Wang