Expert Knowledge
Expert knowledge integration in machine learning aims to leverage human expertise to improve model performance and interpretability, addressing the limitations of purely data-driven approaches. Current research focuses on methods such as Mixture-of-Experts (MoE) architectures, which combine specialized sub-models for greater efficiency and adaptability, and techniques for upcycling pre-trained models with domain-specific knowledge. These advances improve model accuracy, efficiency, and trustworthiness across applications ranging from medical image analysis to natural language processing and time series forecasting.
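To make the MoE idea referenced above concrete, the following is a minimal sketch of a top-k routed Mixture-of-Experts layer in PyTorch: a router scores each token against a set of expert feed-forward networks and combines the outputs of the highest-scoring experts. The class name `SimpleMoE`, the expert count, and the layer sizes are illustrative assumptions and are not taken from any of the papers listed below.

```python
# Minimal sketch of a top-k gated Mixture-of-Experts layer (illustrative only;
# names and hyperparameters are assumptions, not drawn from the listed papers).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleMoE(nn.Module):
    def __init__(self, d_model: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Each "expert" is a small feed-forward network intended to
        # specialize on a subset of the input distribution.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        ])
        # The router scores every token against every expert.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) flattened into a list of tokens
        tokens = x.reshape(-1, x.size(-1))
        logits = self.router(tokens)                       # (n_tokens, num_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)               # renormalize over the chosen experts
        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e               # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape_as(x)


if __name__ == "__main__":
    layer = SimpleMoE(d_model=32)
    y = layer(torch.randn(2, 8, 32))
    print(y.shape)  # torch.Size([2, 8, 32])
```

Only the selected experts process each token, which is what gives MoE layers their efficiency: parameter count grows with the number of experts while per-token compute stays roughly constant.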
Papers
An Expert is Worth One Token: Synergizing Multiple Expert LLMs as Generalist via Expert Token Routing
Ziwei Chai, Guoyin Wang, Jing Su, Tianjie Zhang, Xuanwen Huang, Xuwu Wang, Jingjing Xu, Jianbo Yuan, Hongxia Yang, Fei Wu, Yang Yang
EDUE: Expert Disagreement-Guided One-Pass Uncertainty Estimation for Medical Image Segmentation
Kudaibergen Abutalip, Numan Saeed, Ikboljon Sobirov, Vincent Andrearczyk, Adrien Depeursinge, Mohammad Yaqub
MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models
Tongxu Luo, Jiahe Lei, Fangyu Lei, Weihao Liu, Shizhu He, Jun Zhao, Kang Liu
Denoising OCT Images Using Steered Mixture of Experts with Multi-Model Inference
Aytaç Özkan, Elena Stoykova, Thomas Sikora, Violeta Madjarova
HyperMoE: Towards Better Mixture of Experts via Transferring Among Experts
Hao Zhao, Zihan Qiu, Huijia Wu, Zili Wang, Zhaofeng He, Jie Fu