Expert Knowledge
Expert knowledge integration in machine learning aims to leverage human expertise to improve model performance and interpretability, addressing the limitations of purely data-driven approaches. Current research incorporates expert knowledge through a range of methods, including Mixture-of-Experts (MoE) architectures that combine specialized sub-models for greater efficiency and adaptability, and techniques for upcycling pre-trained models with domain-specific knowledge. These advances improve model accuracy, efficiency, and trustworthiness across diverse applications, from medical image analysis to natural language processing and time series forecasting.
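To make the MoE idea concrete, below is a minimal sketch of a top-k gated Mixture-of-Experts layer in PyTorch. The class name, dimensions, and the softmax-over-selected-experts routing scheme are illustrative assumptions for exposition, not the method of any specific paper listed here.

```python
# Minimal sketch of a top-k gated Mixture-of-Experts (MoE) layer (assumed design, PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopKMoE(nn.Module):
    def __init__(self, d_model: int, d_hidden: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router: scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Experts: independent feed-forward networks specialized via routing.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for per-token routing.
        tokens = x.reshape(-1, x.shape[-1])
        logits = self.router(tokens)                            # (num_tokens, num_experts)
        weights, expert_ids = logits.topk(self.top_k, dim=-1)   # keep the k best experts per token
        weights = F.softmax(weights, dim=-1)                    # normalize over the chosen experts only

        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = expert_ids[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(tokens[mask])
        return out.reshape(x.shape)


# Usage: route a small batch of token embeddings through 8 experts, 2 active per token.
moe = TopKMoE(d_model=64, d_hidden=256, num_experts=8, top_k=2)
y = moe(torch.randn(4, 16, 64))  # output shape matches the input: (4, 16, 64)
```

Because only top_k experts run per token, compute stays close to that of a single expert while total parameter count scales with the number of experts, which is the efficiency/adaptability trade-off the papers below explore.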
Papers
Lived Experience Not Found: LLMs Struggle to Align with Experts on Addressing Adverse Drug Reactions from Psychiatric Medication Use
Mohit Chandra, Siddharth Sriraman, Gaurav Verma, Harneet Singh Khanuja, Jose Suarez Campayo, Zihang Li, Michael L. Birnbaum, Munmun De Choudhury
Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design
Ruisi Cai, Yeonju Ro, Geon-Woo Kim, Peihao Wang, Babak Ehteshami Bejnordi, Aditya Akella, Zhangyang Wang
Mixture of Parrots: Experts improve memorization more than reasoning
Samy Jelassi, Clara Mohri, David Brandfonbrener, Alex Gu, Nikhil Vyas, Nikhil Anand, David Alvarez-Melis, Yuanzhi Li, Sham M. Kakade, Eran Malach
PromptHive: Bringing Subject Matter Experts Back to the Forefront with Collaborative Prompt Engineering for Educational Content Creation
Mohi Reza, Ioannis Anastasopoulos, Shreya Bhandari, Zachary A. Pardos
CartesianMoE: Boosting Knowledge Sharing among Experts via Cartesian Product Routing in Mixture-of-Experts
Zhenpeng Su, Xing Wu, Zijia Lin, Yizhe Xiong, Minxuan Lv, Guangyuan Ma, Hui Chen, Songlin Hu, Guiguang Ding
Generalizing Motion Planners with Mixture of Experts for Autonomous Driving
Qiao Sun, Huimin Wang, Jiahao Zhan, Fan Nie, Xin Wen, Leimeng Xu, Kun Zhan, Peng Jia, Xianpeng Lang, Hang Zhao
InternLM2.5-StepProver: Advancing Automated Theorem Proving via Expert Iteration on Large-Scale LEAN Problems
Zijian Wu, Suozhi Huang, Zhejian Zhou, Huaiyuan Ying, Jiayu Wang, Dahua Lin, Kai Chen
Vital Insight: Assisting Experts' Sensemaking Process of Multi-modal Personal Tracking Data Using Visualization and LLM
Jiachen Li, Justin Steinberg, Xiwen Li, Akshat Choube, Bingsheng Yao, Dakuo Wang, Elizabeth Mynatt, Varun Mishra
MomentumSMoE: Integrating Momentum into Sparse Mixture of Experts
Rachel S.Y. Teo, Tan M. Nguyen
GaVaMoE: Gaussian-Variational Gated Mixture of Experts for Explainable Recommendation
Fei Tang, Yongliang Shen, Hang Zhang, Zeqi Tan, Wenqi Zhang, Guiyang Hou, Kaitao Song, Weiming Lu, Yueting Zhuang
Quadratic Gating Functions in Mixture of Experts: A Statistical Insight
Pedram Akbarian, Huy Nguyen, Xing Han, Nhat Ho
Moirai-MoE: Empowering Time Series Foundation Models with Sparse Mixture of Experts
Xu Liu, Juncheng Liu, Gerald Woo, Taha Aksu, Yuxuan Liang, Roger Zimmermann, Chenghao Liu, Silvio Savarese, Caiming Xiong, Doyen Sahoo
Tighter Risk Bounds for Mixtures of Experts
Wissam Akretche, Frédéric LeBlanc, Mario Marchand
Scalable Multi-Domain Adaptation of Language Models using Modular Experts
Peter Schafhalter, Shun Liao, Yanqi Zhou, Chih-Kuan Yeh, Arun Kandoor, James Laudon
Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models
Jun Luo, Chen Chen, Shandong Wu