Expert Knowledge
Expert knowledge integration in machine learning aims to leverage human expertise to improve model performance and interpretability, addressing the limitations of purely data-driven approaches. Current research centers on two directions: Mixture-of-Experts (MoE) architectures, which combine specialized sub-models for greater efficiency and adaptability, and techniques for upcycling pre-trained models with domain-specific knowledge. These advances improve model accuracy, efficiency, and trustworthiness across diverse applications, from medical image analysis to natural language processing and time series forecasting.
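To make the MoE idea concrete, here is a minimal sketch of a softmax-gated Mixture-of-Experts layer in PyTorch: a gating network assigns per-input weights to a set of small expert networks and combines their outputs as a weighted sum. The class name, dimensions, and two-layer expert design are illustrative assumptions, not an implementation taken from any of the papers listed below.

```python
# Minimal softmax-gated MoE sketch (dense routing over all experts).
# Purely illustrative; names and sizes are assumptions, not from the listed papers.
import torch
import torch.nn as nn


class SoftmaxGatedMoE(nn.Module):
    def __init__(self, d_in: int, d_out: int, n_experts: int = 4):
        super().__init__()
        # Each expert is a small, independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_in, d_out), nn.ReLU(), nn.Linear(d_out, d_out))
            for _ in range(n_experts)
        )
        # The gate produces a probability distribution over experts per input.
        self.gate = nn.Linear(d_in, n_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)            # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], 1)   # (batch, n_experts, d_out)
        # Combine expert outputs as a convex combination given by the gate.
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)


if __name__ == "__main__":
    layer = SoftmaxGatedMoE(d_in=16, d_out=8)
    y = layer(torch.randn(32, 16))
    print(y.shape)  # torch.Size([32, 8])
```

Sparse variants keep only the top-k gate weights per input so that just a few experts run, which is where the efficiency gains of large MoE models come from.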
Papers
Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization
James Oldfield, Markos Georgopoulos, Grigorios G. Chrysos, Christos Tzelepis, Yannis Panagakis, Mihalis A. Nicolaou, Jiankang Deng, Ioannis Patras
Unraveling Complex Data Diversity in Underwater Acoustic Target Recognition through Convolution-based Mixture of Experts
Yuan Xie, Jiawei Ren, Ji Xu
Approximation Rates and VC-Dimension Bounds for (P)ReLU MLP Mixture of Experts
Anastasis Kratsios, Haitz Sáez de Ocáriz Borde, Takashi Furuya, Marc T. Law
On Least Square Estimation in Softmax Gating Mixture of Experts
Huy Nguyen, Nhat Ho, Alessandro Rinaldo
InterpretCC: Intrinsic User-Centric Interpretability through Global Mixture of Experts
Vinitra Swamy, Syrielle Montariol, Julian Blackwell, Jibril Frej, Martin Jaggi, Tanja Käser
Explainable data-driven modeling via mixture of experts: towards effective blending of grey and black-box models
Jessica Leoni, Valentina Breschi, Simone Formentin, Mara Tanelli
Checkmating One, by Using Many: Combining Mixture of Experts with MCTS to Improve in Chess
Felix Helfenstein, Jannis Blüml, Johannes Czech, Kristian Kersting