Domain Knowledge
Integrating domain knowledge into large language models (LLMs) is a crucial research area aimed at enhancing their accuracy, reliability, and explainability on domain-specific tasks. Current efforts incorporate domain knowledge through methods such as knowledge graphs, ontologies, and retrieval-augmented generation (RAG), often using architectures such as mixture-of-experts models and neurosymbolic agents. This research matters because it addresses the limitations of general-purpose LLMs in specialized fields, improving performance in applications ranging from medical diagnosis to scientific discovery and financial analysis.
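To make the RAG pattern mentioned above concrete, here is a minimal sketch in Python: a toy domain knowledge base, a naive token-overlap retriever, and a prompt that prepends retrieved facts before the query would be sent to an LLM. The knowledge base contents, scoring function, and prompt template are illustrative assumptions, not the method of any paper listed below.

    # Minimal RAG sketch: retrieve domain facts, then build an LLM prompt.
    # The facts and prompt wording below are hypothetical examples.
    KNOWLEDGE_BASE = [
        "Metformin is a first-line treatment for type 2 diabetes.",
        "Reward machines encode task structure for reinforcement learning.",
        "Knowledge graphs represent entities and their typed relations.",
    ]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Rank snippets by naive token overlap with the query; return top k."""
        query_tokens = set(query.lower().split())
        scored = sorted(
            KNOWLEDGE_BASE,
            key=lambda snippet: len(query_tokens & set(snippet.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def build_prompt(query: str) -> str:
        """Prepend retrieved domain facts to the user query as grounding context."""
        context = "\n".join(f"- {fact}" for fact in retrieve(query))
        return f"Use the following domain facts:\n{context}\n\nQuestion: {query}"

    print(build_prompt("What is the first-line treatment for type 2 diabetes?"))

In a real system the token-overlap scorer would be replaced by a dense, sparse, or graph-based retriever, and the assembled prompt would be passed to an LLM rather than printed.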
Papers
The Impact of Domain Knowledge and Multi-Modality on Intelligent Molecular Property Prediction: A Systematic Survey
Taojie Kuang, Pengfei Liu, Zhixiang Ren
Using Large Language Models to Automate and Expedite Reinforcement Learning with Reward Machine
Shayan Meshkat Alsadat, Jean-Raphael Gaglione, Daniel Neider, Ufuk Topcu, Zhe Xu
Assessing the Portability of Parameter Matrices Trained by Parameter-Efficient Finetuning Methods
Mohammed Sabry, Anya Belz
CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and Reasoning
Zheqi He, Xinya Wu, Pengfei Zhou, Richeng Xuan, Guang Liu, Xi Yang, Qiannan Zhu, Hua Huang
What the Weight?! A Unified Framework for Zero-Shot Knowledge Composition
Carolin Holtermann, Markus Frohmann, Navid Rekabsaz, Anne Lauscher
Context Matters: Pushing the Boundaries of Open-Ended Answer Generation with Graph-Structured Knowledge Context
Somnath Banerjee, Amruit Sahoo, Sayan Layek, Avik Dutta, Rima Hazra, Animesh Mukherjee