Domain Knowledge
Integrating domain knowledge into large language models (LLMs) is a crucial research direction aimed at improving the accuracy, reliability, and explainability of LLMs on domain-specific tasks. Current efforts incorporate domain knowledge through methods such as knowledge graphs, ontologies, and retrieval-augmented generation (RAG), often using architectures such as mixture-of-experts models and neurosymbolic agents. This research matters because it addresses the limitations of general-purpose LLMs in specialized fields, yielding improved performance in applications ranging from medical diagnosis to scientific discovery and financial analysis.
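To make the RAG approach mentioned above concrete, here is a minimal sketch of retrieval-augmented generation: retrieve the domain passages most similar to a query, then prepend them to the prompt so the model answers with domain context. This is an illustrative assumption, not the pipeline of any paper listed below; the TF-IDF retriever, the toy corpus, and the `generate` function named in the final comment are all hypothetical stand-ins (production systems typically use a learned embedding model and a vector store).

```python
# Minimal RAG sketch: TF-IDF retrieval over a toy domain corpus,
# followed by prompt assembly. The `generate` call at the end is a
# hypothetical placeholder for any LLM API.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in for a domain knowledge base (e.g., medical guidelines,
# financial filings); real systems would index far larger collections.
corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "EBITDA measures earnings before interest, taxes, depreciation, and amortization.",
    "Knowledge graphs encode entities and typed relations between them.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer().fit(corpus + [query])
    doc_vecs = vectorizer.transform(corpus)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, doc_vecs)[0]
    top = scores.argsort()[::-1][:k]
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    """Ground the model by placing retrieved domain context before the question."""
    context = "\n".join(f"- {passage}" for passage in retrieve(query))
    return f"Answer using the context below.\nContext:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the first-line drug for type 2 diabetes?"))
# In a full pipeline the prompt would be passed to an LLM:
# answer = generate(build_prompt(query))  # `generate` is hypothetical
```

The design point this illustrates is that RAG injects domain knowledge at inference time through the prompt, whereas approaches like mixture-of-experts or knowledge distillation (see the papers below) bake it into the model's parameters.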
Papers
Enhancing Explainability in Multimodal Large Language Models Using Ontological Context
Jihen Amara, Birgitta König-Ries, Sheeba Samuel
SciDFM: A Large Language Model with Mixture-of-Experts for Science
Liangtai Sun, Danyu Luo, Da Ma, Zihan Zhao, Baocai Chen, Zhennan Shen, Su Zhu, Lu Chen, Xin Chen, Kai Yu
Knowledge Planning in Large Language Models for Domain-Aligned Counseling Summarization
Aseem Srivastava, Smriti Joshi, Tanmoy Chakraborty, Md Shad Akhtar
DSG-KD: Knowledge Distillation from Domain-Specific to General Language Models
Sangyeon Cho, Jangyeong Jeon, Dongjoon Lee, Changhee Lee, Junyeong Kim
Privacy Policy Analysis through Prompt Engineering for LLMs
Arda Goknil, Femke B. Gelderblom, Simeon Tverdal, Shukun Tokas, Hui Song