Domain Knowledge
Integrating domain knowledge into large language models (LLMs) is a crucial research area aimed at enhancing the accuracy, reliability, and explainability of LLMs on domain-specific tasks. Current efforts incorporate domain knowledge through methods such as knowledge graphs, ontologies, and retrieval-augmented generation (RAG), often using architectures like mixture-of-experts models and neurosymbolic agents. This research matters because it addresses the limitations of general-purpose LLMs in specialized fields, improving performance in applications ranging from medical diagnosis to scientific discovery and financial analysis.
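To make the RAG pattern mentioned above concrete, the following is a minimal sketch of how domain documents might be retrieved and prepended to a prompt before it is sent to an LLM. All names, the toy clinical corpus, and the bag-of-words cosine retriever (standing in for a real embedding-based vector store) are illustrative assumptions, not taken from any of the papers listed below.

```python
from collections import Counter
import math

# Hypothetical in-memory corpus of domain documents (e.g., clinical guidelines).
DOMAIN_DOCS = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "HbA1c above 6.5% on two separate tests indicates diabetes.",
    "ACE inhibitors are recommended for diabetic patients with hypertension.",
]

def _tf(text: str) -> Counter:
    """Bag-of-words term frequencies for a naive similarity-based retriever."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k domain documents most similar to the query."""
    q = _tf(query)
    ranked = sorted(DOMAIN_DOCS, key=lambda d: _cosine(q, _tf(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Prepend retrieved domain facts to the question before calling an LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return f"Use the following domain facts to answer.\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt("What is the first-line drug for type 2 diabetes?"))
```

In a production system the keyword retriever would typically be replaced by dense-embedding search over a vector index, and the assembled prompt would be passed to the model of choice; the grounding-then-generation structure remains the same.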
Papers
ChatGPT-HealthPrompt. Harnessing the Power of XAI in Prompt-Based Healthcare Decision Support using ChatGPT
Fatemeh Nazary, Yashar Deldjoo, Tommaso Di Noia
Knowledge-inspired Subdomain Adaptation for Cross-Domain Knowledge Transfer
Liyue Chen, Linian Wang, Jinyu Xu, Shuai Chen, Weiqiang Wang, Wenbiao Zhao, Qiyu Li, Leye Wang