Domain Knowledge
Integrating domain knowledge into large language models (LLMs) is a crucial research area that aims to improve the accuracy, reliability, and explainability of these models on domain-specific tasks. Current efforts incorporate domain knowledge through methods such as knowledge graphs, ontologies, and retrieval-augmented generation (RAG), often using architectures like mixture-of-experts models and neurosymbolic agents. This research matters because it addresses the limitations of general-purpose LLMs in specialized fields, yielding better performance in applications ranging from medical diagnosis to scientific discovery and financial analysis.
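To make the retrieval-augmented generation pattern mentioned above concrete, the following minimal Python sketch shows one way a small domain-specific corpus can be searched and the retrieved passages prepended to an LLM prompt. The corpus, the bag-of-words scorer, and the `call_llm` placeholder are illustrative assumptions for this sketch, not the method of any paper listed below.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over a small
# domain-specific corpus. DOMAIN_CORPUS, score, retrieve, build_prompt,
# and call_llm are hypothetical names used only for illustration.
from collections import Counter

DOMAIN_CORPUS = [
    "Fine concrete cracks under 0.3 mm width are typically monitored rather than repaired.",
    "Discrete choice models relate travel decisions to utility functions over alternatives.",
    "Humanitarian response entries are classified against sector-specific analytical frameworks.",
]

def score(query: str, passage: str) -> int:
    """Count overlapping tokens between query and passage (toy relevance score)."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages with the highest overlap score."""
    ranked = sorted(corpus, key=lambda passage: score(query, passage), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Prepend the retrieved domain knowledge to the user question."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Use the following domain knowledge:\n{context}\n\nQuestion: {query}\nAnswer:"

def call_llm(prompt: str) -> str:
    """Placeholder for an actual LLM API call (assumption, not a real client)."""
    return f"[LLM response conditioned on a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    question = "How should fine concrete cracks be handled?"
    prompt = build_prompt(question, retrieve(question, DOMAIN_CORPUS))
    print(call_llm(prompt))
```

In practice the keyword scorer would be replaced by dense embedding retrieval and `call_llm` by a real model endpoint, but the structure, retrieve domain passages and condition the prompt on them, is the same.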
Papers
TrueDeep: A systematic approach of crack detection with less data
Ram Krishna Pandey, Akshit Achara
Incorporating Domain Knowledge in Deep Neural Networks for Discrete Choice Models
Shadi Haj-Yahia, Omar Mansour, Tomer Toledo
Domain Specialization as the Key to Make Large Language Models Disruptive: A Comprehensive Survey
Chen Ling, Xujiang Zhao, Jiaying Lu, Chengyuan Deng, Can Zheng, Junxiang Wang, Tanmoy Chowdhury, Yun Li, Hejie Cui, Xuchao Zhang, Tianjiao Zhao, Amit Panalkar, Dhagash Mehta, Stefano Pasquali, Wei Cheng, Haoyu Wang, Yanchi Liu, Zhengzhang Chen, Haifeng Chen, Chris White, Quanquan Gu, Jian Pei, Carl Yang, Liang Zhao
MultiTool-CoT: GPT-3 Can Use Multiple External Tools with Chain of Thought Prompting
Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, Sadao Kurohashi
Leveraging Domain Knowledge for Inclusive and Bias-aware Humanitarian Response Entry Classification
Nicolò Tamagnone, Selim Fekih, Ximena Contla, Nayid Orozco, Navid Rekabsaz