Domain Knowledge
Integrating domain knowledge into large language models (LLMs) is a crucial research area aimed at enhancing the accuracy, reliability, and explainability of LLMs on domain-specific tasks. Current efforts incorporate domain knowledge through methods such as knowledge graphs, ontologies, and retrieval-augmented generation (RAG), often employing architectures like mixture-of-experts models and neurosymbolic agents. This research matters because it addresses the limitations of general-purpose LLMs in specialized fields, improving performance in applications ranging from medical diagnosis to scientific discovery and financial analysis.
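To make the RAG idea mentioned above concrete, here is a minimal sketch of the retrieve-then-prompt pattern. Everything in it is illustrative: term-overlap scoring stands in for embedding similarity, and the assembled prompt string stands in for an actual LLM call, which a real system would make with a dense retriever and a model API.

```python
# Minimal RAG sketch: retrieve domain snippets, then ground the prompt in them.
# Term overlap is a stand-in for embedding similarity; no real LLM is called.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by query-term overlap and return the top-k."""
    q_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend retrieved snippets so the model answers from grounded
    context rather than parametric memory alone."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical domain corpus for illustration.
corpus = [
    "Warfarin dosing is sensitive to dietary vitamin K intake.",
    "Intermodal freight combines rail and truck transport legs.",
    "Ontologies define shared vocabularies for a domain.",
]
prompt = build_prompt(
    "How does warfarin interact with diet?",
    retrieve("warfarin diet interaction", corpus),
)
```

The resulting `prompt` string would then be sent to an LLM; the retrieval step is what injects domain knowledge the base model may lack or misremember.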
Papers
Towards Next-Generation Urban Decision Support Systems through AI-Powered Construction of Scientific Ontology using Large Language Models -- A Case in Optimizing Intermodal Freight Transportation
Jose Tupayachi, Haowen Xu, Olufemi A. Omitaomu, Mustafa Can Camur, Aliza Sharmin, Xueping Li
Auto-selected Knowledge Adapters for Lifelong Person Re-identification
Xuelin Qian, Ruiqi Wu, Gong Cheng, Junwei Han
Automated Real-World Sustainability Data Generation from Images of Buildings
Peter J Bentley, Soo Ling Lim, Rajat Mathur, Sid Narang
More Than Catastrophic Forgetting: Integrating General Capabilities For Domain-Specific LLMs
Chengyuan Liu, Yangyang Kang, Shihang Wang, Lizhi Qing, Fubang Zhao, Changlong Sun, Kun Kuang, Fei Wu
Stochastic Adversarial Networks for Multi-Domain Text Classification
Xu Wang, Yuan Wu
Knowledge-Informed Auto-Penetration Testing Based on Reinforcement Learning with Reward Machine
Yuanliang Li, Hanzheng Dai, Jun Yan
Clustered Retrieved Augmented Generation (CRAG)
Simon Akesson, Frances A. Santos
Learning Beyond Pattern Matching? Assaying Mathematical Understanding in LLMs
Siyuan Guo, Aniket Didolkar, Nan Rosemary Ke, Anirudh Goyal, Ferenc Huszár, Bernhard Schölkopf
Embedding-Aligned Language Models
Guy Tennenholtz, Yinlam Chow, Chih-Wei Hsu, Lior Shani, Ethan Liang, Craig Boutilier