Domain Knowledge
Integrating domain knowledge into large language models (LLMs) is a key research direction for improving the accuracy, reliability, and explainability of LLMs on domain-specific tasks. Current efforts incorporate domain knowledge through knowledge graphs, ontologies, and retrieval-augmented generation (RAG), often using architectures such as mixture-of-experts models and neurosymbolic agents. This work matters because it addresses the limitations of general-purpose LLMs in specialized fields, improving performance in applications ranging from medical diagnosis to scientific discovery and financial analysis.
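Of the methods above, RAG is the simplest to illustrate: retrieve the domain documents most relevant to a query and prepend them to the model's prompt. The sketch below is a minimal, hypothetical example using bag-of-words cosine similarity; the corpus, the `retrieve`/`build_prompt` helpers, and the prompt format are illustrative assumptions, not any specific paper's method, and a real system would use dense embeddings and an actual LLM call.

```python
# Minimal RAG sketch (illustrative only): score domain documents against a
# query with bag-of-words cosine similarity, then build an augmented prompt.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Return the k documents most similar to the query.
    q = Counter(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Prepend retrieved domain context; the final prompt would go to an LLM.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Toy domain corpus (made up for this sketch).
corpus = [
    "Metformin is a first-line treatment for type 2 diabetes.",
    "The CVE database catalogs publicly disclosed cybersecurity vulnerabilities.",
    "Wind turbine gearbox faults often show up as vibration anomalies.",
]
prompt = build_prompt("What is a first-line drug for type 2 diabetes?", corpus)
```

In practice the retriever is the main lever for domain adaptation: swapping the toy similarity function for embeddings trained on domain text (or an ontology-aware index, as in the cybersecurity-education paper below) changes what knowledge the LLM is grounded in without retraining the model.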
Papers
Learning to Solve Domain-Specific Calculation Problems with Knowledge-Intensive Programs Generator
Chengyuan Liu, Shihang Wang, Lizhi Qing, Jun Lin, Ji Zhang, Fei Wu, Kun Kuang
Speeding up approximate MAP by applying domain knowledge about relevant variables
Johan Kwisthout, Andrew Schroeder
CareBot: A Pioneering Full-Process Open-Source Medical Language Model
Lulu Zhao, Weihao Zeng, Xiaofeng Shi, Hua Zhou
Ontology-Aware RAG for Improved Question-Answering in Cybersecurity Education
Chengshuai Zhao, Garima Agrawal, Tharindu Kumarage, Zhen Tan, Yuli Deng, Ying-Chih Chen, Huan Liu
Performance Evaluation of ROS2-DDS middleware implementations facilitating Cooperative Driving in Autonomous Vehicle
Sumit Paul, Danh Lephuoc, Manfred Hauswirth
FedCoLLM: A Parameter-Efficient Federated Co-tuning Framework for Large and Small Language Models
Tao Fan, Yan Kang, Guoqiang Ma, Lixin Fan, Kai Chen, Qiang Yang
METEOR: Evolutionary Journey of Large Language Models from Guidance to Self-Growth
Jiawei Li, Chong Feng, Yang Gao
SayComply: Grounding Field Robotic Tasks in Operational Compliance through Retrieval-Based Language Models
Muhammad Fadhil Ginting, Dong-Ki Kim, Sung-Kyun Kim, Bandi Jai Krishna, Mykel J. Kochenderfer, Shayegan Omidshafiei, Ali-akbar Agha-mohammadi
VersaTune: An Efficient Data Composition Framework for Training Multi-Capability LLMs
Keer Lu, Keshi Zhao, Zheng Liang, Da Pan, Shusen Zhang, Xin Wu, Weipeng Chen, Zenan Zhou, Guosheng Dong, Bin Cui, Wentao Zhang
Supervised Transfer Learning Framework for Fault Diagnosis in Wind Turbines
Kenan Weber, Christine Preisach
A Comprehensive Survey of Small Language Models in the Era of Large Language Models: Techniques, Enhancements, Applications, Collaboration with LLMs, and Trustworthiness
Fali Wang, Zhiwei Zhang, Xianren Zhang, Zongyu Wu, Tzuhao Mo, Qiuhao Lu, Wanjing Wang, Rui Li, Junjie Xu, Xianfeng Tang, Qi He, Yao Ma, Ming Huang, Suhang Wang
DARD: A Multi-Agent Approach for Task-Oriented Dialog Systems
Aman Gupta, Anirudh Ravichandran, Ziji Zhang, Swair Shah, Anurag Beniwal, Narayanan Sadagopan
Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation
Bohan Lyu, Yadi Cao, Duncan Watson-Parris, Leon Bergen, Taylor Berg-Kirkpatrick, Rose Yu