Knowledge-Based
Knowledge-based systems research focuses on integrating structured, external knowledge into artificial intelligence systems, primarily to improve the accuracy, reliability, and interpretability of AI models. Current work emphasizes augmenting large language models (LLMs) with external knowledge graphs, using techniques such as retrieval-augmented generation and knowledge distillation to mitigate limitations like hallucinations and catastrophic forgetting. This research matters because it addresses critical challenges in AI, yielding more robust and trustworthy systems with applications in fields as diverse as education, healthcare, and materials science.
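The retrieval-augmented pattern referenced above can be illustrated with a minimal sketch: facts relevant to a user's question are retrieved from a knowledge graph and prepended to the prompt, so the model grounds its answer in retrieved evidence rather than parametric memory alone. The toy triples and helper names below (retrieve_facts, build_prompt) are illustrative assumptions, not an API or method from any of the listed papers.

```python
# Minimal sketch of knowledge-graph retrieval-augmented generation.
# The knowledge graph, helper names, and example facts are hypothetical.

# A toy knowledge graph stored as (subject, relation, object) triples.
KNOWLEDGE_GRAPH = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
]

def retrieve_facts(query: str, kg=KNOWLEDGE_GRAPH, limit: int = 5):
    """Return triples whose subject or object appears in the query."""
    terms = {w.strip("?.,!").lower() for w in query.split()}
    hits = [t for t in kg if t[0] in terms or t[2] in terms]
    return hits[:limit]

def build_prompt(query: str) -> str:
    """Prepend retrieved facts so the model answers from the graph."""
    facts = retrieve_facts(query)
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return f"Known facts:\n{context}\n\nQuestion: {query}\nAnswer:"

if __name__ == "__main__":
    # The augmented prompt would normally be sent to an LLM; printing it
    # keeps the sketch self-contained and runnable.
    print(build_prompt("Can I take aspirin with warfarin?"))
```

In practice, the lookup step would be replaced by entity linking and graph traversal or embedding-based retrieval over a real knowledge graph, but the overall flow (retrieve, serialize, prepend, generate) stays the same.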
Papers
Narrative Analysis of True Crime Podcasts With Knowledge Graph-Augmented Large Language Models
Xinyi Leng, Jason Liang, Jack Mauro, Xu Wang, Andrea L. Bertozzi, James Chapman, Junyuan Lin, Bohan Chen, Chenchen Ye, Temple Daniel, P. Jeffrey Brantingham
Capturing and Anticipating User Intents in Data Analytics via Knowledge Graphs
Gerard Pons, Besim Bilalli, Anna Queralt
Generative Design of Functional Metal Complexes Utilizing the Internal Knowledge of Large Language Models
Jieyu Lu, Zhangde Song, Qiyuan Zhao, Yuanqi Du, Yirui Cao, Haojun Jia, Chenru Duan
Information for Conversation Generation: Proposals Utilising Knowledge Graphs
Alex Clay, Ernesto Jiménez-Ruiz
Analysing the Residual Stream of Language Models Under Knowledge Conflicts
Yu Zhao, Xiaotang Du, Giwon Hong, Aryo Pradipta Gema, Alessio Devoto, Hongru Wang, Xuanli He, Kam-Fai Wong, Pasquale Minervini