Knowledge-Based
Knowledge-based systems research focuses on effectively integrating and utilizing knowledge within artificial intelligence, primarily aiming to improve the accuracy, reliability, and interpretability of AI models. Current research emphasizes enhancing large language models (LLMs) with external knowledge graphs, employing techniques like retrieval-augmented generation and knowledge distillation to overcome limitations such as hallucinations and catastrophic forgetting. This work is significant because it addresses critical challenges in AI, leading to more robust and trustworthy systems with applications in diverse fields like education, healthcare, and materials science.
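The retrieval-augmented generation approach mentioned above can be illustrated with a minimal sketch: retrieve the passages most relevant to a query from an external corpus, then prepend them to the prompt so the model's answer is grounded in that knowledge. The corpus, word-overlap scorer, and prompt template below are illustrative toys, not any paper's actual method; real systems use dense embeddings and a knowledge graph or document store.

```python
def score(query: str, passage: str) -> float:
    """Word-overlap relevance score (a toy stand-in for dense embeddings)."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k passages in the corpus by relevance to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Augment the query with retrieved context before sending it to an LLM."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Tiny illustrative corpus of external knowledge.
corpus = [
    "Knowledge graphs store facts as subject-relation-object triples.",
    "Catastrophic forgetting erases old skills during fine-tuning.",
    "Retrieval augmentation grounds model outputs in external documents.",
]

print(build_prompt("How does retrieval augmentation reduce hallucinations?", corpus))
```

Because the generator only sees retrieved text at inference time, the external corpus can be updated without retraining, which is one reason retrieval augmentation helps with both hallucinations and catastrophic forgetting.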
Papers
Tug-of-War Between Knowledge: Exploring and Resolving Knowledge Conflicts in Retrieval-Augmented Language Models
Zhuoran Jin, Pengfei Cao, Yubo Chen, Kang Liu, Xiaojian Jiang, Jiexin Xu, Qiuxia Li, Jun Zhao
Hint-before-Solving Prompting: Guiding LLMs to Effectively Utilize Encoded Knowledge
Jinlan Fu, Shenzhen Huangfu, Hang Yan, See-Kiong Ng, Xipeng Qiu
GRAFFORD: A Benchmark Dataset for Testing the Knowledge of Object Affordances of Language and Vision Models
Sayantan Adak, Daivik Agrawal, Animesh Mukherjee, Somak Aditya
Scalable and reliable deep transfer learning for intelligent fault detection via multi-scale neural processes embedded with knowledge
Zhongzhi Li, Jingqi Tu, Jiacheng Zhu, Jianliang Ai, Yiqun Dong