Knowledge-Based
Knowledge-based systems research focuses on effectively integrating and utilizing knowledge within artificial intelligence, primarily aiming to improve the accuracy, reliability, and interpretability of AI models. Current research emphasizes enhancing large language models (LLMs) with external knowledge graphs, employing techniques like retrieval-augmented generation and knowledge distillation to overcome limitations such as hallucinations and catastrophic forgetting. This work is significant because it addresses critical challenges in AI, leading to more robust and trustworthy systems with applications in diverse fields like education, healthcare, and materials science.
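The retrieval-augmented generation technique mentioned above can be illustrated with a minimal sketch: retrieve the passages most relevant to a query, then prepend them to the prompt so the model grounds its answer in external knowledge rather than relying on parametric memory alone. The word-overlap retriever and corpus below are toy stand-ins for the dense retrievers and knowledge-graph stores used in the cited research.

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared word tokens (stand-in for a dense retriever)."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k documents ranked by overlap with the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Prepend retrieved passages so the LLM can ground its answer in them."""
    context = "\n".join(f"- {d}" for d in retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical mini-corpus for illustration only.
corpus = [
    "Knowledge graphs store facts as subject-predicate-object triples.",
    "Retrieval-augmented generation conditions an LLM on retrieved text.",
    "Catastrophic forgetting erodes old knowledge during fine-tuning.",
]
print(build_prompt("What is retrieval-augmented generation?", corpus, k=1))
```

In a full system, the assembled prompt would be sent to an LLM; grounding the answer in retrieved text is what reduces hallucinations, since the model can quote evidence instead of inventing it.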
Papers
Preserving Knowledge in Large Language Model with Model-Agnostic Self-Decompression
Zilun Zhang, Yutao Sun, Tiancheng Zhao, Leigang Sha, Ruochen Xu, Kyusong Lee, Jianwei Yin
In-Context Editing: Learning Knowledge from Self-Induced Distributions
Siyuan Qi, Bangcheng Yang, Kailin Jiang, Xiaobo Wang, Jiaqi Li, Yifan Zhong, Yaodong Yang, Zilong Zheng