Knowledge-Based
Knowledge-based systems research focuses on effectively integrating and utilizing knowledge within artificial intelligence, primarily aiming to improve the accuracy, reliability, and interpretability of AI models. Current research emphasizes enhancing large language models (LLMs) with external knowledge graphs, employing techniques like retrieval-augmented generation and knowledge distillation to overcome limitations such as hallucinations and catastrophic forgetting. This work is significant because it addresses critical challenges in AI, leading to more robust and trustworthy systems with applications in diverse fields like education, healthcare, and materials science.
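The retrieval-augmented generation approach mentioned above can be sketched in miniature: retrieve the facts most relevant to a query from an external knowledge store, then ground the model's prompt in them so the answer draws on retrieved knowledge rather than parametric memory alone. The toy in-memory store, bag-of-words retriever, and function names below are illustrative assumptions; a real system would use a vector index and an actual LLM call.

```python
import re

# Toy external knowledge store (an assumption for illustration; real systems
# retrieve from a knowledge graph or a dense vector index).
KNOWLEDGE_BASE = [
    "Paris is the capital of France.",
    "The Eiffel Tower is located in Paris.",
    "Mount Everest is the tallest mountain on Earth.",
]

def tokenize(text):
    """Lowercase and split into word tokens, dropping punctuation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, knowledge_base, k=2):
    """Return the k facts with the greatest token overlap with the query."""
    q = tokenize(query)
    ranked = sorted(knowledge_base,
                    key=lambda fact: len(q & tokenize(fact)),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, facts):
    """Ground the generation step in retrieved facts to curb hallucination."""
    context = "\n".join(f"- {fact}" for fact in facts)
    return f"Answer using only these facts:\n{context}\n\nQuestion: {query}"

query = "What is the capital of France?"
prompt = build_prompt(query, retrieve(query, KNOWLEDGE_BASE))
print(prompt)
```

The prompt produced here would then be passed to the language model; because the model is asked to answer from the retrieved context, unsupported claims are easier to detect and suppress.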
Papers
Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures
Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, Fang Yi-shu
DLAMA: A Framework for Curating Culturally Diverse Facts for Probing the Knowledge of Pretrained Language Models
Amr Keleg, Walid Magdy
Injecting Knowledge into Biomedical Pre-trained Models via Polymorphism and Synonymous Substitution
Hongbo Zhang, Xiang Wan, Benyou Wang
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar