Knowledge Representation
Knowledge representation (KR) develops methods that let computers store, access, and reason over information in ways that mirror human cognitive abilities. Current research emphasizes combining symbolic knowledge graphs (KGs) with the generative power of large language models (LLMs), often through neural-symbolic approaches and reinforcement learning, to improve accuracy and efficiency on tasks such as question answering and knowledge editing. This hybrid approach addresses the limitations of each component: the limited scalability of KGs, and the hallucinations and difficulty of knowledge manipulation in LLMs. It has significant implications for applications ranging from automated reasoning and decision-making to improved human-computer interaction and cultural knowledge preservation.
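To make the hybrid idea concrete, the sketch below shows one minimal pattern for grounding an LLM answer in a knowledge graph: retrieve triples relevant to a question and serialize them into the prompt so the model answers from explicit facts rather than parametric memory alone. The in-memory triple store, the keyword-matching retrieval heuristic, and the prompt template are illustrative assumptions, not the method of any specific paper listed here.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# Toy knowledge graph; a real system would query a KG store such as a SPARQL endpoint.
KG: List[Triple] = [
    ("Marie Curie", "field", "physics"),
    ("Marie Curie", "award", "Nobel Prize in Physics"),
    ("Marie Curie", "award", "Nobel Prize in Chemistry"),
]


def retrieve_facts(question: str, kg: List[Triple], k: int = 5) -> List[Triple]:
    """Naive retrieval: keep triples whose subject or object appears in the question."""
    q = question.lower()
    hits = [t for t in kg if t[0].lower() in q or t[2].lower() in q]
    return hits[:k]


def build_grounded_prompt(question: str, facts: List[Triple]) -> str:
    """Serialize retrieved triples as context so the LLM is constrained to cited facts."""
    context = "\n".join(f"- {s} {r} {o}" for s, r, o in facts)
    return (
        "Answer using only the facts below; say 'unknown' if they are insufficient.\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )


if __name__ == "__main__":
    question = "Which awards did Marie Curie receive?"
    prompt = build_grounded_prompt(question, retrieve_facts(question, KG))
    print(prompt)  # pass the prompt to an LLM client of your choice
```

Grounding answers in retrieved triples is one simple way to mitigate hallucination; the papers below explore richer variants of this KG-LLM integration.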
Papers
Dialogue Possibilities between a Human Supervisor and UAM Air Traffic Management: Route Alteration
Jeongseok Kim, Kangjin Kim
Large Language Models and Knowledge Graphs: Opportunities and Challenges
Jeff Z. Pan, Simon Razniewski, Jan-Christoph Kalo, Sneha Singhania, Jiaoyan Chen, Stefan Dietze, Hajira Jabeen, Janna Omeliyanenko, Wen Zhang, Matteo Lissandrini, Russa Biswas, Gerard de Melo, Angela Bonifati, Edlira Vakaj, Mauro Dragoni, Damien Graux