New Knowledge
Research on incorporating new knowledge into large language models (LLMs) focuses on improving their ability to learn and adapt to constantly evolving information, addressing the limitations of their static knowledge bases. Current efforts explore methods such as supervised fine-tuning, retrieval-augmented generation, and self-teaching strategies, often building on architectures like BERT and GPT, to enhance knowledge acquisition and retention while mitigating issues such as catastrophic forgetting. This research is crucial for developing more robust and adaptable AI systems capable of handling real-world complexity and providing accurate, up-to-date information across diverse applications.
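As an illustration of one of these methods, the sketch below shows the core loop of retrieval-augmented generation under simplified assumptions: a toy bag-of-words lexical retriever stands in for a real dense retriever, the document store and query are hypothetical, and the final call to a language model is omitted. It is a minimal sketch of the technique, not any specific system from the papers surveyed here.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the
# passages most relevant to a query, then prepend them to the prompt
# so the model can answer from current text rather than relying only
# on its static parametric knowledge. All documents are toy examples.
from collections import Counter
import math


def bow(text: str) -> Counter:
    """Bag-of-words term counts for a naive lexical retriever."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
        math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a context-augmented prompt for an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


docs = [
    "The 2024 model release added support for longer context windows.",
    "Catastrophic forgetting occurs when fine-tuning overwrites prior knowledge.",
    "Retrieval systems fetch documents relevant to a user query.",
]
print(build_prompt("What is catastrophic forgetting?", docs))
# The assembled prompt would then be passed to any LLM for generation.
```

Because the new knowledge lives in the document store rather than in model weights, updating the system amounts to updating the store, which is why retrieval-based approaches sidestep the catastrophic forgetting that fine-tuning can introduce.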