Knowledge Neurons
Research on "knowledge neurons" investigates how large language models (LLMs), particularly transformer-based architectures such as BERT and GPT, store and retrieve factual knowledge within their parameter spaces. Current work focuses on identifying the individual neurons or groups of neurons ("knowledge circuits") responsible for representing particular facts, analyzing how those representations are accessed during reasoning, and developing methods to edit or augment the stored knowledge. Understanding these mechanisms is crucial for improving LLMs' accuracy, reliability, and interpretability, ultimately leading to more robust and trustworthy AI systems.
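The core identification step in this line of work scores each hidden unit by an integrated-gradients-style attribution: how much a neuron's activation contributes to the model's probability of producing a target fact. Below is a minimal, self-contained sketch of that scoring recipe on a toy one-layer "FFN" with a linear readout; the model, the function name `neuron_attributions`, and all sizes are illustrative assumptions, not code from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a transformer FFN layer: hidden activations
# feed a linear readout that produces a logit for the target fact.
W1 = rng.normal(size=(8, 4))   # input -> hidden
w2 = rng.normal(size=8)        # hidden -> scalar logit

def hidden(x):
    # ReLU activations: the candidate "knowledge neurons".
    return np.maximum(0.0, W1 @ x)

def neuron_attributions(x, steps=20):
    """Riemann approximation of an integrated-gradients score per
    hidden activation: Attr(i) = h_i * (1/m) * sum_k d logit/d h_i,
    with the gradient taken at the scaled activation (k/m) * h."""
    h = hidden(x)
    scores = np.zeros_like(h)
    for k in range(1, steps + 1):
        # For this linear readout the gradient w.r.t. h is just w2,
        # independent of the interpolation point (k/m) * h; in a real
        # model this would come from backprop at each scaled input.
        grad = w2
        scores += grad
    return h * scores / steps

x = rng.normal(size=4)
attr = neuron_attributions(x)
top = int(np.argmax(np.abs(attr)))
print("attribution per neuron:", np.round(attr, 3))
print("top candidate knowledge neuron:", top)
```

In practice, neurons whose attribution exceeds a threshold across many prompts expressing the same fact are kept as that fact's knowledge neurons; editing methods then modify the weights feeding those units.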
Papers