Eliciting Knowledge
Eliciting knowledge from large language models (LLMs) focuses on extracting and applying the implicit knowledge embedded in these models to tasks such as reasoning, decision-making, and knowledge-grounded conversation. Current research emphasizes techniques such as chain-of-thought prompting, strategic knowledge integration, and iterative learning, typically built on transformer architectures, to improve the reliability and accuracy of the extracted knowledge. This work is significant because it deepens the understanding and control of LLMs, improving performance on complex reasoning tasks and potentially benefiting fields such as behavioral economics, human-robot interaction, and fair machine learning.
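As a minimal sketch of one elicitation technique mentioned above, chain-of-thought prompting asks the model to reason step by step before answering, which tends to surface more of its implicit knowledge than a direct question. Here `call_model` is a hypothetical stand-in for any LLM completion API, returning a canned completion purely for illustration; the prompt template and answer-extraction logic are assumptions, not a method from any of the papers listed below.

```python
def call_model(prompt: str) -> str:
    # Hypothetical stub; in practice this would be a real LLM API call.
    # The canned completion mimics a typical step-by-step response.
    return ("Step 1: The store had 23 apples. "
            "Step 2: It sold 9, leaving 23 - 9 = 14. "
            "Answer: 14")

def chain_of_thought(question: str) -> tuple[str, str]:
    """Build a chain-of-thought prompt, query the model, and split the
    completion into its reasoning steps and the final answer."""
    prompt = f"Q: {question}\nA: Let's think step by step."
    completion = call_model(prompt)
    # Everything before the final "Answer:" marker is treated as reasoning.
    reasoning, _, answer = completion.rpartition("Answer:")
    return reasoning.strip(), answer.strip()

reasoning, answer = chain_of_thought(
    "A store had 23 apples and sold 9. How many are left?")
print(answer)  # prints the extracted final answer, here "14"
```

The split on the `"Answer:"` marker is one simple convention for separating elicited reasoning from the final answer; real pipelines often use structured output formats instead.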
Papers
Looking Inward: Language Models Can Learn About Themselves by Introspection
Felix J Binder, James Chua, Tomek Korbak, Henry Sleight, John Hughes, Robert Long, Ethan Perez, Miles Turpin, Owain Evans
Eliciting Uncertainty in Chain-of-Thought to Mitigate Bias against Forecasting Harmful User Behaviors
Anthony Sicilia, Malihe Alikhani