Robust Reasoning
Robust reasoning in large language models (LLMs) focuses on enhancing their ability to carry out complex, multi-step logical inference while avoiding errors such as factual inaccuracies and biases. Current research emphasizes frameworks that model reasoning as an iterative process (e.g., a directed acyclic graph of intermediate steps), integration of external tools and knowledge bases to augment LLM capabilities, and techniques such as contrastive learning and rule-based reasoning to improve accuracy and robustness. This work is crucial for building trustworthy, reliable LLMs in fields where sound reasoning is paramount, including fake news detection, video anomaly detection, and robotic manipulation.
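To make the DAG formulation concrete, here is a minimal sketch of reasoning modeled as a directed acyclic graph of steps, where each node's prompt is built from its parents' outputs and steps are executed in topological order. The names `call_llm`, `Step`, and `run_reasoning_dag` are illustrative assumptions, not an API from any specific paper or library.

```python
# Sketch: reasoning as a DAG of steps, evaluated in topological order.
# `call_llm` is a hypothetical stand-in for a real LLM API client.
from dataclasses import dataclass, field
from graphlib import TopologicalSorter


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real client in practice."""
    return f"<answer to: {prompt[:40]}...>"


@dataclass
class Step:
    name: str
    instruction: str
    parents: list[str] = field(default_factory=list)


def run_reasoning_dag(steps: dict[str, Step], question: str) -> dict[str, str]:
    """Execute steps in dependency order, feeding parent outputs into each prompt."""
    order = TopologicalSorter({s.name: set(s.parents) for s in steps.values()})
    outputs: dict[str, str] = {}
    for name in order.static_order():
        step = steps[name]
        context = "\n".join(f"{p}: {outputs[p]}" for p in step.parents)
        prompt = f"Question: {question}\n{context}\nTask: {step.instruction}"
        outputs[name] = call_llm(prompt)
    return outputs


# Example: decompose the problem, solve sub-problems on separate branches,
# then verify the combined result in a final node.
steps = {
    "decompose": Step("decompose", "Break the question into sub-problems."),
    "solve_a": Step("solve_a", "Solve the first sub-problem.", ["decompose"]),
    "solve_b": Step("solve_b", "Solve the second sub-problem.", ["decompose"]),
    "verify": Step("verify", "Check consistency and give a final answer.",
                   ["solve_a", "solve_b"]),
}
print(run_reasoning_dag(steps, "Is the conclusion supported by the premises?"))
```

Structuring the steps as a DAG rather than a single chain lets independent sub-problems be solved on separate branches and cross-checked at a verification node, which is where much of the robustness benefit comes from.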
Papers