Rationale Generation
Rationale generation aims to make AI systems explainable by producing textual justifications for model predictions, with the goal of improving transparency, trustworthiness, and robustness. Current research emphasizes large language models (LLMs) in a variety of architectures, often combined with techniques such as chain-of-thought prompting, preference optimization, and contrastive learning to improve rationale quality and alignment with human reasoning. The field is significant for advancing explainable AI, with impact on automated assessment, multimodal question answering, and the safety and reliability of AI systems in high-stakes applications.
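Chain-of-thought-style prompting is one common way to elicit rationales from an LLM: the model is asked to write a justification before committing to a prediction, and the two parts are then parsed apart so the rationale can be stored or evaluated alongside the answer. The sketch below illustrates that pattern only; `call_llm`, `generate_rationale`, and the prompt wording are hypothetical placeholders rather than the method of any particular paper, and the stubbed model call would be replaced by a real LLM client.

```python
# Minimal sketch of chain-of-thought-style rationale generation.
# `call_llm` is a hypothetical stand-in for an LLM client; swap in a real
# API or local-model call to use it in practice.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; returns a canned response so the sketch runs."""
    return ("Rationale: The review praises the acting and pacing, "
            "which signals a positive opinion.\nLabel: positive")


def generate_rationale(task_input: str, label_set: list[str]) -> dict:
    """Ask the model to justify its reasoning before committing to a label."""
    prompt = (
        "Classify the text and justify your answer.\n"
        f"Text: {task_input}\n"
        f"Possible labels: {', '.join(label_set)}\n"
        "First write 'Rationale:' followed by a short justification, "
        "then 'Label:' followed by one of the possible labels."
    )
    response = call_llm(prompt)
    # Split the free-text justification from the final prediction.
    rationale, _, label = response.partition("Label:")
    return {
        "rationale": rationale.replace("Rationale:", "").strip(),
        "label": label.strip(),
    }


if __name__ == "__main__":
    result = generate_rationale(
        "The acting was superb and the pacing never dragged.",
        ["positive", "negative"],
    )
    print(result["rationale"])
    print(result["label"])
```

Keeping the rationale and the label as separate fields is what makes downstream steps such as preference optimization or contrastive training over rationales possible, since the justification can be scored or compared independently of the prediction.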