Reversal Curse
The "reversal curse" describes the failure of large language models (LLMs) and other machine learning systems to generalize knowledge bidirectionally; for example, learning "A is B" but failing to infer "B is A." Current research focuses on mitigating this limitation through techniques like reverse training, bidirectional attention mechanisms, and semantic-aware data permutation, often applied within transformer-based architectures. Overcoming the reversal curse is crucial for improving the reasoning capabilities of AI systems and ensuring their reliable application in knowledge representation, question answering, and other knowledge-intensive tasks.
Papers