Reversal Curse
The "reversal curse" describes the failure of large language models (LLMs) and other machine learning systems to generalize knowledge bidirectionally: a model trained on statements of the form "A is B" often fails to infer the reverse, "B is A." Current research focuses on mitigating this limitation through techniques such as reverse training, bidirectional attention mechanisms, and semantic-aware data permutation, typically applied within transformer-based architectures. Overcoming the reversal curse is crucial for improving the reasoning capabilities of AI systems and for their reliable application in knowledge representation, question answering, and other knowledge-intensive tasks.
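The simplest of these mitigations, reverse training, can operate at the data level: each forward fact is paired with its reversed counterpart so the model sees both directions during training. A minimal sketch, with an illustrative template (the exact templates and pipeline vary by paper):

```python
# Data-level reverse training, sketched: for every forward fact "A is B",
# also emit the reversed statement "B is A" as a training example.
# The "{subject} is {object}." template here is illustrative, not taken
# from any specific paper's pipeline.

def augment_with_reversals(facts):
    """facts: list of (subject, description) pairs.

    Returns training strings covering both directions of each fact.
    """
    examples = []
    for subject, description in facts:
        examples.append(f"{subject} is {description}.")   # forward: "A is B"
        examples.append(f"{description} is {subject}.")   # reversed: "B is A"
    return examples

facts = [("Valentina Tereshkova", "the first woman in space")]
for line in augment_with_reversals(facts):
    print(line)
```

Without the reversed copies, a model fine-tuned only on the forward statements tends to answer "Who is Valentina Tereshkova?" correctly while failing "Who was the first woman in space?", which is the asymmetry the augmentation targets.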