Logical Consistency
Logical consistency, the absence of contradictions and paradoxes in reasoning and decision-making, is a critical area of research, particularly for the reliability of large language models (LLMs). Current work focuses on developing metrics that quantify how logically consistent an LLM's outputs are, on techniques such as data augmentation and adversarial training that improve consistency, and on applying these advances to tasks such as ontology classification and multi-attribute learning. Such improvements are crucial for the trustworthiness and robustness of AI systems in domains ranging from legal analysis to automated decision support.
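As a concrete illustration of what a consistency metric can look like, the sketch below scores a model's pairwise preference judgments for transitivity (if a is preferred to b and b to c, then a should be preferred to c). This is only one possible formulation under stated assumptions, not any specific paper's method; the `prefers` callable and the toy judge are hypothetical stand-ins for a real LLM query.

```python
# Minimal sketch of a transitivity-based consistency metric.
# The `prefers` callable is a hypothetical stand-in for querying an LLM judge;
# its name and signature are illustrative assumptions, not an established API.
from itertools import permutations
from typing import Callable, Sequence


def transitivity_consistency(
    items: Sequence[str],
    prefers: Callable[[str, str], bool],
) -> float:
    """Return the fraction of ordered triples (a, b, c) whose judgments
    are transitive: a preferred to b and b to c implies a preferred to c."""
    triples = list(permutations(items, 3))
    if not triples:
        return 1.0
    consistent = 0
    for a, b, c in triples:
        if prefers(a, b) and prefers(b, c):
            consistent += int(prefers(a, c))  # premise holds: check conclusion
        else:
            consistent += 1  # premise not met, so the triple cannot violate transitivity
    return consistent / len(triples)


if __name__ == "__main__":
    # Toy stand-in for an LLM judge: prefer the lexicographically smaller item.
    # A real evaluation would replace this with a prompted call to the model under test.
    mock_judge = lambda x, y: x < y
    score = transitivity_consistency(["apple", "pear", "plum"], mock_judge)
    print(f"transitivity consistency: {score:.2f}")  # 1.00 for this deterministic toy judge
```

A score of 1.0 means no transitivity violations were observed; lower values indicate the model's pairwise judgments contradict one another more often.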
Papers
Twelve papers, dated from December 2021 through October 2024.