Argumentative Explanation
Argumentative explanation develops computational methods for understanding, generating, and evaluating arguments, with the aim of improving the explainability and trustworthiness of AI systems and of human-computer interaction. Current research emphasizes argumentation frameworks, often integrated with probabilistic models or large language models, to represent and reason with arguments, using techniques such as dynamic programming, graph-based methods, and inductive logic programming. This work has significant implications for making AI decision-making more transparent, improving human-computer dialogue, and supporting more effective communication and debate across domains.
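As a concrete illustration of the graph-based argumentation frameworks mentioned above, here is a minimal Python sketch of a Dung-style abstract argumentation framework and the computation of its grounded extension (the skeptically accepted arguments, built up from the unattacked ones). The argument names and attack relation are hypothetical examples for illustration, not taken from any particular paper surveyed here.

```python
# Minimal sketch of a Dung-style abstract argumentation framework.
# Arguments are graph nodes; "attacks" are directed edges. The grounded
# extension is the least fixed point of the characteristic function:
# an argument is accepted once every one of its attackers is itself
# attacked by an already-accepted argument.

def grounded_extension(arguments, attacks):
    """Compute the grounded extension of the framework (arguments, attacks).

    arguments: iterable of hashable argument labels.
    attacks:   set of (attacker, target) pairs.
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted:
                continue
            # a is defended if each attacker b is counter-attacked
            # by some already-accepted argument d.
            if all(any((d, b) in attacks for d in accepted)
                   for b in attackers[a]):
                accepted.add(a)
                changed = True
    return accepted

# Hypothetical example: a attacks b, b attacks c.
# a is unattacked, so it is accepted; a defeats b, which defends c.
if __name__ == "__main__":
    args = {"a", "b", "c"}
    atk = {("a", "b"), ("b", "c")}
    print(grounded_extension(args, atk))  # -> {'a', 'c'}
```

The grounded semantics is the most cautious of the standard argumentation semantics: starting from the unattacked arguments, it iteratively adds every argument whose attackers are all counter-attacked, which makes the acceptance of each argument traceable through the attack graph and hence well suited to explanation.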