Argumentative Explanation
Argumentative explanation focuses on developing computational methods to understand, generate, and evaluate arguments, aiming to improve the explainability and trustworthiness of AI systems and human-computer interaction. Current research emphasizes the use of argumentation frameworks, often integrated with probabilistic models or large language models, to represent and reason with arguments, employing techniques like dynamic programming, graph-based methods, and inductive logic programming. This work has significant implications for enhancing the transparency of AI decision-making, improving human-computer dialogue, and facilitating more effective communication and debate in various domains.