Counterfactual Reasoning
Counterfactual reasoning, the ability to assess hypothetical scenarios by altering past events or facts, is a growing research area focused on enhancing artificial intelligence models, particularly large language models (LLMs). Current work emphasizes building benchmarks and datasets that rigorously evaluate LLMs' counterfactual reasoning, often using techniques such as chain-of-thought prompting and adversarial training to improve performance on diverse tasks, including visual question answering and reinforcement learning. This research has significant implications for AI explainability, robustness, and fairness, as well as for applications in fields such as education, healthcare, and autonomous systems.
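To make the fairness angle concrete, the sketch below shows one common use of counterfactual reasoning in bias evaluation: flip a sensitive attribute in each input while holding everything else fixed, and measure how often the model's decision changes. The classifier and data here are hypothetical stand-ins for illustration only, not the methods of the papers listed below.

```python
def toy_classifier(features):
    # Hypothetical decision rule: approve (1) if income >= 50, else reject (0).
    # It deliberately ignores the sensitive attribute.
    return 1 if features["income"] >= 50 else 0

def counterfactual_flip_rate(classifier, dataset, attribute, values):
    """Fraction of inputs whose prediction changes when `attribute`
    is swapped between the two `values`, all other features held fixed."""
    flips = 0
    for x in dataset:
        factual = classifier(x)
        counterfactual = dict(x)  # copy, then alter only the sensitive attribute
        counterfactual[attribute] = values[1] if x[attribute] == values[0] else values[0]
        if classifier(counterfactual) != factual:
            flips += 1
    return flips / len(dataset)

data = [
    {"income": 60, "gender": "A"},
    {"income": 40, "gender": "B"},
    {"income": 55, "gender": "A"},
]

# A flip rate of 0.0 suggests the decision is invariant to the sensitive
# attribute; a high rate flags a potential counterfactual fairness violation.
print(counterfactual_flip_rate(toy_classifier, data, "gender", ("A", "B")))  # 0.0
```

Running the same check against a classifier that does use the sensitive attribute would yield a nonzero flip rate, which is the signal such audits look for.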
Papers
Counterfactual Reasoning for Bias Evaluation and Detection in a Fairness under Unawareness setting
Giandomenico Cornacchia, Vito Walter Anelli, Fedelucio Narducci, Azzurra Ragone, Eugenio Di Sciascio
Counterfactual Fair Opportunity: Measuring Decision Model Fairness with Counterfactual Reasoning
Giandomenico Cornacchia, Vito Walter Anelli, Fedelucio Narducci, Azzurra Ragone, Eugenio Di Sciascio