Counterfactual Generation

Counterfactual generation creates hypothetical alternative scenarios by minimally modifying existing data points so that a model's prediction changes, thereby supporting model explainability and robustness. Current research emphasizes model-agnostic methods that leverage diffusion models, normalizing flows, and large language models (LLMs) to generate plausible, diverse counterfactuals across data types (text, images, time series, tabular data). This work improves the trustworthiness and fairness of AI systems by revealing how models reach their decisions and where they may be biased, ultimately supporting more reliable and responsible AI applications.
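
As a concrete illustration of the basic idea, the sketch below performs gradient-based counterfactual search on a toy differentiable classifier, trading off a validity term (pushing the prediction toward the target class) against an L1 proximity term (keeping the edit to the original input minimal). The model, data, loss weights, and variable names are illustrative assumptions, not the method of any particular paper listed below.

# Minimal sketch of gradient-based counterfactual search on a toy,
# differentiable classifier. All names, data, and hyperparameters here
# (beta, learning rates, step counts) are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy classifier: logistic regression on two features, class 1 iff x1 + x2 > 0.
X = torch.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).float()
model = torch.nn.Sequential(torch.nn.Linear(2, 1), torch.nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(200):
    opt.zero_grad()
    F.binary_cross_entropy(model(X).squeeze(), y).backward()
    opt.step()

# Original instance, predicted as class 0; we seek a counterfactual for class 1.
x = torch.tensor([-1.0, -1.0])
target = torch.tensor(1.0)

x_cf = x.clone().requires_grad_(True)   # counterfactual candidate, initialized at x
cf_opt = torch.optim.Adam([x_cf], lr=0.05)
beta = 0.5                              # weight of the proximity (L1 sparsity) term

for _ in range(500):
    cf_opt.zero_grad()
    pred = model(x_cf.unsqueeze(0)).squeeze()
    # Validity term pulls the prediction toward the target class;
    # L1 proximity term keeps the change to the original input small and sparse.
    loss = F.binary_cross_entropy(pred, target) + beta * (x_cf - x).abs().sum()
    loss.backward()
    cf_opt.step()

print("original x      :", x.tolist(),
      "-> p(class 1) =", round(model(x.unsqueeze(0)).item(), 3))
print("counterfactual x':", x_cf.detach().tolist(),
      "-> p(class 1) =", round(model(x_cf.unsqueeze(0)).item(), 3))

In practice, the trade-off weight is tuned until the counterfactual actually flips the prediction, and plausibility is typically enforced by the generative components mentioned above (diffusion models, normalizing flows, LLMs) rather than by a raw distance penalty alone.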

Papers