Counterfactual Planning

Counterfactual planning involves generating alternative action sequences to achieve a desired outcome, particularly when an AI system behaves unexpectedly or issues an adverse decision. Current research focuses on improving the accuracy and robustness of these plans, addressing challenges such as extrapolation error in reinforcement learning and the limitations of existing evaluation metrics that do not always align with human preferences. The field is crucial for enhancing the reliability and explainability of AI systems, especially in high-stakes applications, because it shows how decisions could be altered and improves human-AI interaction. Methods under investigation include adapting existing algorithms such as random forests and developing novel approaches for constrained optimization and uncertainty quantification within the planning process.
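
To make the constrained-optimization framing concrete, here is a minimal sketch of counterfactual search for a single adverse decision: find the smallest change to an input that flips a classifier's prediction, trading off the target-class probability against distance from the original point. The toy data, logistic-regression model, and the weighting parameter `lambda_dist` are illustrative assumptions, not the method of any particular paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy binary "decision" data: two features, label 1 when their sum is positive.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

def counterfactual(x, target=1, lambda_dist=0.5, lr=0.1, steps=500):
    """Gradient ascent on log P(target | x_cf) minus a squared-distance
    penalty that keeps the counterfactual close to the original input."""
    x_cf = x.copy()
    w, b = clf.coef_[0], clf.intercept_[0]
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_cf @ w + b)))   # P(class 1 | x_cf)
        grad_logp = (target - p) * w                # d/dx log P(target)
        grad_dist = 2.0 * (x_cf - x)                # d/dx ||x_cf - x||^2
        x_cf = x_cf + lr * (grad_logp - lambda_dist * grad_dist)
        if clf.predict(x_cf.reshape(1, -1))[0] == target:
            break                                   # decision has flipped
    return x_cf

x = X[y == 0][0]                                    # an adversely classified instance
x_cf = counterfactual(x)
print("original:      ", x,    "->", clf.predict(x.reshape(1, -1))[0])
print("counterfactual:", x_cf, "->", clf.predict(x_cf.reshape(1, -1))[0])
```

The distance penalty plays the role of the constraint discussed above: a larger `lambda_dist` yields counterfactuals closer to the original input, at the cost of needing more optimization steps (or failing) to flip the decision.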

Papers