Graphical Normalizing Flow
Graphical Normalizing Flows (GNFs) are a class of deep learning models used for causal inference, leveraging the flexibility of neural networks to represent complex probability distributions while encoding causal relationships as directed acyclic graphs (DAGs): each variable's transformation in the flow is conditioned only on its parents in the graph. Current research focuses on applying GNFs to problems such as offline reinforcement learning, sensitivity analysis under unobserved confounding, and counterfactual analysis for personalized policy interventions. Compared with traditional methods, this approach enables flexible, semi-parametric estimation of causal effects, including individual-level predictions, and captures complex relationships without strong functional-form assumptions, improving the accuracy and broadening the applicability of causal inference across diverse fields.
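
For intuition, the sketch below shows one way a DAG can constrain a flow's conditioner, assuming PyTorch; the `GraphicalAffineFlow` class and its linear, adjacency-masked conditioner are illustrative simplifications rather than a reference GNF implementation. Because each variable's scale and shift depend only on its parents, the Jacobian is triangular under a topological ordering, so the log-determinant is just a sum of log-scales.

```python
# Minimal sketch of a DAG-masked affine flow step (illustrative, not the
# original GNF code). adjacency[i, j] = 1 means x_j is a parent of x_i;
# the diagonal must be zero and the graph acyclic.
import torch
import torch.nn as nn


class GraphicalAffineFlow(nn.Module):
    """One affine flow step whose conditioner is masked by a DAG."""

    def __init__(self, adjacency: torch.Tensor):
        super().__init__()
        d = adjacency.shape[0]
        self.register_buffer("mask", adjacency.float())  # (d, d) parent mask
        self.weight_s = nn.Parameter(torch.zeros(d, d))  # log-scale conditioner
        self.weight_t = nn.Parameter(torch.zeros(d, d))  # shift conditioner
        self.bias_s = nn.Parameter(torch.zeros(d))
        self.bias_t = nn.Parameter(torch.zeros(d))

    def forward(self, x):
        # Linear conditioners masked by the DAG: row i only reads pa(i).
        s = x @ (self.weight_s * self.mask).T + self.bias_s  # log-scales
        t = x @ (self.weight_t * self.mask).T + self.bias_t  # shifts
        z = (x - t) * torch.exp(-s)          # map data toward the base space
        log_det = -s.sum(dim=-1)             # triangular Jacobian => cheap det
        return z, log_det

    def log_prob(self, x):
        # Change of variables with a standard normal base distribution.
        z, log_det = self.forward(x)
        base = -0.5 * (z ** 2 + torch.log(torch.tensor(2 * torch.pi))).sum(dim=-1)
        return base + log_det


# Usage: a 3-variable chain x0 -> x1 -> x2 encoded as an adjacency matrix.
adj = torch.tensor([[0, 0, 0],
                    [1, 0, 0],
                    [0, 1, 0]])
flow = GraphicalAffineFlow(adj)
x = torch.randn(8, 3)
print(flow.log_prob(x).shape)  # torch.Size([8])
```

Stacking several such steps, or replacing the linear conditioners with masked neural networks, recovers the more expressive graphical-flow setups used in practice while preserving the DAG constraint.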