Zero-Shot Causal Learning

Zero-shot causal learning aims to predict the effects of interventions, especially novel ones, without explicit training data for those specific interventions. Current research focuses on leveraging large language models (LLMs), particularly transformer architectures, to infer causal relationships from text or to perform causal inference directly through mechanisms such as self-attention. Because it enables causal reasoning from existing knowledge and limited data, this approach shows promise in fields such as medicine and policy, potentially accelerating scientific discovery and improving decision-making. Generalizing causal knowledge to unseen interventions remains a key challenge and an area of active investigation.
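To make the setting concrete, below is a minimal, hypothetical sketch of one common framing: a model is trained across many *seen* interventions, each described by a feature vector (e.g., a drug's molecular fingerprint or an LLM text embedding), and then predicts effects for a *novel* intervention by conditioning on its features alone. The class and dimensions here are illustrative assumptions, not any specific paper's method.

```python
import torch
import torch.nn as nn

class ZeroShotEffectPredictor(nn.Module):
    """Hypothetical sketch: predict treatment effects for unseen interventions
    by encoding individual covariates and intervention features separately."""

    def __init__(self, indiv_dim: int, interv_dim: int, hidden: int = 64):
        super().__init__()
        self.indiv_enc = nn.Sequential(nn.Linear(indiv_dim, hidden), nn.ReLU())
        self.interv_enc = nn.Sequential(nn.Linear(interv_dim, hidden), nn.ReLU())
        # Head predicts a scalar effect (e.g., a conditional average treatment effect).
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_indiv: torch.Tensor, x_interv: torch.Tensor) -> torch.Tensor:
        # Concatenate the two encodings and map to a predicted effect.
        h = torch.cat([self.indiv_enc(x_indiv), self.interv_enc(x_interv)], dim=-1)
        return self.head(h).squeeze(-1)

# Training would pair individuals with seen interventions and estimated effects;
# at test time the same model scores an intervention it never saw during training.
model = ZeroShotEffectPredictor(indiv_dim=10, interv_dim=32)
x_person = torch.randn(4, 10)    # individual covariates (assumed dimensions)
x_new_drug = torch.randn(4, 32)  # feature vector of an unseen intervention
predicted_effect = model(x_person, x_new_drug)
print(predicted_effect.shape)    # torch.Size([4])
```

The design choice that enables zero-shot generalization is that the intervention enters the model only through its feature encoding, so nothing in the architecture is tied to a fixed set of training-time interventions.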

Papers