Conditional Shift

Conditional shift, a type of distribution shift in machine learning, refers to a discrepancy between training and testing data in the conditional probability of a label given an input, i.e., P_train(Y | X) ≠ P_test(Y | X). Current research focuses on mitigating its negative impact on model performance in areas such as graph learning, time-series forecasting, and image classification, employing techniques like adversarial training, conformal prediction, and causal structure analysis in architectures such as DeepONets and graph neural networks (GNNs). Addressing conditional shift is crucial for improving the reliability and generalizability of machine learning models in real-world applications, where data distributions inevitably differ between training and deployment environments, and it directly affects the robustness and trustworthiness of AI systems across numerous fields.
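
One way to make the definition concrete is a small simulation: keep the input distribution P(X) fixed but change the labeling rule P(Y | X) between training and test data, and a model fit on the training set degrades at test time. The sketch below is purely illustrative; the synthetic Gaussian inputs, the logistic labeling rule, and the use of scikit-learn's LogisticRegression are assumptions for demonstration, not methods from the papers listed here.

```python
# Minimal sketch of conditional shift: P(X) is shared between train and test,
# but the label rule P(Y = 1 | X) differs. All parameters are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, w):
    """Draw inputs from a fixed P(X) and labels from P(Y=1|X) = sigmoid(X @ w)."""
    X = rng.normal(size=(n, 2))
    p = 1.0 / (1.0 + np.exp(-X @ w))
    y = rng.binomial(1, p)
    return X, y

# Same input distribution, different labeling rules -> conditional shift.
X_train, y_train = sample(5000, np.array([2.0, 0.5]))
X_test,  y_test  = sample(5000, np.array([0.5, 2.0]))

clf = LogisticRegression().fit(X_train, y_train)
print("accuracy on the training distribution:", clf.score(X_train, y_train))
print("accuracy under conditional shift:     ", clf.score(X_test, y_test))
```

The accuracy gap between the two printouts is the kind of degradation that the mitigation techniques mentioned above aim to reduce.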

Papers