Distribution Discrepancy

Distribution discrepancy, the difference between the probability distributions of datasets, is a central challenge across machine learning; research in this area aims to quantify and mitigate the impact of data heterogeneity. Current work focuses on methods to measure and leverage these discrepancies, employing the Wasserstein distance, R-divergence, and other probability-discrepancy metrics, often within the context of specific model architectures such as diffusion models and generative adversarial networks. Addressing distribution discrepancy is crucial for improving model robustness, efficiency, and fairness, with applications ranging from active learning and federated learning to anomaly detection and cross-domain adaptation in fields such as computer vision and medical imaging.
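
As a concrete illustration of one such metric, the sketch below computes the one-dimensional Wasserstein (earth mover's) distance between two empirical samples using SciPy. The sample data is synthetic and purely illustrative, standing in for a feature drawn from a "source" and a "target" dataset; a larger value indicates a greater discrepancy between the two distributions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Synthetic 1-D samples from shifted, rescaled distributions,
# standing in for a feature observed in two heterogeneous datasets.
source = rng.normal(loc=0.0, scale=1.0, size=1000)
target = rng.normal(loc=0.5, scale=1.2, size=1000)

# Empirical 1-D Wasserstein distance between the two samples;
# it is 0 for identical distributions and grows with the discrepancy.
print(wasserstein_distance(source, target))
```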

Papers