Distribution Alignment
Distribution alignment in machine learning reduces discrepancies between data distributions drawn from different sources, improving model generalization and robustness under domain shift. Current research emphasizes techniques such as adversarial training, optimal transport, and prototype-based methods, often applied within specific architectures such as diffusion models, GANs, and other neural networks. These advances are crucial for applications including medical image analysis, natural language processing, and computer vision, where data frequently exhibit significant distributional variation. The ultimate goal is to build more reliable and adaptable models that perform well across unseen data scenarios.
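To make the idea of measuring a distributional discrepancy concrete, here is a minimal sketch of one common alignment criterion, the (biased) squared Maximum Mean Discrepancy (MMD) with an RBF kernel. This is an illustrative example, not drawn from any of the papers listed below; the kernel bandwidth `gamma` and the sample shapes are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF kernel matrix between the rows of X and the rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def mmd2(X, Y, gamma=0.5):
    """Biased estimate of squared MMD between samples X and Y.

    Zero when the two empirical distributions coincide; minimizing this
    quantity over learned features is one simple way to align a source
    and a target distribution.
    """
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())
```

In practice, a discrepancy like this would be computed on feature representations rather than raw inputs and added as a penalty to the task loss, so that training pulls the source and target feature distributions together.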
Papers
Distribution Alignment for Fully Test-Time Adaptation with Dynamic Online Data Streams
Ziqiang Wang, Zhixiang Chi, Yanan Wu, Li Gu, Zhi Liu, Konstantinos Plataniotis, Yang Wang
UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening
Siyuan Cheng, Guangyu Shen, Kaiyuan Zhang, Guanhong Tao, Shengwei An, Hanxi Guo, Shiqing Ma, Xiangyu Zhang