Adversarial Transfer
Adversarial transfer learning focuses on improving the robustness and generalizability of machine learning models, particularly in scenarios with limited data or significant domain discrepancies. Current research emphasizes techniques such as adversarial training, domain assimilation, and carefully designed model initialization strategies (e.g., robust linear probing) to mitigate the effects of adversarial attacks and domain shifts, often when transferring from large pre-trained models. This work is crucial for enhancing the reliability and trustworthiness of AI across diverse applications, including medical image analysis and reinforcement learning, where robustness is paramount; the ultimate goal is AI systems that resist manipulation and perform well even with limited training data.
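To make the adversarial-training idea concrete, here is a minimal, self-contained sketch (not drawn from any of the papers below) of training a toy logistic-regression classifier on FGSM-perturbed inputs, i.e., each gradient step is taken on examples perturbed in the direction of the input gradient's sign. All data, parameter names, and hyperparameters (`eps`, `lr`) are illustrative assumptions.

```python
import numpy as np

# Toy data: linearly separable labels from a hypothetical ground-truth weight vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(5)
lr, eps = 0.1, 0.1  # learning rate; FGSM perturbation budget

for _ in range(200):
    # Craft adversarial examples with the fast gradient sign method:
    # for logistic loss, dL/dx = (p - y) * w per sample.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)
    X_adv = X + eps * np.sign(grad_x)
    # Standard gradient step, but computed on the perturbed batch.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

acc = float(((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean())
print(acc)
```

The only change from ordinary training is that the weight gradient is evaluated at `X_adv` rather than `X`, which encourages the learned boundary to stay correct within an `eps`-ball around each training point.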
Papers
Multi-source adversarial transfer learning for ultrasound image segmentation with limited similarity
Yifu Zhang, Hongru Li, Tao Yang, Rui Tao, Zhengyuan Liu, Shimeng Shi, Jiansong Zhang, Ning Ma, Wujin Feng, Zhanhu Zhang, Xinyu Zhang
Multi-source adversarial transfer learning based on similar source domains with local features
Yifu Zhang, Hongru Li, Shimeng Shi, Youqi Li, Jiansong Zhang