Robust Version
Robustness in machine learning models is a crucial area of research focusing on improving the reliability and resilience of models against various forms of uncertainty, including noisy data, adversarial attacks, and environmental variations. Current research emphasizes developing novel algorithms and architectures, such as transformers, to enhance model performance under these challenging conditions, often incorporating techniques like knowledge distillation, data augmentation, and robust optimization. This work is significant because it directly addresses the limitations of existing models, leading to more reliable and trustworthy AI systems across diverse applications, from medical imaging and autonomous navigation to natural language processing and personalized pricing.
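Of the techniques mentioned above, data augmentation is the simplest to illustrate: training on noisy replicas of the inputs encourages a model to stay accurate on corrupted data. The sketch below (the function name and parameters are illustrative, not from any paper listed here) shows Gaussian-noise augmentation with plain NumPy.

```python
import numpy as np

def augment_with_noise(X, sigma=0.1, copies=3, seed=0):
    """Gaussian-noise data augmentation: return the original samples
    plus `copies` noisy replicas of each batch, a common recipe for
    training models that remain accurate under input perturbations."""
    rng = np.random.default_rng(seed)
    noisy = [X + rng.normal(0.0, sigma, X.shape) for _ in range(copies)]
    return np.vstack([X] + noisy)

X = np.zeros((4, 2))            # toy dataset: 4 samples, 2 features
X_aug = augment_with_noise(X, sigma=0.05, copies=3)
print(X_aug.shape)              # (16, 2): 4 originals + 3 noisy copies each
```

In practice the noise scale `sigma` is tuned to match the corruption expected at deployment time; the same idea extends to label noise and adversarial perturbations, at the cost of more elaborate augmentation steps.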
Papers
Decentralised Emergence of Robust and Adaptive Linguistic Conventions in Populations of Autonomous Agents Grounded in Continuous Worlds
Jérôme Botoko Ekila, Jens Nevens, Lara Verheyen, Katrien Beuls, Paul Van Eecke
Robust Tiny Object Detection in Aerial Images amidst Label Noise
Haoran Zhu, Chang Xu, Wen Yang, Ruixiang Zhang, Yan Zhang, Gui-Song Xia