High OOD Robustness
Research on out-of-distribution (OOD) robustness in machine learning aims to develop models that maintain accurate performance even when encountering data that differs significantly from the training distribution. Current work focuses on improving the robustness of convolutional neural networks (CNNs), particularly architectures with large kernels, and on techniques such as adversarial training and data augmentation that mitigate vulnerabilities to noisy or manipulated inputs. This research is crucial for deploying reliable AI systems in real-world settings where unexpected data variations are inevitable, with applications ranging from autonomous driving to medical diagnosis. Addressing OOD robustness is therefore essential for building trustworthy and dependable AI models.
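To make the adversarial-training idea concrete, below is a minimal sketch of FGSM-style adversarial training in PyTorch. The model, hyperparameters (epsilon, learning rate), and the random toy batch are all illustrative assumptions, not taken from any of the papers listed in this section; a real setup would use a full dataset and a stronger architecture.

```python
# Minimal sketch of FGSM adversarial training (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Small stand-in CNN; real work would use a larger (e.g. large-kernel) model."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(3, 16, kernel_size=7, padding=3)  # wide receptive field
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.head(x)

def fgsm_perturb(model, x, y, eps: float = 8 / 255):
    """Build FGSM adversarial examples: x + eps * sign(grad_x loss)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, eps: float = 8 / 255):
    """One optimizer step on adversarially perturbed inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, eps)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyCNN()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(8, 3, 32, 32)          # toy batch of images in [0, 1]
    y = torch.randint(0, 10, (8,))
    print("loss:", adversarial_training_step(model, opt, x, y))
```

Training on perturbed inputs like this is one common way to harden a model against small input corruptions; OOD evaluation would additionally test on held-out distributions (e.g. corrupted or shifted data) rather than only adversarial ones.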
Papers
Ensemble Learning and 3D Pix2Pix for Comprehensive Brain Tumor Analysis in Multimodal MRI
Ramy A. Zeineldin, Franziska Mathis-Ullrich
Seeker: Towards Exception Safety Code Generation with Intermediate Language Agents Framework
Xuanming Zhang, Yuxuan Chen, Yiming Zheng, Zhexin Zhang, Yuan Yuan, Minlie Huang