Robust Version
Robustness in machine learning is a crucial research area focused on improving the reliability and resilience of models against various forms of uncertainty, including noisy data, adversarial attacks, and environmental variation. Current research emphasizes novel algorithms and architectures, such as transformers, that maintain performance under these challenging conditions, often incorporating techniques like knowledge distillation, data augmentation, and robust optimization. This work is significant because it directly addresses the limitations of existing models, leading to more reliable and trustworthy AI systems across diverse applications, from medical imaging and autonomous navigation to natural language processing and personalized pricing.
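As a concrete illustration of two of the techniques named above, data augmentation and robust (adversarial) optimization, the sketch below shows a single PyTorch training step that mixes clean, noise-augmented, and FGSM-perturbed inputs. It is a minimal, illustrative example: the model, hyperparameters, and synthetic data are assumptions for demonstration and are not drawn from any of the papers listed in this section.

```python
# Minimal sketch: one training step combining Gaussian-noise augmentation
# and an FGSM-style adversarial term (illustrative model and hyperparameters).
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def robust_training_step(x, y, noise_std=0.1, epsilon=0.05):
    """One optimization step on clean, noise-augmented, and FGSM-perturbed inputs."""
    # Data augmentation: add small Gaussian noise to the inputs.
    x_noisy = x + noise_std * torch.randn_like(x)

    # FGSM: perturb inputs in the direction of the sign of the loss gradient.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_adv = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss_adv, x_adv)
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Average the losses on clean, augmented, and adversarial batches.
    loss = (F.cross_entropy(model(x), y)
            + F.cross_entropy(model(x_noisy), y)
            + F.cross_entropy(model(x_adv), y)) / 3
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on synthetic data.
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))
print(robust_training_step(x, y))
```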
Papers
Suite-IN: Aggregating Motion Features from Apple Suite for Robust Inertial Navigation
Lan Sun, Songpengcheng Xia, Junyuan Deng, Jiarui Yang, Zengyuan Lai, Qi Wu, Ling Pei
SP-VIO: Robust and Efficient Filter-Based Visual Inertial Odometry with State Transformation Model and Pose-Only Visual Description
Xueyu Du, Chengjun Ji, Lilian Zhang, Xinchan Luo, Huaiyi Zhang, Maosong Wang, Wenqi Wu, Jun Mao
Very Attentive Tacotron: Robust and Unbounded Length Generalization in Autoregressive Transformer-Based Text-to-Speech
Eric Battenberg, RJ Skerry-Ryan, Daisy Stanton, Soroosh Mariooryad, Matt Shannon, Julian Salazar, David Kao
A Dual Adaptive Assignment Approach for Robust Graph-Based Clustering
Yang Xiang, Li Fan, Tulika Saha, Yushan Pan, Haiyang Zhang, Chengtao Ji
Uncertainty Quantification via Hölder Divergence for Multi-View Representation Learning
an Zhang, Ming Li, Chun Li, Zhaoxia Liu, Ye Zhang, Fei Richard Yu