Robust Version
Robustness in machine learning is a crucial research area focused on improving the reliability and resilience of models against various forms of uncertainty, including noisy data, adversarial attacks, and environmental variation. Current research emphasizes novel algorithms and architectures, such as transformers, that sustain model performance under these challenging conditions, often incorporating techniques like knowledge distillation, data augmentation, and robust optimization. This work matters because it directly addresses the limitations of existing models, leading to more reliable and trustworthy AI systems across diverse applications, from medical imaging and autonomous navigation to natural language processing and personalized pricing.
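Of the techniques listed above, data augmentation is the simplest to illustrate: training on perturbed copies of the inputs encourages a model to remain accurate under noise at test time. The sketch below is a minimal, hypothetical example (the function name and parameters are our own, not from any of the papers listed here), using Gaussian noise injection as the augmentation:

```python
import numpy as np

def gaussian_noise_augment(x, sigma=0.1, rng=None):
    """Return a noisy copy of the input batch.

    Injecting small Gaussian perturbations during training is a common
    way to improve robustness to input noise; sigma controls the
    perturbation strength and is a tuning knob, not a fixed standard.
    """
    rng = np.random.default_rng(rng)
    return x + rng.normal(0.0, sigma, size=x.shape)

# Augment a small batch of dummy 8x8 "images" (illustrative data only).
batch = np.zeros((4, 8, 8))
noisy = gaussian_noise_augment(batch, sigma=0.05, rng=0)
```

In practice such augmentation is applied on the fly inside the training loop, so each epoch sees a freshly perturbed copy of the data rather than a fixed noisy dataset.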
Papers
Towards Robust and Interpretable EMG-based Hand Gesture Recognition using Deep Metric Meta Learning
Simon Tam, Shriram Tallam Puranam Raghu, Étienne Buteau, Erik Scheme, Mounir Boukadoum, Alexandre Campeau-Lecours, Benoit Gosselin
TiNO-Edit: Timestep and Noise Optimization for Robust Diffusion-Based Image Editing
Sherry X. Chen, Yaron Vaxman, Elad Ben Baruch, David Asulin, Aviad Moreshet, Kuo-Chin Lien, Misha Sra, Pradeep Sen
Demonstration of Robust and Efficient Quantum Property Learning with Shallow Shadows
Hong-Ye Hu, Andi Gu, Swarnadeep Majumder, Hang Ren, Yipei Zhang, Derek S. Wang, Yi-Zhuang You, Zlatko Minev, Susanne F. Yelin, Alireza Seif
Towards Robust and Efficient Cloud-Edge Elastic Model Adaptation via Selective Entropy Distillation
Yaofo Chen, Shuaicheng Niu, Yaowei Wang, Shoukai Xu, Hengjie Song, Mingkui Tan