Native Robustness
Native robustness in machine learning focuses on developing models that are inherently resistant to input perturbations, such as adversarial attacks and noisy data, rather than relying solely on post-hoc defenses. Current research emphasizes techniques such as ensemble methods, reprogramming of existing models, and modified training procedures (e.g., assigning different learning rates to specific model layers or incorporating regularization) to improve robustness across diverse architectures, including convolutional neural networks, vision transformers, and large language models. This work is crucial for deploying reliable AI systems in safety-critical applications such as healthcare and autonomous driving, where resilience to unexpected inputs is paramount.
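As a toy illustration of the layer-wise learning-rate and regularization ideas mentioned above, the following PyTorch sketch assigns a smaller learning rate to an (assumed pre-trained) backbone layer than to a freshly initialized head and adds weight decay. The model, layer indices, and hyperparameter values are hypothetical placeholders, not taken from any of the papers listed below.

```python
# Minimal sketch: per-layer learning rates plus weight-decay regularization,
# one of the training-procedure modifications described above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # backbone feature extractor
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),                 # task-specific head
)

# Give the backbone a smaller learning rate than the head, and apply
# weight decay (L2 regularization) to all parameter groups.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},  # backbone: small lr
        {"params": model[3].parameters(), "lr": 1e-2},  # head: larger lr
    ],
    lr=1e-3,           # default lr for any group without an explicit value
    momentum=0.9,
    weight_decay=5e-4,
)
```

Keeping the backbone's learning rate small preserves pre-trained features while the head adapts quickly, a common heuristic for retaining robustness when fine-tuning on noisy or shifted data.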
Papers
Weighted Sampled Split Learning (WSSL): Balancing Privacy, Robustness, and Fairness in Distributed Learning Environments
Manish Osti, Aashray Thakuri, Basheer Qolomany, Aos Mulahuwaish
Robustness of Algorithms for Causal Structure Learning to Hyperparameter Choice
Damian Machlanski, Spyridon Samothrakis, Paul Clarke
How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels
Linping Qu, Shenghui Song, Chi-Ying Tsui, Yuyi Mao
Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors
Marek Kubis, Paweł Skórzewski, Marcin Sowański, Tomasz Ziętkiewicz
Hierarchical Randomized Smoothing
Yan Scholten, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann
ELM Ridge Regression Boosting
M. Andrecut
Fine-Tuning Pre-Trained Models for Robustness Under Noisy Labels
Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun
Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles
Xing Shen, Hengguan Huang, Brennan Nichyporuk, Tal Arbel
Algorithmic Robustness
David Jensen, Brian LaMacchia, Ufuk Topcu, Pamela Wisniewski
Non-ergodicity in reinforcement learning: robustness via ergodicity transformations
Dominik Baumann, Erfaun Noorani, James Price, Ole Peters, Colm Connaughton, Thomas B. Schön
Understanding Contrastive Learning via Distributionally Robust Optimization
Junkang Wu, Jiawei Chen, Jiancan Wu, Wentao Shi, Xiang Wang, Xiangnan He
Spoofing Attack Detection in the Physical Layer with Robustness to User Movement
Daniel Romero, Tien Ngoc Ha, Peter Gerstoft
Quantifying Assistive Robustness Via the Natural-Adversarial Frontier
Jerry Zhi-Yang He, Zackory Erickson, Daniel S. Brown, Anca D. Dragan
Matching the Neuronal Representations of V1 is Necessary to Improve Robustness in CNNs with V1-like Front-ends
Ruxandra Barbulescu, Tiago Marques, Arlindo L. Oliveira