Randomized Smoothing
Randomized smoothing is a technique for enhancing the robustness of machine learning models, particularly deep neural networks, against adversarial attacks: small, carefully crafted input perturbations designed to mislead the model. Current research focuses on improving the efficiency and effectiveness of smoothing methods, exploring alternative noise distributions and model architectures (including vision transformers and diffusion models), and extending the approach to diverse data types such as time series and medical images. This line of work is significant because it provides provable robustness guarantees, a crucial step towards deploying reliable machine learning systems in safety-critical applications, and it continues to advance the theoretical understanding of model robustness and certification.
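As background for the papers below, the standard construction (Cohen et al., 2019) replaces a base classifier f with a smoothed classifier g that returns the class f is most likely to predict when Gaussian noise is added to the input; if the top-class probability can be lower-bounded by p_A > 1/2, the prediction of g is provably stable within an L2 radius of sigma * Phi^{-1}(p_A). The sketch below is a minimal illustration of this idea via Monte Carlo sampling, not code from the listed papers: the `base_classifier` callable, the sample counts, and the single-stage estimation (the original procedure uses separate selection and estimation samples) are simplifying assumptions.

```python
import numpy as np
from scipy.stats import binomtest, norm

def smoothed_predict_and_certify(base_classifier, x, sigma=0.25,
                                 n_samples=1000, alpha=0.001, rng=None):
    """Monte Carlo approximation of a Gaussian-smoothed classifier.

    `base_classifier` is assumed to be a callable mapping a single input
    array to an integer class label; all names here are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Vote over Gaussian-perturbed copies of the input.
    votes = {}
    for _ in range(n_samples):
        label = base_classifier(x + rng.normal(scale=sigma, size=x.shape))
        votes[label] = votes.get(label, 0) + 1

    top_label, top_count = max(votes.items(), key=lambda kv: kv[1])

    # One-sided lower confidence bound on the top-class probability, taken
    # from a two-sided Clopper-Pearson interval at level 1 - 2*alpha.
    p_lower = binomtest(top_count, n_samples).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low

    if p_lower <= 0.5:
        return None, 0.0  # abstain: the vote is not confident enough to certify

    # Certified L2 radius R = sigma * Phi^{-1}(p_lower): the smoothed prediction
    # provably cannot change under any perturbation with L2 norm below R.
    radius = sigma * norm.ppf(p_lower)
    return top_label, radius
```

For intuition, with sigma = 0.25 and a lower bound p_lower = 0.99, the certified radius is roughly 0.25 * 2.33, or about 0.58 in L2 norm; larger sigma enlarges the certifiable radius but degrades the base classifier's accuracy on noisy inputs, which is the central trade-off the papers below address.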
Papers
General Lipschitz: Certified Robustness Against Resolvable Semantic Transformations via Transformation-Dependent Randomized Smoothing
Dmitrii Korzh, Mikhail Pautov, Olga Tsymboi, Ivan Oseledets
Towards a Practical Defense against Adversarial Attacks on Deep Learning-based Malware Detectors via Randomized Smoothing
Daniel Gibert, Giulio Zizzo, Quan Le