Corruption-Robust Algorithms

Corruption-robust algorithms aim to keep machine learning models performing well even when training or input data is corrupted by noise or adversarial attacks. Current research focuses on techniques such as adaptive normalization, uncertainty weighting, and hierarchical contrastive learning, applied across model classes ranging from generalized linear models to reinforcement learning agents. These advances matter for safety-critical applications such as robotics and autonomous driving, where robustness to real-world imperfections is paramount. Developing provably robust algorithms with strong theoretical guarantees remains a key direction of ongoing investigation.
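To make the idea concrete, a minimal sketch of one classical corruption-robust primitive, the trimmed mean, which tolerates a bounded fraction of adversarially corrupted samples by discarding extreme values before averaging (the function name, parameters, and data here are illustrative, not drawn from any specific paper in this area):

```python
import numpy as np

def trimmed_mean(x, eps):
    """One-dimensional trimmed mean: drop the eps-fraction smallest and
    largest values, then average the rest. Robust to roughly an
    eps-fraction of arbitrarily corrupted samples."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.ceil(eps * len(x)))
    return x[k:len(x) - k].mean() if k > 0 else x.mean()

# Illustrative data: 900 clean samples around a true mean of 5,
# plus 100 grossly corrupted values (10% corruption rate).
rng = np.random.default_rng(0)
clean = rng.normal(loc=5.0, scale=1.0, size=900)
corrupted = np.concatenate([clean, np.full(100, 1e6)])

naive = corrupted.mean()                    # pulled far from 5 by outliers
robust = trimmed_mean(corrupted, eps=0.1)   # stays close to the true mean
```

The naive mean is dragged arbitrarily far by even a small corrupted fraction, while the trimmed estimate stays near the true value; provably robust methods in the literature extend this style of guarantee to higher dimensions and to full learning pipelines.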

Papers