Model Resilience

Model resilience focuses on developing machine learning systems that maintain performance and integrity despite challenges such as noisy data, adversarial attacks, or faulty components in distributed systems. Current research emphasizes techniques such as adversarial training, data augmentation (including implicit methods), and novel algorithms for federated unlearning and distributed optimization that incorporate redundancy or competition to improve robustness. These advances are crucial for deploying reliable, secure models in real-world applications, particularly in resource-constrained or unreliable environments, and for ensuring the trustworthiness of AI systems.
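As a concrete illustration of one technique named above, the following is a minimal sketch of adversarial training using the Fast Gradient Sign Method (FGSM), applied to a simple logistic-regression model in NumPy. The model, data, and hyperparameters (`eps`, `lr`, `epochs`) are illustrative assumptions, not drawn from any specific paper in this collection.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Craft an FGSM adversarial example for a logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with respect
    to the input x is (sigmoid(w.x + b) - y) * w; FGSM steps the input
    by eps in the sign of that gradient to maximally increase the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

def adversarial_train(X, Y, eps=0.1, lr=0.1, epochs=200, seed=0):
    """Train on a mix of clean and FGSM-perturbed examples (illustrative)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(epochs):
        # Re-craft adversarial examples against the *current* parameters,
        # so the attack tracks the model as it trains.
        X_adv = np.array([fgsm_perturb(x, y, w, b, eps)
                          for x, y in zip(X, Y)])
        # Gradient step on the combined clean + adversarial batch.
        X_mix = np.vstack([X, X_adv])
        Y_mix = np.concatenate([Y, Y])
        p = 1.0 / (1.0 + np.exp(-(X_mix @ w + b)))
        w -= lr * (p - Y_mix) @ X_mix / len(Y_mix)
        b -= lr * np.mean(p - Y_mix)
    return w, b
```

In practice the same inner loop (attack the current model, then descend on the mixed batch) is what adversarial training amounts to for deep networks as well, with the input gradient computed by automatic differentiation rather than in closed form.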

Papers