Model Resilience
Model resilience focuses on developing machine learning systems that maintain performance and integrity despite challenges such as noisy data, adversarial attacks, or faulty components in distributed systems. Current research emphasizes adversarial training, data augmentation (including implicit methods), and novel algorithms for federated unlearning and distributed optimization that build in redundancy or competition to improve robustness. These advances are crucial for deploying reliable and secure machine learning models in real-world applications, particularly in resource-constrained or unreliable environments, and for ensuring the trustworthiness of AI systems.
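To make one of the techniques above concrete, here is a minimal sketch of adversarial training using the Fast Gradient Sign Method (FGSM) in PyTorch. The function name, the 50/50 clean/adversarial loss mix, and the epsilon value are illustrative assumptions, not prescriptions from any particular paper.

```python
import torch
import torch.nn as nn

def fgsm_adversarial_training_step(model, x, y, optimizer, epsilon=0.03):
    """One training step mixing clean and FGSM-perturbed examples.

    Hypothetical helper: epsilon and the 50/50 loss weighting are
    illustrative defaults, not values from a specific publication.
    """
    loss_fn = nn.CrossEntropyLoss()

    # Craft adversarial examples: perturb each input in the direction
    # (sign of the input gradient) that maximally increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Train on a mix of clean and adversarial inputs so the model
    # retains clean accuracy while gaining robustness to perturbations.
    optimizer.zero_grad()
    mixed_loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    mixed_loss.backward()
    optimizer.step()
    return mixed_loss.item()
```

In practice, a step like this is called once per minibatch inside the usual training loop; stronger variants replace the single FGSM step with multi-step attacks such as PGD.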