Adversarial Corruption
Adversarial corruption studies the impact of malicious data manipulation on machine learning models, aiming to develop robust algorithms that maintain accuracy despite such attacks. Current research focuses on developing corruption-tolerant algorithms for various models, including gradient descent, maximum likelihood estimation, and contextual bandits, often employing techniques like robust regression, mirror descent, and weighted averaging to mitigate the effects of corrupted data. This field is crucial for enhancing the reliability and security of machine learning systems across diverse applications, from healthcare and finance to autonomous systems, where data integrity is paramount.
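As a concrete illustration of the corruption-tolerant estimation these papers study, the sketch below implements a trimmed-mean estimator: a simple, well-known robust statistic that discards the extreme tails so that an eps-fraction of adversarially corrupted samples cannot skew the estimate. This is a minimal illustrative example, not the method of any particular paper listed here; the function name `trimmed_mean` and the 5% corruption setting are assumptions for the demo.

```python
import numpy as np

def trimmed_mean(samples, eps):
    """Estimate the mean while tolerating an eps-fraction of
    adversarially corrupted points, by discarding the smallest
    and largest eps-fractions of the sorted sample."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = int(np.ceil(eps * len(x)))
    if 2 * k >= len(x):
        raise ValueError("corruption fraction too large for sample size")
    return x[k:len(x) - k].mean()

# 95 clean samples around a true mean of 0, plus 5 adversarial
# outliers (a 5% corruption rate).
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=95)
corrupted = np.concatenate([clean, np.full(5, 1e6)])

print(np.mean(corrupted))             # badly skewed by the outliers
print(trimmed_mean(corrupted, 0.05))  # stays close to the true mean
```

The naive mean is dragged tens of thousands of units away from zero by just five corrupted points, while the trimmed mean remains near the true value; more sophisticated approaches in this literature (robust regression, mirror descent, weighted averaging) pursue the same goal under weaker assumptions on the corruption.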