Gradient Obfuscation
Gradient obfuscation covers techniques that hinder the recovery of sensitive information from the gradients used in machine learning, particularly in federated learning and adversarial defense. Current research spans both offensive methods (attacks that exploit or circumvent obfuscated gradients) and defensive methods (e.g., modified model architectures or novel optimization algorithms) aimed at improving privacy and robustness to adversarial examples. The area is central to privacy-preserving machine learning and to securing deployed models, with applications ranging from healthcare to finance.
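To make the defensive side concrete, below is a minimal sketch (not taken from any specific paper in this area) of how a federated-learning client might obfuscate its gradients before sharing them with a server. It assumes PyTorch and a toy linear model, and combines two commonly used ingredients: clipping plus additive Gaussian noise (as in differentially private SGD) and top-k sparsification (gradient pruning). All function and parameter names here are illustrative.

```python
import torch
import torch.nn as nn

def obfuscate_gradients(model, clip_norm=1.0, noise_std=0.01, keep_ratio=0.1):
    """Clip, add noise to, and sparsify each parameter's gradient in place."""
    for param in model.parameters():
        if param.grad is None:
            continue
        grad = param.grad
        # 1. Clip the per-parameter gradient norm so the noise scale is meaningful.
        norm = grad.norm()
        if norm > clip_norm:
            grad.mul_(clip_norm / norm)
        # 2. Add Gaussian noise to mask information about individual examples.
        grad.add_(torch.randn_like(grad) * noise_std)
        # 3. Keep only the largest-magnitude entries (top-k sparsification).
        k = max(1, int(keep_ratio * grad.numel()))
        threshold = grad.abs().flatten().topk(k).values.min()
        grad.mul_((grad.abs() >= threshold).float())

# Toy client step: compute gradients on local data, obfuscate, then "share".
model = nn.Linear(20, 2)
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
obfuscate_gradients(model)
shared_update = {name: p.grad.clone() for name, p in model.named_parameters()}
```

The intuition is that noise and sparsification degrade the signal a gradient-inversion attack needs to reconstruct training data, at some cost in update fidelity; the clipping norm, noise scale, and sparsity ratio control that privacy-utility trade-off.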