Gradient Obfuscation

Gradient obfuscation encompasses techniques designed to hinder the recovery of sensitive information from gradients used in machine learning, particularly in federated learning and adversarial-defense settings. Current research covers both attacks that exploit or circumvent obfuscated gradients and defenses, such as modified model architectures or novel optimization algorithms, that aim to improve privacy and robustness against adversarial examples. This area is important for advancing privacy-preserving machine learning and for securing deployed models, with applications ranging from healthcare to finance.
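
A common defensive instantiation of this idea is to clip and perturb gradients on the client before they are shared, as in differentially private or noise-based obfuscation schemes. The sketch below is a minimal illustration of that pattern, assuming PyTorch; the function name obfuscate_gradients and the clip_norm / noise_std parameters are illustrative choices, not an API from any specific paper or library.

```python
import torch


def obfuscate_gradients(gradients, clip_norm=1.0, noise_std=0.05):
    """Clip each per-layer gradient to a maximum L2 norm and add Gaussian noise.

    Clipping bounds how much any single example can influence the shared
    update; the added noise masks residual information before the gradients
    leave the client. Parameter names and defaults are illustrative.
    """
    obfuscated = []
    for g in gradients:
        norm = torch.norm(g).item()
        scale = min(1.0, clip_norm / (norm + 1e-12))  # rescale only if norm exceeds clip_norm
        g_clipped = g * scale
        g_noisy = g_clipped + noise_std * torch.randn_like(g_clipped)
        obfuscated.append(g_noisy)
    return obfuscated


if __name__ == "__main__":
    # Toy example: pretend these are per-layer gradients computed on a client.
    grads = [torch.randn(4, 4), torch.randn(4)]
    shared = obfuscate_gradients(grads, clip_norm=1.0, noise_std=0.05)
    for g in shared:
        print(tuple(g.shape), float(torch.norm(g)))
```

Stronger noise and tighter clipping generally make gradient-inversion attacks harder but slow convergence, so the two hyperparameters trade privacy against utility.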

Papers