Perturbation Defense
Perturbation defense encompasses techniques that enhance the robustness of machine learning models against attacks such as adversarial examples, gradient inversion, and website fingerprinting. Current research focuses on efficient, low-overhead perturbation methods, including random noise addition, gradient pruning, and adaptive gain control, that strike a favorable trade-off between model accuracy and resilience to attack. These advances are crucial for securing applications that rely on machine learning, particularly in sensitive areas such as privacy-preserving collaborative learning and copyright protection of deep-learning-based applications.
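As a rough illustration of two of the ideas mentioned above, the sketch below perturbs a gradient before it leaves a client in a collaborative-learning setting by pruning small-magnitude entries and adding Gaussian noise, which limits what a gradient inversion attack can reconstruct. The function name `perturb_gradient`, its parameters, and the default values are illustrative assumptions, not taken from any specific paper in this collection.

```python
import numpy as np

def perturb_gradient(grad, noise_std=0.01, prune_ratio=0.9, rng=None):
    """Perturb a gradient before sharing it, combining two common defenses:
    magnitude-based gradient pruning and random Gaussian noise addition.
    All parameter values here are illustrative, not from a specific paper."""
    rng = np.random.default_rng() if rng is None else rng
    flat = grad.ravel().copy()

    # Gradient pruning: zero out the smallest-magnitude entries,
    # keeping only the top (1 - prune_ratio) fraction of coordinates.
    k = int(len(flat) * prune_ratio)
    if 0 < k < len(flat):
        smallest = np.argpartition(np.abs(flat), k)[:k]
        flat[smallest] = 0.0

    # Random noise addition: add zero-mean Gaussian noise to what remains.
    flat += rng.normal(0.0, noise_std, size=flat.shape)
    return flat.reshape(grad.shape)

# Example: a client perturbs its local gradient before sending it to the
# aggregation server, trading some accuracy for resistance to inversion.
grad = np.random.default_rng(0).normal(size=(4, 8))
shared = perturb_gradient(grad, noise_std=0.05, prune_ratio=0.8)
```

The prune ratio and noise scale control the accuracy-versus-robustness trade-off: heavier pruning and larger noise leak less information but degrade the aggregated model more.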
Papers
October 31, 2024
March 19, 2024
January 30, 2024
December 15, 2023
April 20, 2023
February 27, 2023
April 8, 2022