Perturbation Defense

Perturbation defense encompasses techniques that harden machine learning systems against attacks such as adversarial examples, gradient inversion, and website fingerprinting. Current research focuses on efficient, low-overhead perturbation methods, including random noise addition, gradient pruning, and adaptive gain control, that strike a favorable trade-off between model accuracy and attack resilience. These advances are crucial for securing applications that rely on machine learning, particularly in sensitive areas such as privacy-preserving collaborative learning and copyright protection for deep learning-based applications.
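
As a rough illustration of how two of these ingredients combine, the sketch below applies gradient pruning followed by Gaussian noise addition to a gradient before it is shared in a collaborative-learning setting, the kind of perturbation used against gradient inversion attacks. The function name `defend_gradient` and the parameter values are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def defend_gradient(grad, noise_std=1e-3, prune_fraction=0.7, rng=None):
    """Perturb a single gradient tensor before sharing it with other parties.

    Illustrative sketch: prune_fraction and noise_std are hypothetical
    knobs trading model accuracy against resistance to inversion.
    """
    rng = np.random.default_rng() if rng is None else rng
    flat = grad.ravel().copy()

    # Gradient pruning: zero out the smallest-magnitude entries so the
    # shared update carries less information about individual inputs.
    k = int(flat.size * prune_fraction)
    if k > 0:
        cutoff = np.partition(np.abs(flat), k - 1)[k - 1]
        flat[np.abs(flat) <= cutoff] = 0.0

    # Random noise addition: Gaussian perturbation on the surviving entries.
    flat += rng.normal(0.0, noise_std, size=flat.shape)
    return flat.reshape(grad.shape)

# Example: defend a synthetic layer gradient before uploading it.
layer_grad = np.random.randn(256, 128)
shared_grad = defend_gradient(layer_grad, noise_std=1e-3, prune_fraction=0.7)
```

Tuning the pruning fraction and noise scale is where the accuracy-versus-resilience trade-off mentioned above plays out: more aggressive perturbation leaks less but degrades the aggregated update.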

Papers