Ensemble Defense
Ensemble defense methods aim to improve the robustness and security of machine learning models by combining predictions from multiple individual models. Current research focuses on enhancing the diversity and effectiveness of these ensembles, exploring techniques like median-based weighting, novel aggregation strategies (e.g., two-round voting), and methods to reduce the transferability of adversarial attacks between ensemble members. These advancements are crucial for mitigating vulnerabilities to adversarial examples and data poisoning, thereby increasing the reliability and trustworthiness of machine learning systems in various security-sensitive applications.
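To make the aggregation strategies mentioned above concrete, here is a minimal sketch of two common ensemble-defense aggregators: element-wise median aggregation of per-model class probabilities (the idea behind median-based weighting) and hard-label majority voting. The array shapes, function names, and the toy probabilities are illustrative assumptions, not taken from any specific paper in this collection.

```python
import numpy as np

def median_aggregate(prob_stack):
    """Aggregate per-model class probabilities (n_models, n_classes)
    by taking the element-wise median across ensemble members, then
    renormalizing. The median is less sensitive than the mean to a
    single member whose output an adversarial input has skewed."""
    med = np.median(prob_stack, axis=0)  # shape: (n_classes,)
    return med / med.sum()

def majority_vote(prob_stack):
    """Hard-label majority vote: each member votes for its argmax
    class; the class with the most votes wins."""
    votes = np.argmax(prob_stack, axis=1)  # shape: (n_models,)
    return int(np.bincount(votes, minlength=prob_stack.shape[1]).argmax())

# Three hypothetical members classify a 3-class input. Member 2's
# output is skewed (e.g., by a perturbation that transfers to it),
# but both aggregators still favor class 0.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.6, 0.3, 0.1],
    [0.1, 0.1, 0.8],  # outlier member
])
```

Reducing transferability between members matters precisely because these aggregators only help when an attack does not fool a majority (or the median) of the ensemble at once.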