Backdoor Effect
Backdoor attacks exploit vulnerabilities in machine learning models by surreptitiously embedding malicious behavior that is triggered only by specific inputs. Current research focuses on developing robust defenses against these attacks, particularly within diffusion models, contrastive learning frameworks, and vision transformers, employing techniques such as unlearning, neural distribution tightening, and structural pruning to mitigate the backdoor effect. This is a critical area of study given the increasing reliance on machine learning in security-sensitive applications, where effective defenses are essential for maintaining model integrity and trustworthiness.
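As a rough illustration of one defense family mentioned above, the sketch below shows activation-guided structural pruning in the spirit of fine-pruning: channels that stay near-silent on clean data are removed, since backdoor triggers often rely on such rarely used channels. The model, target layer, and `prune_ratio` are hypothetical placeholders for illustration, not details drawn from any particular paper.

```python
# Minimal sketch of an activation-based pruning defense (fine-pruning style),
# assuming a PyTorch CNN and a small clean validation loader. All names and
# thresholds here are illustrative assumptions, not a specific paper's method.
import torch
import torch.nn as nn


def prune_dormant_channels(model: nn.Module, layer: nn.Conv2d,
                           clean_loader, prune_ratio: float = 0.2):
    """Zero out conv channels that remain near-silent on clean inputs.

    Backdoor triggers often excite channels that clean data rarely uses,
    so masking the least-active channels can weaken the backdoor effect
    while leaving clean accuracy largely intact.
    """
    activations = []

    def hook(_module, _inp, out):
        # Mean absolute activation per output channel over batch and spatial dims.
        activations.append(out.detach().abs().mean(dim=(0, 2, 3)))

    handle = layer.register_forward_hook(hook)
    model.eval()
    with torch.no_grad():
        for x, _ in clean_loader:
            model(x)
    handle.remove()

    mean_act = torch.stack(activations).mean(dim=0)
    n_prune = int(prune_ratio * mean_act.numel())
    prune_idx = torch.argsort(mean_act)[:n_prune]

    # Zeroing the corresponding filters (and biases) disables those channels.
    with torch.no_grad():
        layer.weight[prune_idx] = 0.0
        if layer.bias is not None:
            layer.bias[prune_idx] = 0.0
    return prune_idx
```

In practice such pruning is usually followed by a brief fine-tuning pass on clean data, and the pruning ratio is chosen by monitoring clean accuracy against the attack success rate.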