Natural Backdoor
Natural backdoors are a critical vulnerability in machine learning models: unintended backdoor behaviors that emerge either through data poisoning or through biases the model acquires during standard training. Current research focuses on identifying and characterizing these vulnerabilities, documenting their presence across model architectures and data modalities (image, video, and multi-modal data), and developing robust detection and mitigation techniques. Because such backdoors undermine reliable model behavior, understanding and addressing them is essential for the trustworthiness and security of deployed machine learning systems across diverse applications.
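One common family of detection techniques in this literature reverse-engineers a candidate trigger for each output class and flags classes whose recovered trigger is anomalously small. The sketch below illustrates that idea in the style of Neural Cleanse-style trigger reversal; it is a minimal, illustrative example, not the method of any particular paper here. The tiny CNN, the random stand-in images, the hyperparameters, and the 0.25x-median outlier rule are all assumptions made for the sake of a self-contained demo.

```python
# Minimal sketch of trigger-reversal backdoor detection (Neural Cleanse-style).
# All model/shape/threshold choices are illustrative assumptions, not from the source.
import torch
import torch.nn as nn
import torch.nn.functional as F


def reverse_trigger(model, images, target_class, steps=200, lam=0.01, lr=0.1):
    """Optimize a mask + pattern that flips `images` to `target_class`.

    An unusually small mask norm for one class is evidence of a
    (possibly natural) backdoor toward that class.
    """
    _, c, h, w = images.shape
    mask_logit = torch.zeros(1, 1, h, w, requires_grad=True)
    pattern = torch.zeros(1, c, h, w, requires_grad=True)
    opt = torch.optim.Adam([mask_logit, pattern], lr=lr)
    target = torch.full((images.size(0),), target_class, dtype=torch.long)

    for _ in range(steps):
        mask = torch.sigmoid(mask_logit)                    # keep mask in [0, 1]
        patch = (torch.tanh(pattern) + 1) / 2               # pattern in [0, 1]
        stamped = (1 - mask) * images + mask * patch        # stamp trigger on inputs
        # Classification loss drives stamped inputs to the target class;
        # the L1 penalty keeps the recovered trigger as small as possible.
        loss = F.cross_entropy(model(stamped), target) + lam * mask.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(mask_logit).detach(), ((torch.tanh(pattern) + 1) / 2).detach()


if __name__ == "__main__":
    # Toy stand-ins: a tiny CNN and random "clean" images (assumptions).
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10))
    images = torch.rand(16, 3, 32, 32)

    norms = []
    for cls in range(10):
        mask, _ = reverse_trigger(model, images, cls, steps=50)
        norms.append(mask.abs().sum().item())

    # Outlier rule (assumed): a class whose reversed trigger is far smaller
    # than the median is flagged as a suspected backdoor target.
    med = sorted(norms)[len(norms) // 2]
    suspects = [c for c, n in enumerate(norms) if n < 0.25 * med]
    print("per-class trigger norms:", [round(n, 1) for n in norms])
    print("suspected backdoor targets:", suspects)
```

On a clean toy model like the one above, no class should stand out; on a backdoored model, the reversed trigger for the target class tends to be markedly smaller than the rest, which is what the outlier check exploits.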