Poison Detection
Poisoning attacks, in which malicious actors contaminate training data to compromise machine learning models, pose a significant threat to AI security. Current research focuses on robust detection and mitigation techniques, including outlier detection, hyperparameter optimization for unlearning poisoned data, and proactive approaches that exploit the inherent characteristics of poisoned models to improve detection. These efforts are crucial for ensuring the reliability and trustworthiness of machine learning systems across applications ranging from image recognition to federated learning, and they are driving advances in model robustness and security.
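To make the outlier-detection idea concrete, below is a minimal sketch of one common flavor of it: flagging training samples whose feature representations sit far from their class centroid. The function `flag_suspect_samples`, the Mahalanobis-distance criterion, and the 95th-percentile threshold are illustrative assumptions, not a method from any particular paper; in practice the features would come from a trained model's penultimate layer.

```python
# Minimal sketch of outlier-based poison detection (assumed setup, not a
# specific published method): mark samples whose per-class Mahalanobis
# distance in feature space is unusually large.
import numpy as np

def flag_suspect_samples(features: np.ndarray, labels: np.ndarray,
                         quantile: float = 0.95) -> np.ndarray:
    """Return a boolean mask over samples; True means the sample's distance
    to its class centroid exceeds the given within-class quantile."""
    suspect = np.zeros(len(labels), dtype=bool)
    for cls in np.unique(labels):
        idx = np.where(labels == cls)[0]
        cls_feats = features[idx]
        mean = cls_feats.mean(axis=0)
        # Regularize the covariance so inversion stays stable for small classes.
        cov = np.cov(cls_feats, rowvar=False) + 1e-6 * np.eye(cls_feats.shape[1])
        inv_cov = np.linalg.inv(cov)
        diffs = cls_feats - mean
        # Per-sample Mahalanobis distance: sqrt(d^T * Sigma^{-1} * d).
        dists = np.sqrt(np.einsum("ij,jk,ik->i", diffs, inv_cov, diffs))
        suspect[idx] = dists > np.quantile(dists, quantile)
    return suspect

# Usage with synthetic stand-in features: 200 samples, 64-dim embeddings, 2 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 64))
labs = rng.integers(0, 2, size=200)
mask = flag_suspect_samples(feats, labs)
print(f"Flagged {mask.sum()} of {len(mask)} samples as potential poisons.")
```

Flagged samples would then be inspected, down-weighted, or removed before retraining; more elaborate defenses replace the per-class Gaussian assumption with clustering or spectral methods.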