Malicious Training
Malicious training is a significant threat to machine learning: an adversary manipulates the training data to compromise the integrity and performance of the resulting model. Current research focuses on robust defenses against several attack vectors, chiefly data poisoning (injecting malicious samples into the training set) and adversarial examples (subtly perturbed inputs presented at test time). Proposed defenses include temporal analysis of data timestamps, influence graphs that trace a model's behavior back to suspicious training samples, and data augmentation to improve robustness. Understanding and mitigating these attacks is crucial for the reliability and security of machine learning systems across applications ranging from malware detection to critical infrastructure.
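To make the data-poisoning threat concrete, the following is a minimal, self-contained sketch (not taken from any of the surveyed papers): a toy nearest-centroid classifier on synthetic one-dimensional data, and a poisoning step that injects out-of-distribution points mislabeled as class 1, dragging that class's centroid into the other class's region. All names and the specific attack (outlier injection) are illustrative assumptions.

```python
import random

random.seed(0)

def make_data(n):
    # Synthetic 1-D data: class 0 clustered near -1, class 1 near +1.
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = (2 * y - 1) + random.gauss(0, 0.3)
        data.append((x, y))
    return data

def train_centroids(data):
    # "Training" = computing the mean feature value per class.
    sums, counts = [0.0, 0.0], [0, 0]
    for x, y in data:
        sums[y] += x
        counts[y] += 1
    return [sums[c] / max(counts[c], 1) for c in (0, 1)]

def accuracy(centroids, data):
    # Classify each point by its nearest centroid.
    correct = sum(
        1 for x, y in data
        if min((abs(x - c), i) for i, c in enumerate(centroids))[1] == y
    )
    return correct / len(data)

def poison(data, n_bad):
    # Poisoning attack: inject far-out points mislabeled as class 1,
    # pulling the class-1 centroid toward (and past) class 0's region.
    return data + [(-10.0, 1)] * n_bad

train, test = make_data(500), make_data(200)
clean_acc = accuracy(train_centroids(train), test)
poisoned_acc = accuracy(train_centroids(poison(train, 50)), test)
print(f"clean accuracy: {clean_acc:.2f}, poisoned accuracy: {poisoned_acc:.2f}")
```

Even though only ~10% of the training set is malicious, the injected outliers shift the class-1 centroid so far that the decision boundary collapses into class 0's region and test accuracy drops sharply; the defenses mentioned above (timestamp analysis, influence tracing, robust aggregation) aim to detect or neutralize exactly this kind of disproportionate sample influence.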