Label Flipping
Label flipping, the deliberate alteration of training-data labels, is studied both as a poisoning attack and as a beneficial data pre-processing step. Research focuses on the vulnerability of machine learning models, including deep neural networks and ensemble methods such as AdaBoost and Random Forests, to label-flipping attacks across diverse data types such as audio, EEG signals, and tabular data. This work matters because it exposes security vulnerabilities in machine learning systems and also shows that label flipping can improve fairness by correcting inconsistent labels in the training data.
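The attack side of the technique can be sketched very simply: an adversary inverts a chosen fraction of the training labels before the model is fit. The following is a minimal, hedged illustration; the function name `flip_labels` and its parameters are illustrative, not drawn from any particular paper.

```python
import numpy as np

def flip_labels(y, flip_fraction=0.1, rng=None):
    """Randomly flip a fraction of binary (0/1) labels.

    A minimal sketch of a random label-flipping attack: select
    `flip_fraction` of the labels uniformly at random and invert
    them. Targeted variants instead flip labels chosen to maximize
    damage to a specific model; this sketch shows only the random case.
    """
    rng = np.random.default_rng(rng)
    y_flipped = y.copy()
    n_flip = int(len(y) * flip_fraction)
    # Pick distinct indices to poison, then invert those labels.
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_flipped[idx] = 1 - y_flipped[idx]
    return y_flipped

y = np.zeros(100, dtype=int)
y_poisoned = flip_labels(y, flip_fraction=0.2, rng=0)
print(y_poisoned.sum())  # 20 of the 100 labels were inverted
```

The same mechanism, applied with a label-cleaning criterion rather than at random, underlies its use as a fairness-oriented pre-processing step: inconsistent or biased labels are identified and flipped before training.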