Label Attacks
Label attacks target machine learning models by manipulating their training data or query inputs, with the goal of causing misclassifications or revealing sensitive information about the training data. Current research focuses on making these attacks effective under restricted access: black-box settings where model internals are hidden, and hard-label settings where only the predicted label is observable. Common techniques include adversarial perturbations, data poisoning (e.g., label flipping), and similarity-based inference. These attacks have been demonstrated against a range of architectures, from image classifiers and vision-language models to graph neural networks, underscoring the need for robust defenses that preserve both the security and the privacy of machine learning systems in real-world applications.
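As a concrete illustration of the poisoning side, the sketch below implements random label flipping against a training set. It is a minimal sketch assuming integer class labels in a NumPy array; the `flip_fraction` value and the toy label vector are illustrative, not taken from any particular paper.

```python
import numpy as np

def flip_labels(y, num_classes, flip_fraction=0.1, rng=None):
    """Return a poisoned copy of `y` in which a random fraction of
    labels is reassigned to a different, randomly chosen class."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    n_flip = int(flip_fraction * len(y))
    # Pick distinct training indices to corrupt.
    for i in rng.choice(len(y), size=n_flip, replace=False):
        wrong_classes = [c for c in range(num_classes) if c != y_poisoned[i]]
        y_poisoned[i] = rng.choice(wrong_classes)
    return y_poisoned

# Toy example: corrupt 25% of an 8-sample label vector.
y = np.array([0, 1, 2, 1, 0, 2, 1, 0])
print(flip_labels(y, num_classes=3, flip_fraction=0.25))
```

The label-only inference side can be sketched in the spirit of noise-robustness membership attacks: the adversary queries the deployed model only for predicted labels on perturbed copies of a candidate point and treats unusually stable classification as evidence that the point was in the training set. Here `model_predict`, `noise_std`, and the decision `threshold` are all hypothetical placeholders for the attacker's black-box access and tuning.

```python
import numpy as np

def label_stability(model_predict, x, y_true, n_queries=20,
                    noise_std=0.05, rng=None):
    """Fraction of noisy copies of `x` that the target model still
    assigns the label `y_true`, using only hard-label queries."""
    rng = rng or np.random.default_rng(0)
    hits = sum(model_predict(x + rng.normal(0.0, noise_std, size=x.shape)) == y_true
               for _ in range(n_queries))
    return hits / n_queries

def infer_membership(model_predict, x, y_true, threshold=0.9, **kwargs):
    # Members tend to be classified more robustly under perturbation,
    # so high stability is taken as a (noisy) signal of membership.
    return label_stability(model_predict, x, y_true, **kwargs) >= threshold

# Toy demo: a fixed linear rule stands in for the black-box target.
w = np.array([1.0, -1.0])
model_predict = lambda x: int(x @ w > 0)
print(infer_membership(model_predict, np.array([0.8, -0.5]), y_true=1))
```

In practice the stability threshold would be calibrated, for example on shadow models, but the sketch captures why exposing only predicted labels does not by itself prevent training-data leakage.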