Clean Label
Clean-label attacks are a form of data poisoning against machine learning models in which an adversary subtly perturbs training samples while leaving their labels unchanged, so the poisoned data still appears correctly labeled and evades manual inspection and label-based filtering. Current research focuses on making such attacks increasingly stealthy across domains including image classification, natural language processing, and graph neural networks, often using generative adversarial networks or genetic algorithms to craft realistic poisoned samples. These attacks expose weaknesses in model robustness and training pipelines, underscoring the need for stronger defenses and more rigorous evaluation methodologies to keep machine learning systems reliable and secure in real-world applications.
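To make the core idea concrete, below is a minimal sketch of one well-known clean-label strategy, feature-collision poisoning (in the spirit of "Poison Frogs!" targeted attacks): a base sample keeps its original, correct label, but is perturbed so its internal feature representation collides with a target sample from another class. It assumes a PyTorch feature extractor and inputs scaled to [0, 1]; the function name, hyperparameters, and optimization loop are illustrative assumptions, not a specific published implementation.

```python
import torch

def craft_clean_label_poison(feature_extractor, base_x, target_x,
                             epsilon=0.1, beta=0.25, steps=100, lr=0.01):
    """Perturb `base_x` (which keeps its original, correct label) so that its
    feature representation collides with that of `target_x`, while the
    pixel-space change stays inside an epsilon ball so the sample still looks
    like the base class to a human labeler."""
    poison = base_x.clone().detach().requires_grad_(True)
    with torch.no_grad():
        target_feat = feature_extractor(target_x)

    for _ in range(steps):
        feat = feature_extractor(poison)
        # Feature-space collision term plus a pixel-space proximity penalty.
        loss = torch.norm(feat - target_feat) ** 2 \
               + beta * torch.norm(poison - base_x) ** 2
        loss.backward()
        with torch.no_grad():
            poison -= lr * poison.grad
            # Keep the perturbation small so the label remains visually "clean",
            # and keep pixel values in the assumed [0, 1] range.
            poison.clamp_(base_x - epsilon, base_x + epsilon)
            poison.clamp_(0.0, 1.0)
        poison.grad.zero_()
    return poison.detach()
```

The key design point is that the poison's label never changes: stealth comes from the small input perturbation and the feature-space collision, which is what makes these attacks hard to catch with label auditing alone.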