Untargeted Adversarial Attack
Untargeted adversarial attacks aim to degrade the overall performance of machine learning models rather than force a specific misprediction, reducing global accuracy or robustness across inputs. Current research explores these attacks on a range of model types, including deep neural networks, knowledge graph embeddings, and graph convolutional networks, using techniques such as rule-based perturbations, gradient-based methods, and object-attentional strategies to generate adversarial examples. This research is crucial for evaluating the security and reliability of machine learning systems in diverse applications, from network intrusion detection to robotic systems, as it exposes vulnerabilities and informs the development of more robust defenses.
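To make the gradient-based approach concrete, the sketch below applies an FGSM-style untargeted perturbation to a toy logistic-regression model: the input is nudged in the direction that increases the loss on its true label, without steering the prediction toward any particular target. The model weights and data here are illustrative, not drawn from any specific paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_untargeted(x, y, w, b, eps):
    """Perturb x to increase the loss on its true label y (untargeted)."""
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y) * w              # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)  # step in the loss-increasing direction

def cross_entropy(x, y, w, b):
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Illustrative weights and input (assumed, for demonstration only)
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0

x_adv = fgsm_untargeted(x, y, w, b, eps=0.5)
print(cross_entropy(x, y, w, b), cross_entropy(x_adv, y, w, b))
```

Because the perturbation follows the sign of the input gradient, the loss on the true label is strictly higher at `x_adv` than at `x`; in a full attack this single step would typically be repeated iteratively under a norm constraint.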