Untargeted Attack
Untargeted attacks aim to degrade a machine learning model's overall performance rather than force a specific mispredicted outcome, which makes them a broad threat across applications. Current research develops and analyzes these attacks against diverse model architectures, including deep reinforcement learning agents, large language models, and hypergraph neural networks, most often via gradient-based perturbations or data poisoning. Understanding the vulnerabilities these attacks expose is crucial for improving the robustness and security of AI systems in domains ranging from cybersecurity and autonomous driving to biometric authentication and federated learning. Developing effective defenses against untargeted attacks remains a key area of ongoing investigation.
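To make the gradient-based flavor concrete, an untargeted attack in the style of the fast gradient sign method (FGSM) perturbs an input in the direction that increases the model's loss on the true label, with no particular target class in mind. The sketch below applies this idea to a toy logistic-regression model; the weights `W`, bias `B`, and step size `eps` are illustrative assumptions, not values from the source.

```python
import math

# Toy logistic-regression "victim" model with assumed, fixed parameters.
W = [1.5, -2.0]   # illustrative weights
B = 0.25          # illustrative bias

def predict(x):
    """Probability that x belongs to class 1."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1.0 / (1.0 + math.exp(-z))

def loss(x, y):
    """Cross-entropy loss for true label y in {0, 1}."""
    p = predict(x)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm_untargeted(x, y, eps):
    """Untargeted FGSM step: move each feature by eps in the sign of
    the loss gradient, locally maximizing the loss on the true label.
    For logistic regression, dL/dx_i = (p - y) * W_i."""
    p = predict(x)
    grad = [(p - y) * w for w in W]
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x, y = [2.0, 0.5], 1                    # a point classified correctly
x_adv = fgsm_untargeted(x, y, eps=0.8)  # adversarially perturbed copy
print(predict(x), predict(x_adv))       # confidence on the true label drops
```

The attack only needs the gradient's sign, which is what makes it cheap; a targeted variant would instead step so as to *decrease* the loss on an attacker-chosen label.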