Imperceptible Adversarial Attacks

Imperceptible adversarial attacks subtly alter input data—images, audio, 3D point clouds, or time series—so that a machine learning model misclassifies it while the change remains undetectable to a human observer. Current research focuses on developing efficient algorithms to craft these attacks, often leveraging gradient-based methods, invertible neural networks, or multi-objective optimization, and on evaluating their effectiveness across model architectures, including CNNs, RNNs, and graph neural networks. This field is crucial because such attacks can compromise the reliability of AI systems in safety-critical applications like medical diagnosis, autonomous driving, and security, underscoring the need for robust and resilient models.
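A minimal sketch of the gradient-based idea, using the Fast Gradient Sign Method (FGSM) on a toy logistic-regression "model" so the input gradient can be written analytically; the weights, input, and epsilon here are illustrative assumptions, not values from any specific paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps=0.03):
    """FGSM sketch: perturb x by at most eps per feature in the
    direction that increases the loss, keeping the change small
    enough to be (roughly) imperceptible."""
    # Forward pass: p = sigmoid(w.x + b), binary cross-entropy loss.
    p = sigmoid(np.dot(w, x) + b)
    # Gradient of the loss w.r.t. the INPUT (not the weights): dL/dx = (p - y) * w
    grad_x = (p - y) * w
    # One signed step of size eps, then clip back to the valid data range.
    x_adv = x + eps * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)

# Toy "image": 8 features in [0, 1], with assumed random weights.
rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=8)
w = rng.normal(size=8)
x_adv = fgsm_attack(x, y=1.0, w=w, b=0.1, eps=0.03)
# The perturbation is bounded: no feature moves by more than eps.
print(np.max(np.abs(x_adv - x)))
```

In practice the same signed-gradient step is applied to the input of a deep network via automatic differentiation, often iteratively (e.g. PGD), with the epsilon bound serving as the imperceptibility constraint.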

Papers