Box Attack
Research on box attacks, which spans white-box, black-box, and the more challenging "no-box" settings, investigates the vulnerability of deep neural networks (DNNs) to adversarial perturbations. Current work focuses on developing efficient attack algorithms, such as variants of projected gradient descent (PGD), and on exploring novel defense mechanisms such as vector quantization. These studies are crucial for understanding and mitigating the risks that adversarial examples pose to the reliability and security of DNNs across diverse applications, including image classification, human action recognition, and 3D point cloud analysis. The emergence of effective no-box attacks, which require no knowledge of the target model at all, underscores the need for robust and generalizable defenses.
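As a concrete illustration of the attack family named above, here is a minimal sketch of an L-infinity PGD attack in PyTorch. The model interface, epsilon, step size, and iteration count are illustrative assumptions, not settings drawn from any particular paper.

```python
# Minimal L-infinity PGD sketch (illustrative; hyperparameters are assumptions).
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iteratively ascend the loss via the gradient sign, projecting
    back into the eps-ball around the clean input after each step."""
    model.eval()
    x_adv = x.clone().detach()
    # Random start inside the eps-ball (a common PGD variant).
    x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
    x_adv = torch.clamp(x_adv, 0.0, 1.0)

    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            # Step along the gradient sign, then project onto the eps-ball.
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```

The random start and sign-of-gradient step are the standard PGD choices; the many variants in the literature mostly differ in the step rule, the projection norm, or the loss being maximized.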