Optical Adversarial Attacks
Optical adversarial attacks exploit vulnerabilities in computer vision systems by subtly manipulating the light that reaches a camera sensor, causing misclassification or inaccurate depth estimation. Current research focuses on increasingly stealthy attacks built from modulated LEDs, strategically placed lenses, and even naturally occurring phenomena such as shadows, targeting applications like autonomous driving and facial recognition. These attacks expose critical security risks in deploying deep learning models in real-world settings, prompting investigations into robust countermeasures at both the hardware and software levels. The ultimate goal is to make computer vision systems resilient to these sophisticated, often imperceptible manipulations.
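As a rough illustration of how such an attack can be optimized, the sketch below simulates a modulated-LED-style perturbation: horizontal brightness bands like those a flickering light source can imprint on a rolling-shutter sensor, with the light signal's amplitude, frequency, and phase tuned by gradient descent to flip a classifier's prediction. This is a minimal digital simulation under assumed conditions; the model choice (torchvision's `resnet18`), the stripe parameterization, and all hyperparameters are illustrative placeholders, not drawn from any specific published attack.

```python
# Illustrative sketch only: optimizes a simulated rolling-shutter light
# pattern to cause misclassification. Model, input, and hyperparameters
# are placeholder assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)  # stand-in for a captured camera frame
with torch.no_grad():
    orig_label = model(image).argmax(dim=1)  # prediction on the clean frame

# Learnable parameters of the simulated light signal.
amp = torch.tensor(0.05, requires_grad=True)
freq = torch.tensor(30.0, requires_grad=True)
phase = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([amp, freq, phase], lr=0.01)

rows = torch.linspace(0.0, 1.0, image.shape[2]).view(1, 1, -1, 1)

def apply_stripes(img):
    # Multiplicative brightness modulation along image rows, clamped so the
    # perturbation stays physically plausible: projected light can brighten
    # or mildly attenuate a scene, not rewrite pixels arbitrarily.
    stripes = 1.0 + amp.clamp(-0.2, 0.2) * torch.sin(2 * torch.pi * freq * rows + phase)
    return (img * stripes).clamp(0.0, 1.0)

for _ in range(200):
    opt.zero_grad()
    logits = model(apply_stripes(image))
    loss = -F.cross_entropy(logits, orig_label)  # push away from the clean label
    loss.backward()
    opt.step()

with torch.no_grad():
    new_label = model(apply_stripes(image)).argmax(dim=1)
print("clean:", orig_label.item(), "attacked:", new_label.item())
```

In a physical deployment the stripes would come from an actual light source rather than pixel arithmetic, so a real attack would also have to model camera exposure and the sensor's rolling-shutter timing; the sketch optimizes only the digital simulation of that effect.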