Adversarial Disturbance
Adversarial disturbances are subtle manipulations of input data designed to degrade the performance of machine learning models, particularly in safety-critical applications such as autonomous vehicles and power grids. These attacks range from data poisoning during model training to real-time manipulation of sensor inputs. Current research focuses on developing robust models and algorithms, such as those employing adversarial training and game-theoretic formulations, to mitigate them. Understanding and defending against these attacks is essential for ensuring the reliability and security of increasingly data-driven systems, from transportation to energy infrastructure.
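As a concrete illustration of adversarial training, below is a minimal PyTorch sketch that trains on a mix of clean and FGSM-perturbed inputs. The toy model, loss weighting, and epsilon value are illustrative assumptions, not drawn from any particular paper in this area.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.03):
    """Craft an FGSM adversarial example: take one step of size epsilon
    in the direction of the sign of the input gradient of the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, epsilon=0.03):
    """One training step on a 50/50 mix of clean and adversarial inputs
    (the mixing weight is an illustrative choice)."""
    x_adv = fgsm_perturb(model, loss_fn, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    loss = 0.5 * loss_fn(model(x), y) + 0.5 * loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a small classifier on random data, purely for demonstration.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 10)          # batch of 16 feature vectors
y = torch.randint(0, 2, (16,))   # binary labels
print(adversarial_training_step(model, loss_fn, optimizer, x, y))
```

The key design point is that the model sees worst-case (gradient-sign) perturbations during training, so it learns decision boundaries that are less sensitive to small input disturbances; game-theoretic approaches generalize this by treating the attacker and defender as players in a min-max game.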