Injection Attack
Injection attacks exploit vulnerabilities by introducing malicious data into a system, and they affect domains ranging from in-vehicle networks and federated learning to large language models (LLMs). Current research focuses on robust detection methods, using techniques such as unsupervised learning (e.g., autoencoder-based anomaly detection), one-class classification, and analysis of LLM internal activations to identify both known and zero-day attacks. These efforts are crucial for the security and reliability of increasingly interconnected systems, with implications for the automotive, transportation, and AI-driven application sectors.
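A common thread in the detection techniques above is unsupervised anomaly detection: fit a model to benign traffic only, then flag inputs the model reconstructs poorly. The following is a minimal sketch of that idea using a linear autoencoder (principal components obtained via SVD) over synthetic feature vectors; the features, dimensions, and percentile threshold are illustrative assumptions and are not drawn from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for benign traffic features (e.g., timing and payload
# statistics). Feature 1 is correlated with feature 0, mimicking structure
# that normal traffic exhibits and injected messages tend to violate.
benign = rng.normal(0.0, 1.0, size=(500, 8))
benign[:, 1] = 0.5 * benign[:, 0]

# "Train" a linear autoencoder: keep the top-k principal components.
mean = benign.mean(axis=0)
_, _, vt = np.linalg.svd(benign - mean, full_matrices=False)
k = 4
components = vt[:k]  # shared encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ components.T      # encode into k dimensions
    x_hat = z @ components + mean      # decode back to feature space
    return np.linalg.norm(x - x_hat, axis=1)

# Threshold at the 99th percentile of benign reconstruction error,
# so roughly 1% of benign traffic is (wrongly) flagged.
threshold = np.quantile(reconstruction_error(benign), 0.99)

# Injected messages break the learned correlation and land far from
# the benign subspace, so their reconstruction error is large.
injected = rng.normal(0.0, 1.0, size=(10, 8))
injected[:, 1] = -3.0 * injected[:, 0] + 5.0
flags = reconstruction_error(injected) > threshold
```

In practice the threshold would be calibrated on held-out benign data, and a nonlinear autoencoder (or a one-class classifier) would replace the linear projection; the detection logic of thresholding a reconstruction or novelty score is the same.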
Papers
Continual Adversarial Reinforcement Learning (CARL) of False Data Injection detection: forgetting and explainability
Pooja Aslami, Kejun Chen, Timothy M. Hansen, Malik Hassanaly
MDHP-Net: Detecting Injection Attacks on In-vehicle Network using Multi-Dimensional Hawkes Process and Temporal Model
Qi Liu, Yanchen Liu, Ruifeng Li, Chenhong Cao, Yufeng Li, Xingyu Li, Peng Wang, Runhan Feng