Paper ID: 2411.10367 • Published Nov 15, 2024
Continual Adversarial Reinforcement Learning (CARL) of False Data Injection detection: forgetting and explainability
Pooja Aslami, Kejun Chen, Timothy M. Hansen, Malik Hassanaly
False data injection attacks (FDIAs) on smart inverters are a growing concern
linked to increased renewable energy production. While data-based FDIA
detection methods are actively being developed, we show that they remain
vulnerable to impactful and stealthy adversarial examples that can be crafted
using Reinforcement Learning (RL). We propose to include such adversarial
examples in the training procedure of data-based detection methods via a
continual adversarial RL (CARL) approach. This way, the deficiencies of
data-based detection methods can be pinpointed, thereby offering
explainability during their incremental
improvement. We show that a continual learning implementation is subject to
catastrophic forgetting, and additionally show that forgetting can be addressed
by employing a joint training strategy on all generated FDIA scenarios.
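To make the RL-based attack generation described in the abstract concrete, below is a minimal sketch of crafting an impactful yet stealthy perturbation with a policy-gradient (REINFORCE) attacker. The measurement dimension, the stand-in detector, and the reward shaping are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch (not the authors' code): an RL attacker that learns an
# additive perturbation which is impactful but evades a fixed detector.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # number of inverter measurements (assumed)

def detector_score(x):
    # Stand-in detector: flags samples whose deviation from the
    # nominal operating point exceeds a threshold. 1 = flagged as FDIA.
    return float(np.linalg.norm(x - 1.0) > 0.8)

def reward(x_clean, x_adv):
    impact = np.abs(x_adv - x_clean).sum()    # attack impact (assumed metric)
    stealth = 1.0 - detector_score(x_adv)     # 1 only if undetected
    return impact * stealth                   # impactful AND stealthy

# Diagonal Gaussian policy over perturbations, trained with REINFORCE.
mu, log_std = np.zeros(DIM), np.full(DIM, -1.0)
lr = 0.05
x_clean = np.ones(DIM)  # nominal measurement vector (assumed)

for step in range(500):
    std = np.exp(log_std)
    eps = rng.normal(size=DIM)
    delta = mu + std * eps                    # sampled action (perturbation)
    r = reward(x_clean, x_clean + delta)
    # REINFORCE updates for a diagonal Gaussian policy:
    # grad log pi wrt mu is eps/std; wrt log_std it is eps**2 - 1.
    mu += lr * r * eps / std
    log_std += lr * r * (eps**2 - 1.0)

print("learned perturbation:", np.round(mu, 3))
print("undetected:", detector_score(x_clean + mu) == 0.0)
```

Because the reward is zeroed whenever the detector fires, the policy settles near the detection boundary, i.e., the most impactful perturbation that still goes unnoticed.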
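Likewise, a minimal sketch of the training side: sequentially fine-tuning a detector on each newly generated FDIA scenario (the continual setting, prone to catastrophic forgetting) versus joint training on all scenarios generated so far. The scenario generator and the scikit-learn SGD detector are assumptions for illustration only.

```python
# Minimal sketch (assumed setup, not the paper's implementation) contrasting
# continual fine-tuning with joint training on all generated FDIA scenarios.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def make_scenario(attack_dir):
    # Hypothetical FDIA scenario: attacks shift measurements along a
    # scenario-specific direction (e.g., one found by the RL attacker).
    normal = rng.normal(0.0, 1.0, size=(200, 4))
    attack = rng.normal(0.0, 1.0, size=(200, 4)) + 3.0 * attack_dir
    X = np.vstack([normal, attack])
    y = np.array([0] * 200 + [1] * 200)
    return X, y

scenarios = [make_scenario(d) for d in np.eye(4)[:3]]

# Continual learning: sequential fine-tuning on each new scenario,
# which tends to overwrite what was learned on earlier ones.
cont = SGDClassifier(loss="log_loss", random_state=0)
for X, y in scenarios:
    for _ in range(20):
        cont.partial_fit(X, y, classes=[0, 1])

# Joint training: refit on the union of all generated scenarios.
X_all = np.vstack([X for X, _ in scenarios])
y_all = np.concatenate([y for _, y in scenarios])
joint = SGDClassifier(loss="log_loss", random_state=0, max_iter=200).fit(X_all, y_all)

for i, (X, y) in enumerate(scenarios):
    print(f"scenario {i}: continual={cont.score(X, y):.2f}  joint={joint.score(X, y):.2f}")
```

In this toy setting, the sequentially trained detector scores well only on the last scenario it saw, while the jointly trained one retains accuracy across all of them, mirroring the forgetting behavior and its remedy reported in the abstract.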