Paper ID: 2206.10259
R2-AD2: Detecting Anomalies by Analysing the Raw Gradient
Jan-Philipp Schulze, Philip Sperl, Ana Răduţoiu, Carla Sagebiel, Konstantin Böttinger
Neural networks follow a gradient-based learning scheme, adapting their mapping parameters by back-propagating the output loss. Samples unlike those seen during training cause a different gradient distribution. Based on this intuition, we design a novel semi-supervised anomaly detection method called R2-AD2. By analysing the temporal distribution of the gradient over multiple training steps, we reliably detect point anomalies in strict semi-supervised settings. Instead of domain-dependent features, we input the raw gradient caused by the sample under test into an end-to-end recurrent neural network architecture. R2-AD2 works in a purely data-driven way and is thus readily applicable in a variety of important anomaly detection use cases.
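The abstract outlines a pipeline: extract the raw gradient a test sample induces at several training steps, then feed that temporal gradient sequence into a recurrent scorer. Below is a minimal PyTorch sketch of this idea, assuming an autoencoder backbone, checkpoints saved at successive training steps, and a GRU-based scorer. All module names, the probed layer, and architectural sizes are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

# Hypothetical backbone whose gradients we probe; the architecture is an
# assumption for illustration, not the paper's exact model.
class TinyAE(nn.Module):
    def __init__(self, dim=20, hidden=8):
        super().__init__()
        self.enc = nn.Linear(dim, hidden)
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

def raw_gradient(model, x):
    """Flattened raw gradient of the reconstruction loss w.r.t. one layer."""
    model.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    return model.dec.weight.grad.detach().flatten()

def gradient_sequence(checkpoints, x):
    """Gradient of the same sample at successive training steps.

    checkpoints: list of state_dicts saved while training the backbone.
    Returns a tensor of shape (T, grad_dim), the temporal gradient trace.
    """
    model = TinyAE()
    grads = []
    for state in checkpoints:
        model.load_state_dict(state)
        grads.append(raw_gradient(model, x))
    return torch.stack(grads)

# Recurrent scorer: maps the temporal gradient sequence to an anomaly score.
class GradScorer(nn.Module):
    def __init__(self, grad_dim, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(grad_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad_seq):                # (B, T, grad_dim)
        _, h = self.rnn(grad_seq)               # final hidden state
        return torch.sigmoid(self.out(h[-1]))   # anomaly score in (0, 1)
```

In this sketch the scorer would be trained on gradient sequences of known-normal samples (plus any labelled anomalies available in the semi-supervised setting); at test time, a sample's gradient trace is scored without hand-crafted, domain-dependent features.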
Submitted: Jun 21, 2022