Paper ID: 2304.06575
Adversarial Examples from Dimensional Invariance
Benjamin L. Badger
Adversarial examples have been found for a variety of deep as well as shallow learning models, and have at various times been suggested to be either fixable model-specific bugs, an inherent feature of the dataset, or both. We present theoretical and empirical results showing that adversarial examples are approximate discontinuities resulting from models that specify approximately bijective maps $f: \Bbb R^n \to \Bbb R^m$, $n \neq m$, over their inputs, discontinuities that follow from the topological invariance of dimension: $\Bbb R^n$ and $\Bbb R^m$ are not homeomorphic for $n \neq m$, so such a map cannot be both bijective and continuous everywhere.
Submitted: Apr 13, 2023
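
The abstract's claim can be probed empirically: near an approximate discontinuity, a tiny gradient-aligned input perturbation should change the output far more than a random perturbation of the same size. The following is a minimal illustrative sketch, not the paper's code; the MLP `f`, the dimensions `n, m`, and the step size `eps` are arbitrary choices, and the perturbation is a fast-gradient-sign-style step in the sense of Goodfellow et al. (2015).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative dimensions only: f maps R^n -> R^m with n != m,
# the setting the abstract argues forces approximate discontinuities.
n, m = 64, 10

# A small randomly initialized MLP standing in for a trained model
# (a hypothetical stand-in, not the paper's architecture).
f = nn.Sequential(
    nn.Linear(n, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, m),
)

x = torch.randn(1, n, requires_grad=True)
y = f(x)

# Fast-gradient-sign-style step: perturb x in the direction that
# most increases the output norm to first order.
y.norm().backward()
eps = 1e-2
x_adv = (x + eps * x.grad.sign()).detach()

# Baseline: a random sign perturbation of the same infinity-norm size.
x_rand = (x + eps * torch.randn_like(x).sign()).detach()

with torch.no_grad():
    dy_adv = (f(x_adv) - y).norm().item()
    dy_rand = (f(x_rand) - y).norm().item()

print(f"gradient-aligned output change: {dy_adv:.4f}")
print(f"random-direction output change: {dy_rand:.4f}")
```

For a map that were continuous with moderate local variation, shrinking `eps` would shrink both output changes proportionally; a persistently large gap between the gradient-aligned and random responses at small `eps` is the empirical signature of the steep, approximately discontinuous regions that the abstract identifies with adversarial examples.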