Paper ID: 2309.15669
On the Computational Entanglement of Distant Features in Adversarial Machine Learning
YenLung Lai, Xingbo Dong, Zhe Jin
In this research, we introduce 'computational entanglement', a phenomenon in overparameterized neural networks where the model exploits noise patterns in ways conceptually linked to the effects of length contraction. More specifically, our findings demonstrate that overparameterized feedforward linear networks can easily achieve zero loss by fitting random noise, even on test samples never encountered during training. This phenomenon is accompanied by length contraction, in which trained and test samples converge to the same point within a spacetime diagram. Unlike most models that rely on supervised learning, our method operates unsupervised, without the need for labels or gradient-based optimization. Additionally, we show a novel application of computational entanglement: transforming adversarial examples, highly non-robust inputs imperceptible to human observers, into outputs that are recognizable and robust. This challenges conventional views on non-robust features in adversarial example generation, providing new insights into the underlying mechanisms. Our results emphasize the importance of computational entanglement for enhancing model robustness and understanding neural networks in adversarial contexts.
Submitted: Sep 27, 2023
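To make the abstract's claim about overparameterized linear models fitting random noise concrete, the following is a minimal sketch (not the paper's actual method): with more parameters than samples, a linear model can interpolate pure noise to numerically zero training loss via a closed-form least-squares solution, i.e. without real labels or gradient-based optimization. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumption, not the authors' construction):
# an overparameterized linear model interpolates random noise exactly.
rng = np.random.default_rng(0)

n_samples, n_features = 50, 500                  # overparameterized: features >> samples
X = rng.normal(size=(n_samples, n_features))     # random "inputs" (noise)
y = rng.normal(size=n_samples)                   # random "targets" (noise)

# Minimum-norm interpolating weights via the pseudoinverse (no gradient descent).
w = np.linalg.pinv(X) @ y

train_mse = np.mean((X @ w - y) ** 2)
print(f"training MSE on random noise: {train_mse:.2e}")  # effectively zero
```

Because X has full row rank with probability one when n_features > n_samples, the pseudoinverse yields an exact interpolant, which is why the training loss reaches (numerical) zero without any iterative optimization.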