Bit Flip

Bit-flip attacks (BFAs) exploit vulnerabilities in deployed deep neural networks (DNNs) by subtly altering their parameters, flipping individual bits within stored model weights or even within executable code. Current research focuses on developing more effective BFAs, including attacks tailored to specific DNN architectures such as graph neural networks, and on attack strategies that require minimal bit modifications, down to a single flipped bit. These attacks highlight significant security risks in deploying DNNs, prompting investigations into robust defense mechanisms and secure compilation techniques for improved resilience against such manipulations.
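To illustrate why flipping a single weight bit can be so damaging, the minimal sketch below (not taken from any of the listed papers; the function name and chosen bit position are illustrative assumptions) flips one exponent bit in the IEEE-754 representation of a 32-bit floating-point weight and shows how the value explodes:

```python
import numpy as np

def flip_bit(weight: float, bit_index: int) -> np.float32:
    """Flip one bit in the IEEE-754 bit pattern of a 32-bit float weight.

    bit_index 31 is the sign bit, 30-23 the exponent, 22-0 the mantissa.
    """
    as_int = np.float32(weight).view(np.uint32)          # reinterpret the float's raw bits
    flipped = as_int ^ np.uint32(1 << bit_index)          # toggle the chosen bit
    return flipped.view(np.float32)                       # reinterpret back as a float

original = np.float32(0.05)           # a typical small DNN weight
corrupted = flip_bit(original, 30)    # flip the most significant exponent bit
print(original, "->", corrupted)      # 0.05 -> roughly 1.7e+37: one flip can dominate a layer's output
```

Because a single exponent-bit flip can change a weight by dozens of orders of magnitude, attackers search for the few bits whose corruption most degrades accuracy, while quantized (e.g., int8) models bound the damage per flip and are one motivation for the defense work noted above.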

Papers