DNN Fingerprint Removal Attack

DNN fingerprint removal attacks target methods used to verify ownership of deep learning models, aiming to eliminate the characteristic "fingerprints" that identify a model's origin. Current research focuses both on developing robust fingerprinting techniques (e.g., leveraging neuron-functionality analysis, sample correlations, or universal adversarial perturbations) and on designing effective removal attacks (e.g., min-max bilevel optimization, which fine-tunes a stolen model to evade fingerprint verification while preserving its accuracy). This area is crucial for protecting intellectual property in the rapidly expanding field of deep learning and has significant implications for the security and trustworthiness of AI systems.
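
To make the min-max bilevel idea concrete, the following is a minimal PyTorch sketch under stated assumptions: per-sample adversarial queries (PGD-style) stand in for the defender's fingerprint set, and the attacker alternates an inner step that crafts such queries with an outer step that fine-tunes the stolen model to disagree with the victim on them while keeping clean-task accuracy. Function names, the loss weighting `lam`, and all hyperparameters are illustrative assumptions, not a specific published algorithm.

```python
# Hypothetical min-max bilevel fingerprint-removal loop (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


def craft_fingerprint_queries(model, x, y, eps=0.03, steps=5):
    """Inner maximization: PGD-style ascent to craft adversarial queries that
    act as a stand-in for the defender's fingerprint samples."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + eps / steps * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()


def removal_step(model, victim, clean_x, clean_y, fp_x, fp_y, opt, lam=1.0):
    """Outer minimization: preserve clean-task accuracy while pushing the
    stolen model's predictions on fingerprint-like queries away from the
    victim's, so fingerprint matching fails."""
    queries = craft_fingerprint_queries(model, fp_x, fp_y)
    task_loss = F.cross_entropy(model(clean_x), clean_y)
    with torch.no_grad():
        victim_pred = victim(queries).argmax(dim=1)
    # Negative agreement term: penalize matching the victim on the queries.
    mismatch_loss = -F.cross_entropy(model(queries), victim_pred)
    loss = task_loss + lam * mismatch_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Toy usage with placeholder shapes and random data.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
victim = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
print(removal_step(model, victim, x, y, x, y, opt))
```

In practice the mismatch term is usually bounded or scheduled (otherwise it can degrade the model), and the query-crafting step is replaced by whatever fingerprint construction the attacker assumes the verifier uses; the sketch only illustrates the alternating inner/outer structure.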

Papers