Suspect Model

"Suspect model" research focuses on verifying the provenance of machine learning models, primarily addressing concerns about model theft and unauthorized data usage. Current research explores techniques such as watermarking, sample correlation analysis, and universal adversarial perturbations to determine whether a suspect model was derived from a specific source model; deep learning models serve here as both the target of extraction attacks and the subject of the defense. This field is crucial for protecting intellectual property in the rapidly evolving AI landscape and has implications for legal frameworks surrounding model ownership and data privacy.
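The sample-correlation idea mentioned above can be sketched in a few lines: if a suspect model was derived from a source model (e.g., by fine-tuning), its outputs on random probe inputs remain highly correlated with the source's, whereas an independently trained model's outputs are not. The toy linear "models" and the `fingerprint_similarity` helper below are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "models": y = W x. A stolen or fine-tuned copy shares most
# of its weights with the source; an independent model shares none.
W_source = rng.normal(size=(10, 32))
W_derived = W_source + 0.05 * rng.normal(size=(10, 32))  # lightly fine-tuned copy
W_independent = rng.normal(size=(10, 32))                # unrelated model

def fingerprint_similarity(W_a, W_b, n_probes=200):
    """Mean cosine similarity of two models' outputs on random probe inputs."""
    X = rng.normal(size=(n_probes, 32))
    A, B = X @ W_a.T, X @ W_b.T
    cos = np.sum(A * B, axis=1) / (
        np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    )
    return float(cos.mean())

print(fingerprint_similarity(W_source, W_derived))      # high: likely derived
print(fingerprint_similarity(W_source, W_independent))  # low: likely unrelated
```

In practice the probes are chosen more carefully (e.g., universal adversarial perturbations whose effect transfers only to derived models), and the similarity is compared against a threshold calibrated on known-independent models.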

Papers