Suspect Model
"Suspect model" research verifies the provenance of machine learning models, primarily to detect model theft and unauthorized use of training data. Current work explores techniques such as watermarking, sample-correlation analysis, and universal adversarial perturbations to determine whether a suspect model was derived from a specific source model; deep learning architectures appear on both the attack and defense sides. The field is crucial for protecting intellectual property in the rapidly evolving AI landscape and has implications for legal frameworks surrounding model ownership and data privacy.
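At their core, many of these verification techniques reduce to comparing a suspect model's behavior against the source model on a set of probe inputs: a derived (stolen or fine-tuned) model tends to agree with the source far more often than an independently trained one. The sketch below illustrates that intuition with toy linear classifiers; the function name `fingerprint_match_rate` and the threshold logic are hypothetical illustrations, not any specific published method.

```python
import numpy as np

def fingerprint_match_rate(source_model, suspect_model, probes):
    """Fraction of probe inputs on which the two models agree.

    A high agreement rate on carefully chosen probes is one
    (illustrative) signal that the suspect model was derived
    from the source model.
    """
    src = np.array([source_model(x) for x in probes])
    sus = np.array([suspect_model(x) for x in probes])
    return float(np.mean(src == sus))

# Toy linear classifiers standing in for real networks (illustrative only).
rng = np.random.default_rng(0)
w_source = rng.normal(size=8)
w_stolen = w_source + rng.normal(scale=0.01, size=8)  # near-copy of the source
w_independent = rng.normal(size=8)                    # unrelated model

source = lambda x: int(x @ w_source > 0)
stolen = lambda x: int(x @ w_stolen > 0)
independent = lambda x: int(x @ w_independent > 0)

probes = rng.normal(size=(200, 8))
rate_stolen = fingerprint_match_rate(source, stolen, probes)
rate_indep = fingerprint_match_rate(source, independent, probes)
# rate_stolen is near 1.0; rate_indep is near 0.5 (chance agreement).
```

Real methods differ mainly in how the probes are constructed: watermarking plants trigger inputs with known target outputs, while universal-adversarial-perturbation approaches craft probes whose responses are highly model-specific, so that agreement is unlikely unless the suspect inherited the source's decision boundary.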