Model Stealing Attack
Model stealing attacks aim to replicate the functionality of a machine learning model by querying it and observing its outputs, compromising intellectual property and potentially revealing sensitive training data. Current research develops and evaluates these attacks against a range of architectures, including convolutional neural networks (CNNs), graph neural networks (GNNs), and quantum neural networks (QNNs), using techniques such as knowledge distillation and active learning to improve query efficiency. The area is significant because of the growing prevalence of machine learning as a service (MLaaS), which heightens the need for robust defenses that protect valuable models against unauthorized replication.
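A minimal sketch of the core extraction loop, assuming a PyTorch victim model exposed only through a probability-returning query interface (the `query_victim` function below is a stand-in for a real MLaaS API; `steal_model` and the toy models in the usage example are hypothetical names for illustration). The surrogate is trained by knowledge distillation on the victim's soft labels; an active-learning variant would additionally select the most informative inputs to query rather than drawing them at random.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset


def query_victim(victim: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Black-box query: only softmax probabilities come back, no gradients."""
    with torch.no_grad():
        return F.softmax(victim(x), dim=1)


def steal_model(victim, surrogate, query_loader, epochs=10, lr=1e-3):
    """Fit a surrogate to the victim's soft labels (knowledge distillation)."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    surrogate.train()
    for _ in range(epochs):
        for (x,) in query_loader:                  # unlabeled query inputs
            soft_labels = query_victim(victim, x)  # victim's probability vectors
            # KL divergence between surrogate and victim output distributions
            loss = F.kl_div(
                F.log_softmax(surrogate(x), dim=1),
                soft_labels,
                reduction="batchmean",
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate


if __name__ == "__main__":
    # Stand-in victim and a smaller surrogate; the query set is unlabeled
    # random data, mimicking an attacker without access to the training set.
    victim = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
    surrogate = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 5))
    queries = DataLoader(TensorDataset(torch.randn(512, 20)), batch_size=64)
    steal_model(victim, surrogate, queries)
```

Note the asymmetry the attack exploits: the attacker never sees the victim's parameters or training data, only its output distribution per query, yet that signal is enough for the surrogate to approximate the decision boundary.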