Model Stealing Attack
Model stealing attacks aim to replicate the functionality of a machine learning model by querying its outputs, thereby compromising intellectual property and potentially revealing sensitive training data. Current research focuses on developing and evaluating these attacks against various model architectures, including convolutional neural networks (CNNs), graph neural networks (GNNs), and quantum neural networks (QNNs), employing techniques like knowledge distillation and active learning to reduce the number of queries an attack requires. This area is significant because of the increasing prevalence of machine learning as a service (MLaaS), which heightens the need for robust defenses to protect valuable models and prevent unauthorized replication.
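To make the core idea concrete, below is a minimal sketch of a query-based extraction attack in the knowledge-distillation style: the attacker sends inputs to a black-box victim, collects the returned probability vectors (soft labels), and trains a local surrogate to match them. Everything here is an illustrative assumption rather than a specific published attack: `query_victim` is a stand-in for the target API, the `Surrogate` architecture is arbitrary, and the hyperparameters are placeholders.

```python
# Sketch of a model stealing attack via knowledge distillation.
# Assumptions: query_victim, Surrogate, and all hyperparameters are
# hypothetical stand-ins, not a specific published attack.
import torch
import torch.nn as nn
import torch.nn.functional as F


def query_victim(x: torch.Tensor) -> torch.Tensor:
    """Stand-in for a black-box MLaaS endpoint that returns a
    probability vector (soft labels) for each input in the batch."""
    raise NotImplementedError("replace with calls to the target API")


class Surrogate(nn.Module):
    """Small CNN the attacker trains locally to mimic the victim."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def steal(surrogate: nn.Module, loader, epochs: int = 10, lr: float = 1e-3):
    """Train the surrogate to match the victim's soft labels by
    minimizing KL(victim || surrogate) over attacker-chosen queries."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        for x in loader:  # loader yields attacker-chosen query batches
            with torch.no_grad():
                teacher_probs = query_victim(x)  # victim soft labels
            student_log_probs = F.log_softmax(surrogate(x), dim=1)
            loss = F.kl_div(
                student_log_probs, teacher_probs, reduction="batchmean"
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate
```

An active-learning variant of this sketch would replace the fixed `loader` with a selection step that prioritizes inputs on which the surrogate is most uncertain, spending the query budget where the victim's labels are most informative.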