Paper ID: 2112.08806
Correlation inference attacks against machine learning models
Ana-Maria Creţu, Florent Guépin, Yves-Alexandre de Montjoye
Machine learning models are often trained on sensitive and proprietary datasets. Yet what a model leaks about its dataset, and under which conditions, is not well understood. Most previous works study the leakage of information about an individual record. Yet in many situations, global information about the dataset, such as its underlying distribution, e.g., $k$-way marginals or correlations, is similarly sensitive or secret. We here explore for the first time whether a model leaks information about the correlations between the input variables of its training dataset, something we name a correlation inference attack. We first propose a model-less attack, showing how an attacker can exploit the spherical parametrization of correlation matrices to make an informed guess based on the correlations between the input variables and the target variable alone. Second, we propose a model-based attack, showing how an attacker can exploit black-box access to the model to infer the correlations using shadow models trained on synthetic datasets. Our synthetic data generation approach combines Gaussian copula-based generative modeling with a carefully adapted procedure for sampling correlation matrices under constraints. Third, we evaluate our model-based attack against Logistic Regression and Multilayer Perceptron models and show it to strongly outperform the model-less attack on three real-world tabular datasets, indicating that the models leak information about the correlations. We also propose a novel correlation inference-based attribute inference attack (CI-AIA) and show it to obtain state-of-the-art performance. Taken together, our results show how attackers can use the model to extract information about the dataset distribution and use it to improve their prior on sensitive attributes of individual records.
Submitted: Dec 16, 2021
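
To illustrate the model-less attack idea, the sketch below works through the three-variable case (two inputs $X_1, X_2$ and a target $Y$) using the spherical parametrization of correlation matrices mentioned in the abstract. The exact guessing rule is an assumption (here, averaging over a uniformly sampled free angle); the paper's procedure may differ.

```python
# Spherical parametrization of a 3x3 correlation matrix C = B B^T with
#   row(Y)  = (1, 0, 0)
#   row(X1) = (cos t1, sin t1, 0)
#   row(X2) = (cos t2, sin t2*cos t3, sin t2*sin t3)
# so that rho(X1, Y) = cos t1, rho(X2, Y) = cos t2 and
# rho(X1, X2) = cos t1*cos t2 + sin t1*sin t2*cos t3.
# Knowing only the input-target correlations therefore constrains rho(X1, X2)
# to an interval, over which an attacker can make an informed guess.
import numpy as np

def feasible_interval(rho_x1y: float, rho_x2y: float):
    """Range of rho(X1, X2) compatible with a valid 3x3 correlation matrix."""
    t1, t2 = np.arccos(rho_x1y), np.arccos(rho_x2y)
    lo = np.cos(t1) * np.cos(t2) - np.sin(t1) * np.sin(t2)  # free angle t3 = pi
    hi = np.cos(t1) * np.cos(t2) + np.sin(t1) * np.sin(t2)  # free angle t3 = 0
    return lo, hi

def model_less_guess(rho_x1y: float, rho_x2y: float, n_samples: int = 100_000):
    """Guess rho(X1, X2) by sampling the free angle uniformly (an assumption)."""
    t1, t2 = np.arccos(rho_x1y), np.arccos(rho_x2y)
    t3 = np.random.uniform(0.0, np.pi, size=n_samples)
    rho_x1x2 = np.cos(t1) * np.cos(t2) + np.sin(t1) * np.sin(t2) * np.cos(t3)
    return rho_x1x2.mean()

if __name__ == "__main__":
    print(feasible_interval(0.6, 0.7))  # approx. (-0.15, 0.99)
    print(model_less_guess(0.6, 0.7))   # close to cos(t1) * cos(t2) = 0.42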
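```

The model-based attack relies on shadow models trained on synthetic datasets generated with a Gaussian copula under a chosen correlation matrix. The following is a minimal sketch of standard Gaussian copula sampling; the correlation matrix and marginal distributions shown are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr: np.ndarray, marginals, n: int, seed: int = 0):
    """Sample n records whose dependence follows `corr` and whose
    one-dimensional marginals follow the given scipy distributions."""
    rng = np.random.default_rng(seed)
    d = corr.shape[0]
    # 1. Draw latent Gaussian vectors with the desired correlation matrix.
    z = rng.multivariate_normal(mean=np.zeros(d), cov=corr, size=n)
    # 2. Map each column to [0, 1] through the standard normal CDF.
    u = stats.norm.cdf(z)
    # 3. Apply the inverse CDF of each marginal to obtain the final columns.
    return np.column_stack([m.ppf(u[:, j]) for j, m in enumerate(marginals)])

if __name__ == "__main__":
    corr = np.array([[1.0, 0.3, 0.6],
                     [0.3, 1.0, 0.7],
                     [0.6, 0.7, 1.0]])  # (X1, X2, Y), assumed target values
    marginals = [stats.norm(), stats.expon(), stats.norm()]  # assumed marginals
    data = gaussian_copula_sample(corr, marginals, n=10_000)
    # Empirical correlations approximately match `corr` (exactly for the
    # copula in rank-correlation terms, approximately for Pearson).
    print(np.corrcoef(data, rowvar=False).round(2))
```

Shadow datasets drawn this way, for correlation matrices spanning different values of the secret input-input correlation, can then be used to train shadow models and a meta-classifier over their black-box outputs.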