Paper ID: 2203.08559

Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation

Dmitry Medvedev, Alexander D'yakonov

Using huge training datasets can be costly and inconvenient. This article explores various data distillation techniques that can reduce the amount of data required to successfully train deep networks. Inspired by recent ideas, we propose new data distillation techniques based on generative teaching networks, gradient matching, and the Implicit Function Theorem. Experiments on the MNIST image classification problem show that the new methods are computationally more efficient than previous ones and improve the performance of models trained on distilled data.

Submitted: Mar 16, 2022
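
The abstract names gradient matching as one of the distillation techniques. For readers unfamiliar with the idea, the following is a minimal illustrative sketch of gradient-matching data distillation in PyTorch: a small learnable synthetic set is optimized so that the gradient it induces in a network matches the gradient from a real batch. The network architecture, batch sizes, and learning rates below are placeholder assumptions, not the authors' exact setup.

```python
# Minimal sketch of gradient-matching data distillation (illustrative only;
# the model and hyperparameters are placeholders, not the paper's configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_match_loss(model, loss_fn, real_x, real_y, syn_x, syn_y):
    """Sum of cosine distances between gradients on real and synthetic batches."""
    real_grads = torch.autograd.grad(loss_fn(model(real_x), real_y),
                                     model.parameters())
    syn_grads = torch.autograd.grad(loss_fn(model(syn_x), syn_y),
                                    model.parameters(), create_graph=True)
    loss = 0.0
    for gr, gs in zip(real_grads, syn_grads):
        loss = loss + (1 - F.cosine_similarity(gr.flatten(), gs.flatten(), dim=0))
    return loss

# Learnable synthetic set: 10 images per MNIST class, optimized directly.
syn_x = torch.randn(100, 1, 28, 28, requires_grad=True)
syn_y = torch.arange(10).repeat_interleave(10)
opt_syn = torch.optim.Adam([syn_x], lr=0.1)

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128),
                      nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

# One distillation step given a real batch (here random stand-in data).
real_x, real_y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
opt_syn.zero_grad()
grad_match_loss(model, loss_fn, real_x, real_y, syn_x, syn_y).backward()
opt_syn.step()
```

In a full pipeline this step would be repeated over many real batches and over freshly initialized networks; the labels of the synthetic set can also be made learnable. This sketch only conveys the core gradient-matching objective, not the paper's proposed combination with generative teaching networks or implicit differentiation.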