Paper ID: 2401.05126
Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving Vision Transformer
Teru Nagamori, Sayaka Shiota, Hitoshi Kiya
We propose a novel method for privacy-preserving deep neural networks (DNNs) based on the Vision Transformer (ViT). The method allows us not only to train and test models with visually protected images but also to avoid the performance degradation caused by the use of encrypted images, whereas conventional methods cannot avoid the influence of image encryption. A domain adaptation method is used to efficiently fine-tune ViT with encrypted images. In experiments, the method is demonstrated to outperform conventional methods in terms of classification accuracy on an image classification task with the CIFAR-10 and ImageNet datasets.
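To make the setting concrete, the sketch below fine-tunes a ViT classifier on visually protected images. It is a minimal illustration under assumptions not stated in the abstract: block-wise pixel shuffling with a fixed secret key is used as the visual protection (a common choice in this line of work, not necessarily the paper's scheme), torchvision's ViT-B/16 serves as the backbone, CIFAR-10 is the task, and the paper's domain-adaptation step is omitted.

```python
# Minimal sketch (assumptions): block-wise pixel shuffling as the visual
# protection, torchvision ViT-B/16 as the backbone, CIFAR-10 as the task.
# The paper's exact encryption scheme and domain-adaptation step are NOT
# reproduced here; this only shows fine-tuning on protected images.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

BLOCK = 16  # assumed block size, chosen to match the ViT patch size


def make_key(block=BLOCK, seed=0):
    """Fixed secret key: one pixel permutation shared by every block."""
    g = torch.Generator().manual_seed(seed)
    return torch.randperm(block * block, generator=g)


def encrypt(img, key, block=BLOCK):
    """Shuffle pixels inside each non-overlapping block with the same key."""
    c, h, w = img.shape
    x = img.unfold(1, block, block).unfold(2, block, block)   # C, Hb, Wb, b, b
    x = x.contiguous().view(c, -1, block * block)[:, :, key]  # permute pixels
    x = x.view(c, h // block, w // block, block, block)
    x = x.permute(0, 1, 3, 2, 4).contiguous().view(c, h, w)   # reassemble
    return x


key = make_key()
tfm = transforms.Compose([
    transforms.Resize(224),
    transforms.ToTensor(),
    transforms.Lambda(lambda im: encrypt(im, key)),
])

train_set = datasets.CIFAR10("./data", train=True, download=True, transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

# ImageNet-pretrained ViT, with a new head for the 10 CIFAR-10 classes.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 10)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for imgs, labels in loader:  # one pass shown as an illustration
    opt.zero_grad()
    loss = loss_fn(model(imgs), labels)
    loss.backward()
    opt.step()
```

Because the same key is applied to every block, the protected images remain valid inputs for patch-based models such as ViT; testing then uses images encrypted with the same key.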
Submitted: Jan 10, 2024