Vertical Federated Learning
Vertical Federated Learning (VFL) is a privacy-preserving machine learning approach that enables multiple parties, each holding different features of the same data samples, to train a model collaboratively without directly sharing raw data. Current research emphasizes efficient algorithms and model architectures, such as split neural networks and gradient-boosted decision trees, to address challenges including communication overhead, security vulnerabilities (such as backdoor and data reconstruction attacks), and fairness in contribution evaluation. VFL's significance lies in its potential to unlock the value of siloed datasets across sectors such as finance, healthcare, and IoT while complying with data privacy regulations, fostering trust, and enabling collaborative data analysis.
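To make the split-neural-network setup described above concrete, below is a minimal sketch of two-party VFL in PyTorch: each party keeps a "bottom" model over its own private feature slice, and a label-holding party trains a "top" model over the exchanged embeddings, so only embeddings and their gradients cross party boundaries. All class names, dimensions, and the synthetic data here are illustrative assumptions, not drawn from any of the papers listed below.

```python
# Minimal two-party split-network VFL sketch (illustrative only).
import torch
import torch.nn as nn

torch.manual_seed(0)

class PartyBottom(nn.Module):
    """Each party maps its private feature slice to an embedding; raw features never leave the party."""
    def __init__(self, in_dim, emb_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(), nn.Linear(16, emb_dim))

    def forward(self, x):
        return self.net(x)

class ServerTop(nn.Module):
    """The label-holding (active) party combines the parties' embeddings and predicts."""
    def __init__(self, emb_dim=8, n_parties=2):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(emb_dim * n_parties, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, embeddings):
        return self.head(torch.cat(embeddings, dim=1))

# Synthetic, sample-aligned data: both parties see the same 256 samples but different feature columns.
n = 256
x_a, x_b = torch.randn(n, 5), torch.randn(n, 3)        # party A: 5 features, party B: 3 features
y = (x_a[:, 0] + x_b[:, 0] > 0).float().unsqueeze(1)   # labels held only by the active party

bottom_a, bottom_b, top = PartyBottom(5), PartyBottom(3), ServerTop()
params = list(bottom_a.parameters()) + list(bottom_b.parameters()) + list(top.parameters())
opt = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    opt.zero_grad()
    # Only low-dimensional embeddings (and, on the backward pass, their gradients) are exchanged.
    emb_a, emb_b = bottom_a(x_a), bottom_b(x_b)
    loss = loss_fn(top([emb_a, emb_b]), y)
    loss.backward()   # in a real deployment, each party receives only the gradient of its embedding
    opt.step()

with torch.no_grad():
    acc = ((top([bottom_a(x_a), bottom_b(x_b)]) > 0).float() == y).float().mean()
print(f"final loss {loss.item():.3f}, train accuracy {acc.item():.3f}")
```

In an actual deployment the embedding/gradient exchange would run over a secure channel between separate processes or organizations; collapsing it into one script here only illustrates the data and model partitioning.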
Papers
Robust and IP-Protecting Vertical Federated Learning against Unexpected Quitting of Parties
Jingwei Sun, Zhixu Du, Anna Dai, Saleh Baghersalimi, Alireza Amirshahi, David Atienza, Yiran Chen
Communication-Efficient Vertical Federated Learning with Limited Overlapping Samples
Jingwei Sun, Ziyue Xu, Dong Yang, Vishwesh Nath, Wenqi Li, Can Zhao, Daguang Xu, Yiran Chen, Holger R. Roth
Vertical Federated Learning: A Structured Literature Review
Afsana Khan, Marijn ten Thij, Anna Wilbik
HashVFL: Defending Against Data Reconstruction Attacks in Vertical Federated Learning
Pengyu Qiu, Xuhong Zhang, Shouling Ji, Chong Fu, Xing Yang, Ting Wang
Hijack Vertical Federated Learning Models As One Party
Pengyu Qiu, Xuhong Zhang, Shouling Ji, Changjiang Li, Yuwen Pu, Xing Yang, Ting Wang