Secure Deep Learning
Secure deep learning focuses on developing methods to protect the privacy and integrity of deep neural networks (DNNs) during training and inference, addressing concerns about data breaches and model manipulation. Current research emphasizes techniques like multi-party computation (MPC), differential privacy (DP), and the use of trusted execution environments (TEEs) to secure DNNs, often employing lightweight cryptographic protocols and optimized model architectures (e.g., ResNets, Spiking Neural Networks) to mitigate performance overhead. This field is crucial for enabling the safe and responsible deployment of DNNs in sensitive applications like healthcare and finance, where data privacy and model security are paramount.
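To make the differential-privacy technique mentioned above concrete, the following is a minimal sketch of DP-SGD-style gradient processing: clip each per-example gradient to a fixed norm, then add calibrated Gaussian noise. The function name and parameters (`dp_gradient`, `clip_norm`, `noise_multiplier`) are illustrative, not drawn from any specific paper or library.

```python
import numpy as np

def dp_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Illustrative DP-SGD step: clip a per-example gradient to
    clip_norm, then add Gaussian noise scaled by noise_multiplier."""
    rng = np.random.default_rng(0) if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale the gradient down so its L2 norm is at most clip_norm.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise

# Example: a large gradient is clipped to unit norm before noising.
g = np.ones(4) * 10.0
noisy = dp_gradient(g)
```

Clipping bounds each example's influence on the update (its sensitivity), which is what lets the added noise translate into a formal privacy guarantee; real deployments track the cumulative privacy budget across training steps, which this sketch omits.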