Secure Deep Learning

Secure deep learning focuses on developing methods that protect the privacy and integrity of deep neural networks (DNNs) during training and inference, addressing concerns about data breaches and model manipulation. Current research emphasizes techniques such as secure multi-party computation (MPC), differential privacy (DP), and trusted execution environments (TEEs), often combined with lightweight cryptographic protocols and optimized model architectures (e.g., ResNets, spiking neural networks) to reduce the performance overhead these protections introduce. This work is crucial for the safe and responsible deployment of DNNs in sensitive applications such as healthcare and finance, where data privacy and model security are paramount.
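
As a concrete illustration of one of these techniques, the sketch below shows the core idea of differentially private training (DP-SGD style): each example's gradient is clipped to bound its influence, and calibrated Gaussian noise is added before the model update. It uses a toy NumPy logistic-regression model; the hyperparameter names and values (CLIP_NORM, NOISE_MULTIPLIER, LR) are illustrative assumptions, not taken from any specific paper listed here.

    import numpy as np

    # Minimal DP-SGD sketch: per-example gradient clipping plus Gaussian
    # noise, shown on a toy logistic-regression model. All constants below
    # are illustrative assumptions.
    rng = np.random.default_rng(0)
    CLIP_NORM = 1.0          # max L2 norm allowed for each example's gradient
    NOISE_MULTIPLIER = 1.1   # noise scale; larger => stronger privacy, noisier updates
    LR = 0.1                 # learning rate

    # Toy data: 256 examples, 10 features, binary labels.
    X = rng.normal(size=(256, 10))
    y = (X @ rng.normal(size=10) > 0).astype(float)
    w = np.zeros(10)

    def per_example_grads(w, X_batch, y_batch):
        """Logistic-loss gradient computed separately for each example."""
        preds = 1.0 / (1.0 + np.exp(-X_batch @ w))
        return (preds - y_batch)[:, None] * X_batch   # shape: (batch, dim)

    for step in range(100):
        idx = rng.choice(len(X), size=64, replace=False)
        grads = per_example_grads(w, X[idx], y[idx])

        # 1. Clip each per-example gradient to bound its sensitivity.
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads = grads / np.maximum(1.0, norms / CLIP_NORM)

        # 2. Sum, add Gaussian noise calibrated to the clip norm, then average.
        noise = rng.normal(scale=NOISE_MULTIPLIER * CLIP_NORM, size=w.shape)
        noisy_mean = (grads.sum(axis=0) + noise) / len(idx)

        # 3. Standard gradient step on the privatized gradient.
        w -= LR * noisy_mean

The privacy guarantee comes from the combination of the clipping bound (which fixes the sensitivity of the summed gradient) and the noise scale; in practice an accountant is used to track the cumulative privacy loss over training steps.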

Papers