Protection Mechanism
Protection mechanisms in machine learning aim to mitigate vulnerabilities arising from adversarial attacks, privacy breaches, and hardware failures. Current research focuses on developing and optimizing such mechanisms, including data unlearning, parameter distortion, and trusted execution environments, often employing meta-learning and generative adversarial networks to improve their effectiveness. This work is crucial for the reliability and security of AI systems across diverse sectors, from industrial control and federated learning to autonomous vehicles and social networks, and ultimately for the trustworthiness and wider adoption of these technologies.
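As a concrete illustration of one mechanism named above, parameter distortion can be sketched as adding calibrated random noise to model weights so that their exact values (and anything memorized in them) are harder to recover, at some cost in accuracy. This is a minimal sketch under that assumption; the function name, noise scale, and Gaussian noise choice are illustrative, not taken from any specific paper surveyed here.

```python
import numpy as np

def distort_parameters(weights, sigma=0.01, seed=0):
    """Return a copy of model weights with Gaussian noise added.

    Illustrative sketch of parameter distortion: each weight array
    is perturbed by zero-mean Gaussian noise of scale `sigma`,
    trading a small amount of accuracy for protection against
    exact parameter recovery.
    """
    rng = np.random.default_rng(seed)
    return [w + rng.normal(0.0, sigma, size=w.shape) for w in weights]

# Toy usage: distort one weight matrix and inspect the perturbation.
w = [np.ones((2, 3))]
w_noisy = distort_parameters(w, sigma=0.01)
print(float(np.abs(w_noisy[0] - w[0]).max()))
```

In practice the noise scale would be tuned (or formally calibrated, as in differential privacy) to balance the protection level against the degradation of model utility.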
Papers
March 20, 2024
May 28, 2023
May 18, 2023
May 7, 2023
May 3, 2023
February 8, 2023
September 8, 2022
September 1, 2022
August 11, 2022
July 5, 2022
April 26, 2022