Structural Attack
Structural attacks perturb the underlying structure of data or models, for example by inserting or deleting edges in a graph, in order to degrade performance or compromise security. Current research focuses on building robust defenses against such attacks, particularly for graph neural networks (GNNs), using techniques such as contrastive learning, negative sampling, and low-rank approximations of the graph structure. These defenses matter for a wide range of applications, from network intrusion detection and data privacy to the safety and reliability of AI models, especially large language models (LLMs) and autonomous systems. The overarching goal is to make these systems robust and trustworthy in the face of malicious structural manipulation.
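To make the low-rank defense concrete, here is a minimal sketch of the idea behind it: structural attacks on graphs tend to add high-frequency perturbations to the adjacency matrix, so projecting the matrix onto its top singular components can filter them out before training a GNN. The function name `low_rank_denoise`, the rank choice, and the toy graph are all illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

def low_rank_denoise(adj, rank=2):
    """Project an adjacency matrix onto its top-`rank` singular
    components, discarding the high-frequency structure that
    adversarial edge perturbations tend to introduce.
    (Illustrative sketch; `rank` would be tuned in practice.)"""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    s[rank:] = 0.0           # zero out all but the top-`rank` singular values
    return (u * s) @ vt      # low-rank reconstruction of the graph

# Toy 4-node graph; suppose edge (0, 3) was adversarially inserted.
adj = np.array([[0, 1, 1, 1],
                [1, 0, 1, 0],
                [1, 1, 0, 0],
                [1, 0, 0, 0]], dtype=float)

cleaned = low_rank_denoise(adj, rank=2)
```

The reconstruction is dense, so a real pipeline would typically re-sparsify it (e.g., threshold small entries) before feeding it to a GNN.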