Structural Attack

Structural attacks manipulate the underlying structure of data or models, such as the edges of a graph, in order to degrade performance or compromise security. Current research focuses on robust defenses against these attacks, particularly for graph neural networks (GNNs), using techniques such as contrastive learning, negative sampling, and low-rank approximation to improve model resilience. These efforts are crucial for securing applications ranging from network intrusion detection and data privacy to the safety and reliability of AI models, especially large language models (LLMs) and autonomous systems. The overarching goal is to make these systems robust and trustworthy in the face of malicious manipulation.
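As a toy illustration of the low-rank idea (a sketch, not any particular paper's method), the sketch below uses a truncated SVD to filter adversarial edge flips out of a perturbed adjacency matrix. The intuition is that a graph's community structure is approximately low-rank, while scattered adversarial edges add high-rank noise. The graph (two cliques with self-loops) and the helper `low_rank_clean` are illustrative assumptions, not from the source.

```python
import numpy as np

def low_rank_clean(adj: np.ndarray, rank: int) -> np.ndarray:
    """Project an adjacency matrix onto its top-`rank` singular components."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Toy graph: two 5-node cliques with self-loops (an exactly rank-2 structure).
block = np.ones((5, 5))
adj = np.block([[block, np.zeros((5, 5))],
                [np.zeros((5, 5)), block]])

# Structural attack: flip a few cross-community edges on.
perturbed = adj.copy()
for i, j in [(0, 7), (2, 9), (4, 6)]:
    perturbed[i, j] = perturbed[j, i] = 1.0

cleaned = low_rank_clean(perturbed, rank=2)

# The rank-2 projection suppresses most of the adversarial edges:
err_perturbed = np.linalg.norm(perturbed - adj)  # distance before cleaning
err_cleaned = np.linalg.norm(cleaned - adj)      # distance after cleaning
print(err_cleaned < err_perturbed)  # True: cleaning moves us back toward the clean graph
```

In practice, defenses of this family typically apply such a projection (or a learned variant) to the adjacency matrix before message passing, trading a small loss of fine structure for robustness to edge perturbations.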

Papers