Malicious Samples

Malicious samples, encompassing malware, adversarial examples, and backdoored models, pose significant threats to the security and reliability of machine learning systems. Current research focuses on detecting and mitigating these threats through techniques such as creating new datasets for improved model training, developing robust detection algorithms (e.g., those leveraging graph contrastive learning or parameter-oriented scaling consistency), and designing defenses against attacks on test-time adaptation and federated learning. Understanding and addressing these vulnerabilities is crucial for ensuring the trustworthiness and safety of AI systems across diverse applications, from cybersecurity to healthcare.
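As a concrete illustration of one class of malicious sample named above, the following minimal Python/PyTorch sketch crafts an adversarial example with the classic Fast Gradient Sign Method (FGSM). It is an illustrative assumption, not a method from any of the surveyed papers; the function name `fgsm_example` and the epsilon value are hypothetical choices.

```python
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Craft an adversarial example with the Fast Gradient Sign Method.

    Perturbs the input x by epsilon in the direction of the sign of the
    loss gradient, which often flips the model's prediction while keeping
    the perturbed input visually close to the original.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Single gradient-sign step, clamped to the valid [0, 1] input range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

Detection and defense methods like those surveyed here are typically evaluated against exactly this kind of perturbed input, alongside poisoned training data and backdoored model weights.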

Papers