Novel Attack
Novel attacks on machine learning systems and related technologies are a growing area of research focused on identifying vulnerabilities and developing robust defenses. Current work concentrates on sophisticated attack methodologies, including adversarial examples, model poisoning, and jailbreaking, and often employs reinforcement learning, generative adversarial networks, and digital twin technologies for both attack generation and security assessment. This research is crucial for ensuring the safety and reliability of increasingly prevalent AI systems across sectors ranging from autonomous vehicles to large language models and federated learning applications.
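As one concrete illustration of the adversarial-example attacks mentioned above, the following is a minimal sketch of the fast gradient sign method (FGSM) applied to a toy logistic-regression classifier. The weights, input, and epsilon here are hypothetical values chosen purely for illustration, not drawn from any of the papers listed below.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear classifier: weights and bias chosen for illustration.
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    """Return the predicted class (0 or 1) for input x."""
    return int(sigmoid(w @ x + b) >= 0.5)

def fgsm(x, y, eps):
    """One-step FGSM: move x along the sign of the loss gradient.

    For logistic regression with cross-entropy loss, the gradient of the
    loss with respect to the input is (p - y) * w, where p = sigmoid(w.x + b).
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 1.0])        # clean input, correctly classified as 1
x_adv = fgsm(x, y=1, eps=0.5)   # small perturbation crafted from the gradient

print(predict(x))      # 1
print(predict(x_adv))  # 0 -- the perturbation flips the prediction
```

The attack needs only gradient access to the model; the same one-line update generalizes to deep networks, where the gradient is obtained by backpropagation to the input.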
Papers
Discovering Command and Control (C2) Channels on Tor and Public Networks Using Reinforcement Learning
Cheng Wang, Christopher Redino, Abdul Rahman, Ryan Clark, Daniel Radke, Tyler Cody, Dhruv Nandakumar, Edward Bowen
Review-Incorporated Model-Agnostic Profile Injection Attacks on Recommender Systems
Shiyi Yang, Lina Yao, Chen Wang, Xiwei Xu, Liming Zhu