Attack Vector

Attack vectors are the methods by which malicious actors compromise the security or functionality of machine learning systems and related technologies. Current research focuses on identifying and characterizing these vectors across diverse applications, including autonomous vehicles, smart grids, and large language models (LLMs), often using techniques such as fault injection, adversarial examples, and prompt injection to exploit vulnerabilities in model architectures and training processes. Understanding these attack vectors is crucial for developing robust defenses and for the safe, reliable deployment of AI and other advanced technologies across sectors.
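To make one of these vectors concrete, the sketch below shows a minimal adversarial-example attack in the style of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. The weights, input, and step size are illustrative assumptions, not drawn from any specific paper cited here:

```python
import numpy as np

# Minimal sketch of an adversarial-example attack (FGSM-style) on a
# toy logistic-regression classifier. All numbers are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Shift x by eps in the direction that increases the loss,
    following the sign of the gradient with respect to the input."""
    p = sigmoid(w @ x + b)       # predicted probability of class 1
    grad_x = (p - y_true) * w    # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy model and an input the model classifies correctly (class 1).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
assert sigmoid(w @ x + b) > 0.5  # originally classified as class 1

# A small, sign-based perturbation flips the model's prediction.
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.9)
print(sigmoid(w @ x_adv + b) > 0.5)  # → False: now misclassified
```

The same idea scales to deep networks, where the input gradient is obtained by backpropagation; defenses surveyed in this literature (e.g. adversarial training) aim to make predictions stable under such perturbations.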

Papers