Artificial Intelligence Security

Artificial intelligence (AI) security focuses on protecting AI systems and their applications from various threats, aiming to ensure the reliability, trustworthiness, and safety of AI-driven technologies. Current research emphasizes vulnerabilities in specific AI architectures like graph neural networks and large language models, exploring adversarial attacks (e.g., data poisoning, prompt injection) and developing robust defenses such as signed prompts and improved model training techniques. This field is crucial for mitigating risks across diverse sectors, from healthcare and autonomous systems to cybersecurity itself, requiring interdisciplinary collaboration to address both technical and ethical challenges.

Papers