Adversarial Prediction

Adversarial prediction research studies how to craft inputs that mislead machine learning models and how to defend against them, with the twin goals of improving model robustness and understanding the underlying vulnerabilities. Current work spans diverse approaches, including generative models (such as GANs and architectures employing vector quantization), novel activation functions and network architectures, and methods that use causal inference to analyze adversarial examples. The field is crucial for the reliability and security of AI systems across applications ranging from medical image analysis and cybersecurity to face recognition and other domains where model trustworthiness is paramount.
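To make the idea of "inputs designed to mislead a model" concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one classic way such inputs are crafted. It is an illustrative example only, not a method from any specific paper listed here; the toy model, random data, and the `epsilon` budget are placeholder assumptions.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft adversarial examples by stepping x along the sign of the
    loss gradient (fast gradient sign method), then clamping to [0, 1]."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each pixel by at most epsilon in the loss-increasing direction.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Usage with a placeholder classifier on random "images".
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(8, 1, 28, 28)            # batch of inputs in [0, 1]
y = torch.randint(0, 10, (8,))          # placeholder labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())          # perturbation bounded by epsilon
```

Defenses in this literature often invert the same recipe, e.g. adversarial training feeds examples like `x_adv` back into the training loop so the model learns to classify them correctly.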

Papers