Robust Agent

Robust agent research focuses on developing artificial intelligence agents that remain reliable under unexpected conditions, including adversarial attacks and environmental changes, so that performance holds up in real-world deployments. Current efforts concentrate on techniques such as input transformations (e.g., vector quantization), adversarial training, and verifiable robustness guarantees built on set-based reinforcement learning and smoothed deep reinforcement learning algorithms. This work is crucial for deploying AI agents in safety-critical applications, such as autonomous driving and robotics, where reliability and adaptability are paramount.
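To make the smoothing idea concrete, the sketch below shows one common form of it: averaging a policy's action scores over Gaussian-perturbed copies of the observation, so that small input perturbations are less likely to flip the chosen action. This is a minimal illustration, not any specific paper's method; the linear `policy_logits`, the noise scale `sigma`, and the sample count are all hypothetical choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def policy_logits(obs):
    # Hypothetical linear policy: maps a 4-dim observation to 2 action logits.
    W = np.array([[ 0.5, -0.2,  0.1,  0.3],
                  [-0.4,  0.6, -0.1,  0.2]])
    return W @ obs

def smoothed_action(obs, sigma=0.1, n_samples=100):
    # Perturb the observation with i.i.d. Gaussian noise, average the
    # resulting logits, then act greedily on the averaged logits.
    noise = rng.normal(0.0, sigma, size=(n_samples, obs.shape[0]))
    logits = np.array([policy_logits(obs + eps) for eps in noise])
    return int(np.argmax(logits.mean(axis=0)))

obs = np.array([1.0, 0.2, -0.5, 0.3])
action = smoothed_action(obs)  # greedy action under the smoothed policy
```

The averaging step is what certified-robustness analyses build on: because the smoothed logits change slowly as the observation shifts, one can bound how large an input perturbation must be before the greedy action can change.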

Papers