Security Threat

The security of increasingly autonomous systems, particularly those built on artificial intelligence such as large language models (LLMs) and machine learning (ML), is a critical research area. Current efforts focus on identifying and mitigating vulnerabilities that stem from unpredictable user inputs, complex internal operations, and interactions with untrusted entities. Techniques employed include reinforcement learning from human feedback (RLHF), dialectical alignment, and classical ML algorithms for threat detection (e.g., gradient boosting, random forests); a minimal sketch of the last of these appears below. Understanding and addressing these threats is essential for the safe and reliable deployment of AI across diverse applications, from robotics and power systems to vehicular networks and federated learning environments.
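
As a concrete illustration of the ML-based threat detection mentioned above, the sketch below trains a random forest classifier to separate benign from malicious network connections. It is a minimal sketch under stated assumptions: the feature set (packet rate, payload size, failed-auth count, destination-port entropy) and the synthetic data are illustrative choices made for this example, not drawn from any specific paper in this list.

```python
# Illustrative sketch only: a random-forest threat detector on synthetic
# traffic features. Feature names and distributions are assumptions made
# for this example, not taken from any cited work.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000  # samples per class

# Hypothetical per-connection features:
# packets/sec, mean payload bytes, failed auth attempts, dest-port entropy.
benign = np.column_stack([
    rng.normal(50, 10, n),    # moderate packet rates
    rng.normal(500, 100, n),  # normal payload sizes
    rng.poisson(0.2, n),      # rare auth failures
    rng.normal(2.0, 0.5, n),  # few distinct destination ports
])
malicious = np.column_stack([
    rng.normal(400, 80, n),   # flood-like packet rates
    rng.normal(120, 60, n),   # small probe payloads
    rng.poisson(5.0, n),      # repeated auth failures
    rng.normal(4.5, 0.7, n),  # scanning raises port entropy
])

X = np.vstack([benign, malicious])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["benign", "malicious"]))
```

The same pipeline works with `GradientBoostingClassifier` swapped in for the random forest; in practice the hard part is feature engineering and obtaining labeled attack traffic, which this toy setup sidesteps.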

Papers