Risk Sensitive
Risk-sensitive research develops and analyzes methods to quantify and manage the risks of AI systems, particularly in high-stakes applications like healthcare, finance, and autonomous systems. Current work emphasizes robust model architectures and algorithms, including Bayesian methods, active learning, and risk-aware generative models, that improve prediction accuracy while quantifying uncertainty and limiting potential harms. This field is crucial for the safe and responsible deployment of AI, shaping both the development of trustworthy systems and the mitigation of negative consequences in real-world scenarios.
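As an illustration of the kind of objective this field studies (not drawn from any particular paper below), here is a minimal sketch of Conditional Value-at-Risk (CVaR), a standard tail-risk measure: instead of minimizing expected loss, it penalizes the average loss over the worst outcomes. The function name and example data are hypothetical.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value-at-Risk: mean loss in the worst (1 - alpha) tail.

    A common risk-sensitive objective: rather than the expected loss,
    average only the losses at or beyond the alpha-quantile (VaR).
    """
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)      # Value-at-Risk threshold
    tail = losses[losses >= var]          # worst (1 - alpha) fraction
    return tail.mean()

# Illustrative data: two predictors with the same mean loss but
# different tails -- a risk-sensitive criterion separates them.
rng = np.random.default_rng(0)
safe = rng.normal(1.0, 0.1, size=10_000)   # low-variance losses
risky = rng.normal(1.0, 1.0, size=10_000)  # high-variance losses
print(cvar(safe), cvar(risky))             # risky has a far higher CVaR
```

A risk-neutral comparison (mean loss) would rate the two predictors as equivalent; CVaR exposes the heavier tail of the second.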
Papers
Identifying the Risks of LM Agents with an LM-Emulated Sandbox
Yangjun Ruan, Honghua Dong, Andrew Wang, Silviu Pitis, Yongchao Zhou, Jimmy Ba, Yann Dubois, Chris J. Maddison, Tatsunori Hashimoto
Autonomous Vehicles: an overview on system, cyber security, risks, issues, and a way forward
Md Aminul Islam, Sarah Alqahtani