Risk-Sensitive
Risk-sensitive research develops and analyzes methods to quantify and manage the risks of AI systems, particularly in high-stakes domains such as healthcare, finance, and autonomous systems. Current work emphasizes robust model architectures and algorithms, including Bayesian methods, active learning, and risk-aware generative models, that improve prediction accuracy while accounting for uncertainty and potential harms. The field is central to the safe and responsible deployment of AI, supporting both the development of trustworthy systems and the mitigation of negative consequences in real-world settings.
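To make the idea of quantifying risk concrete, a common building block in risk-sensitive methods is Conditional Value-at-Risk (CVaR), the expected loss in the worst tail of the loss distribution. The sketch below is a minimal illustration of an empirical CVaR estimate; it is not taken from any of the listed papers, and the `cvar` helper is a hypothetical name chosen for this example.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Empirical Conditional Value-at-Risk: the mean loss over the
    worst (1 - alpha) fraction of the sample."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)   # Value-at-Risk threshold
    tail = losses[losses >= var]       # worst-case tail of the sample
    return tail.mean()

# Illustrative use on simulated losses: CVaR always sits at or above
# VaR at the same level, since it averages over the tail beyond it.
rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=10_000)
print("VaR_0.95 :", np.quantile(sample, 0.95))
print("CVaR_0.95:", cvar(sample, 0.95))
```

Optimizing CVaR instead of the mean loss is one standard way such methods trade a little average-case performance for protection against rare, severe outcomes.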
Papers
Risk Sources and Risk Management Measures in Support of Standards for General-Purpose AI Systems
Rokas Gipiškis, Ayrton San Joaquin, Ze Shen Chin, Adrian Regenfuß, Ariel Gil, Koen Holtman
Planning and Learning in Risk-Aware Restless Multi-Arm Bandit Problem
Nima Akbarzadeh, Erick Delage, Yossiri Adulyasak