Risk-Sensitive
Risk-sensitive research develops and analyzes methods for quantifying and managing the risks of AI systems, particularly in high-stakes applications such as healthcare, finance, and autonomous systems. Current work emphasizes robust model architectures and algorithms, including Bayesian methods, active learning, and risk-aware generative models, that improve predictive accuracy while accounting for uncertainty and potential harms. The field underpins the safe and responsible deployment of AI, supporting both the development of trustworthy systems and the mitigation of negative consequences in real-world settings. A minimal illustration of a risk-sensitive objective follows.
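A common way to make a method risk-sensitive is to replace the plain expected return with a tail-risk measure such as Conditional Value-at-Risk (CVaR). The sketch below is a generic illustration of that idea, not an implementation from any paper listed here; the function name `cvar` and the toy return distributions are assumptions for demonstration only.

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Lower-tail CVaR: mean of the worst alpha-fraction of sampled returns.

    Unlike the plain expectation, this objective penalizes rare bad outcomes,
    which is the core idea behind many risk-sensitive methods.
    """
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))  # size of the worst tail
    return returns[:k].mean()

# Toy comparison: two return distributions with similar means but different tails.
rng = np.random.default_rng(0)
safe = rng.normal(loc=1.0, scale=0.5, size=10_000)
risky = rng.normal(loc=1.0, scale=3.0, size=10_000)

print("mean  safe/risky:", safe.mean(), risky.mean())   # roughly equal
print("CVaR  safe/risky:", cvar(safe), cvar(risky))     # risky scores much worse
```

Under a risk-neutral (expected-value) criterion the two distributions look equivalent; the CVaR criterion prefers the low-variance one, which is the behavior risk-sensitive methods aim for in high-stakes settings.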
Papers
Is Risk-Sensitive Reinforcement Learning Properly Resolved?
Ruiwen Zhou, Minghuan Liu, Kan Ren, Xufang Luo, Weinan Zhang, Dongsheng Li
New intelligent defense systems to reduce the risks of Selfish Mining and Double-Spending attacks using Learning Automata
Seyed Ardalan Ghoreishi, Mohammad Reza Meybodi