Uncertainty Representation
Uncertainty representation in machine learning aims to quantify and model the inherent uncertainty in predictions, improving model reliability and robustness. Current research focuses on methods for representing uncertainty within various model architectures, including Bayesian neural networks, variational autoencoders, and specialized layers such as Kalman filter layers for sequential data, often combined with techniques such as heteroscedastic regression and stochastic weight averaging. This work is crucial for building trustworthy AI systems across diverse applications, from robotics and autonomous navigation to medical diagnosis and financial modeling, where understanding and managing uncertainty is essential for safe and effective deployment.
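As a concrete illustration of one technique mentioned above, the sketch below shows heteroscedastic regression: a small network predicts both a mean and an input-dependent variance, and is trained with the Gaussian negative log-likelihood so that the predicted variance captures per-input (aleatoric) uncertainty. This is a minimal, generic example, not drawn from any of the papers listed below; the network size, learning rate, and toy data are arbitrary assumptions for illustration.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Small MLP that predicts both a mean and an input-dependent variance."""
    def __init__(self, in_dim: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mean_head = nn.Linear(hidden, 1)
        # Predict log-variance for numerical stability (keeps variance positive).
        self.log_var_head = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.log_var_head(h)

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of a Gaussian with predicted mean and variance
    # (constant terms dropped).
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()

# Toy usage: 1-D data whose noise level grows with |x|.
x = torch.linspace(-2, 2, 256).unsqueeze(1)
y = x.pow(3) + 0.3 * x.abs() * torch.randn_like(x)

model = HeteroscedasticRegressor(in_dim=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(500):
    mean, log_var = model(x)
    loss = gaussian_nll(mean, log_var, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, `log_var.exp()` gives a per-input estimate of the observation noise, which is the basic building block that Bayesian or ensemble-based approaches then extend with model (epistemic) uncertainty.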
Papers
Uncertainty in latent representations of variational autoencoders optimized for visual tasks
Josefina Catoni, Enzo Ferrante, Diego H. Milone, Rodrigo Echeveste
Uncertainty Quantification on Graph Learning: A Survey
Chao Chen, Chenghua Guo, Rui Xu, Xiangwen Liao, Xi Zhang, Sihong Xie, Hui Xiong, Philip Yu