Probabilistic Model
Probabilistic models are mathematical frameworks for representing and reasoning under uncertainty by quantifying the likelihood of different outcomes. Current research focuses on improving the efficiency and accuracy of these models across diverse applications, including generative AI (e.g., diffusion models, sum-product networks), uncertainty quantification in large language models, and robust inference in Bayesian networks. This work matters because it improves the reliability and interpretability of AI systems, supporting better decision-making in fields such as healthcare, finance, and scientific discovery.
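As a minimal illustration of how a probabilistic model quantifies the likelihood of outcomes and supports inference, the sketch below performs exact posterior computation in a two-variable Bayesian network (Disease → Test) via Bayes' rule. The variable names, function name, and all probability values are illustrative assumptions, not taken from any of the listed papers.

```python
# Minimal sketch: exact inference in a two-node Bayesian network
# (Disease -> Test). All probabilities here are illustrative assumptions.

def posterior_disease_given_positive(p_disease: float,
                                     p_pos_given_disease: float,
                                     p_pos_given_healthy: float) -> float:
    """Return P(disease | test positive) via Bayes' rule."""
    # Joint probabilities of (disease, positive) and (healthy, positive)
    joint_pos_disease = p_disease * p_pos_given_disease
    joint_pos_healthy = (1.0 - p_disease) * p_pos_given_healthy
    # Normalize by the marginal probability of observing a positive test
    evidence = joint_pos_disease + joint_pos_healthy
    return joint_pos_disease / evidence

if __name__ == "__main__":
    # Hypothetical numbers: 1% prevalence, 95% sensitivity, 5% false-positive rate
    print(posterior_disease_given_positive(0.01, 0.95, 0.05))  # ~0.161
```

Even in this tiny example, the model makes the uncertainty explicit: a positive result from a seemingly accurate test still leaves the posterior probability of disease around 16% because the prior prevalence is low, which is the kind of calibrated reasoning the research above aims to scale to much larger models.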
Papers
Semirings for Probabilistic and Neuro-Symbolic Logic Programming
Vincent Derkinderen, Robin Manhaeve, Pedro Zuidberg Dos Martires, Luc De Raedt
Measurement Uncertainty: Relating the uncertainties of physical and virtual measurements
Simon Cramer, Tobias Müller, Robert H. Schmitt
SimPro: A Simple Probabilistic Framework Towards Realistic Long-Tailed Semi-Supervised Learning
Chaoqun Du, Yizeng Han, Gao Huang
Bridging Associative Memory and Probabilistic Modeling
Rylan Schaeffer, Nika Zahedi, Mikail Khona, Dhruv Pai, Sang Truong, Yilun Du, Mitchell Ostrow, Sarthak Chandra, Andres Carranza, Ila Rani Fiete, Andrey Gromov, Sanmi Koyejo
Explaining Probabilistic Models with Distributional Values
Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger