Scientific Inference
Scientific inference, the process of drawing conclusions from data, is a core challenge across many scientific fields, and current research focuses on making it both more efficient and more accurate. This work spans new algorithms and architectures, including Bayesian networks, diffusion transformers, and autoregressive models, that streamline inference in settings such as large language models and image processing. These advances matter for accelerating scientific discovery and for real-world applications in personalized medicine, legal tech, and industrial automation, where reliable inference under tight compute budgets is essential. A recurring theme is removing computational bottlenecks and improving the reliability of conclusions drawn from limited data or complex models.
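As a concrete illustration of the Bayesian view of drawing conclusions from data, the sketch below infers a single unknown parameter (a coin's bias) from simulated observations using a conjugate Beta-Binomial update. It is a minimal, hypothetical example for orientation only; the data, prior, and variable names are assumptions and are not taken from any of the papers listed below.

```python
import numpy as np
from scipy.stats import beta as beta_dist

# Minimal sketch of Bayesian parameter inference (illustrative only):
# estimate a coin's bias from observed flips with a conjugate Beta prior.

rng = np.random.default_rng(0)

true_bias = 0.7                      # unknown quantity we want to infer
flips = rng.random(50) < true_bias   # simulated data: 50 Bernoulli trials
heads = int(flips.sum())
tails = len(flips) - heads

# Beta(alpha, beta) prior; (1, 1) is uniform over [0, 1].
alpha_prior, beta_prior = 1.0, 1.0

# Conjugate update: posterior is Beta(alpha_prior + heads, beta_prior + tails).
alpha_post = alpha_prior + heads
beta_post = beta_prior + tails

posterior_mean = alpha_post / (alpha_post + beta_post)
# 95% credible interval from the posterior quantiles.
lo, hi = beta_dist.ppf([0.025, 0.975], alpha_post, beta_post)

print(f"observed {heads} heads / {len(flips)} flips")
print(f"posterior mean bias: {posterior_mean:.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

The conjugate update keeps the example exact and dependency-light; in practice, the papers below tackle settings where no closed-form posterior exists and inference must rely on simulation, neural surrogates, or other approximations.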
Papers
Hybrid summary statistics: neural weak lensing inference beyond the power spectrum
T. Lucas Makinen, Alan Heavens, Natalia Porqueres, Tom Charnock, Axel Lapel, Benjamin D. Wandelt
Patched MOA: optimizing inference for diverse software development tasks
Asankhaya Sharma
A Model for Combinatorial Dictionary Learning and Inference
Avrim Blum, Kavya Ravichandran
Causal modelling without introducing counterfactuals or abstract distributions
Benedikt Höltgen, Robert C. Williamson
Gradient-based inference of abstract task representations for generalization in neural networks
Ali Hummos, Felipe del Río, Brabeeba Mien Wang, Julio Hurtado, Cristian B. Calderon, Guangyu Robert Yang