Scientific Inference
Scientific inference, the process of drawing conclusions from data, is a core challenge across many scientific fields, and current research focuses on making it more efficient and more accurate. This work develops novel algorithms and architectures, including Bayesian networks, diffusion transformers, and autoregressive models, to optimize inference in settings such as large language models and image processing. These advances are crucial for accelerating scientific discovery and for real-world applications in personalized medicine, legal tech, and industrial automation, where efficient and reliable inference is paramount. A particular emphasis is on removing computational bottlenecks and improving the reliability of inferences drawn from limited data or complex models.
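As a minimal illustration of the Bayesian view of inference mentioned above, the following Python sketch applies Bayes' rule to update a prior belief with observed data. The coin-flip setup, variable names, and numbers are illustrative assumptions, not taken from any of the papers listed below.

import numpy as np

# Hypothetical example: infer a coin's bias (probability of heads) from flips.
# Grid of candidate bias values with a uniform prior over them.
grid = np.linspace(0.0, 1.0, 101)
prior = np.ones_like(grid) / grid.size

# Observed data (assumed for illustration): 7 heads out of 10 flips.
heads, tails = 7, 3

# Likelihood of the data under each candidate bias, then Bayes' rule:
# posterior is proportional to likelihood * prior, normalised to sum to 1.
likelihood = grid**heads * (1.0 - grid)**tails
posterior = likelihood * prior
posterior /= posterior.sum()

print("Posterior mean bias:", (grid * posterior).sum())
print("Most probable bias:", grid[np.argmax(posterior)])

The same update rule, posterior proportional to likelihood times prior, underlies the more elaborate inference machinery (Bayesian networks, learned amortized inference) surveyed in the papers that follow.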
Papers
A Scalable Model Specialization Framework for Training and Inference using Submodels and its Application to Speech Model Personalization
Fadi Biadsy, Youzheng Chen, Xia Zhang, Oleg Rybakov, Andrew Rosenberg, Pedro J. Moreno
AbductionRules: Training Transformers to Explain Unexpected Inputs
Nathan Young, Qiming Bao, Joshua Bensemann, Michael Witbrock