Scientific Inference
Scientific inference, the process of drawing conclusions from data, is a core challenge across many scientific fields, and current research focuses on making inference both faster and more accurate. This work develops new algorithms and architectures, including Bayesian networks, diffusion transformers, and autoregressive models, to optimize inference in settings such as large language models and image processing. These advances matter because they accelerate scientific discovery and support real-world applications in areas like personalized medicine, legal tech, and industrial automation, where efficient and reliable inference is essential. The emphasis is on removing computational bottlenecks and improving the reliability of inferences, particularly when data are limited or models are complex.
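To make the notion of inference concrete, the sketch below shows a minimal Bayesian parameter update (a conjugate Beta-Binomial model) for estimating a success rate from observed trials. It is a generic, illustrative example only and is not drawn from any of the papers listed below; the function name and the specific prior are assumptions made for the sketch.

```python
import numpy as np

# Minimal sketch: Bayesian inference for a Bernoulli success rate with a
# conjugate Beta prior. Illustrative only; not taken from the papers below.

def beta_binomial_posterior(successes: int, trials: int,
                            prior_alpha: float = 1.0,
                            prior_beta: float = 1.0) -> tuple[float, float]:
    """Return the Beta posterior parameters after observing the data."""
    return prior_alpha + successes, prior_beta + (trials - successes)

# Example: 7 successes in 10 trials, starting from a uniform Beta(1, 1) prior.
alpha, beta = beta_binomial_posterior(successes=7, trials=10)
posterior_mean = alpha / (alpha + beta)              # point estimate of the rate
samples = np.random.beta(alpha, beta, size=10_000)   # draw posterior samples
interval = np.percentile(samples, [2.5, 97.5])       # 95% credible interval

print(f"posterior mean = {posterior_mean:.3f}, 95% CI = {interval.round(3)}")
```

Conjugacy makes the posterior available in closed form here; the papers listed below study the harder cases where inference must be approximated or accelerated, for example in large neural models or on resource-constrained devices.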
Papers
NarrowBERT: Accelerating Masked Language Model Pretraining and Inference
Haoxin Li, Phillip Keung, Daniel Cheng, Jungo Kasai, Noah A. Smith
The Role of Interactive Visualization in Explaining (Large) NLP Models: from Data to Inference
Richard Brath, Daniel Keim, Johannes Knittel, Shimei Pan, Pia Sommerauer, Hendrik Strobelt
Vicious Classifiers: Assessing Inference-time Data Reconstruction Risk in Edge Computing
Mohammad Malekzadeh, Deniz Gunduz
Federated Learning for Inference at Anytime and Anywhere
Zicheng Liu, Da Li, Javier Fernandez-Marques, Stefanos Laskaridis, Yan Gao, Łukasz Dudziak, Stan Z. Li, Shell Xu Hu, Timothy Hospedales