Novel Regression
Novel regression research focuses on improving the accuracy, efficiency, and interpretability of regression models, particularly in challenging scenarios with limited data, high dimensionality, or complex variable relationships. Current efforts explore diverse model architectures, including deep neural networks (DNNs), recurrent neural networks (RNNs), and large language models (LLMs), alongside techniques such as active learning, Bayesian methods, and post-processing for fairness and uncertainty quantification. These advancements enable more robust predictions in applications ranging from materials science and medical imaging to robotics and environmental modeling. The development of efficient and reliable regression methods continues to be a crucial area of research with broad practical implications.
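As a concrete illustration of one technique mentioned above, Bayesian regression with uncertainty quantification, the sketch below fits a Gaussian process regressor with scikit-learn on a synthetic one-dimensional dataset. The data, kernel choice, and hyperparameters are illustrative assumptions only and are not taken from any of the papers listed below.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy 1-D dataset: noisy observations of a smooth function (illustrative only).
rng = np.random.default_rng(0)
X_train = rng.uniform(-3, 3, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)

# Gaussian process regression: the posterior predictive supplies both a mean
# prediction and a per-point standard deviation, i.e. an uncertainty estimate.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.01)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True, random_state=0)
gpr.fit(X_train, y_train)

# Predict on a dense grid; return_std=True yields the predictive std. dev.
X_test = np.linspace(-4, 4, 200).reshape(-1, 1)
y_mean, y_std = gpr.predict(X_test, return_std=True)

# Approximate 95% predictive interval for each test point.
lower, upper = y_mean - 1.96 * y_std, y_mean + 1.96 * y_std

The predictive interval (lower, upper) is what distinguishes this kind of Bayesian approach from a plain point-estimate regressor: downstream applications can act on the uncertainty, for example by querying new labels where y_std is largest in an active-learning loop.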
Papers
Diffusion-based Semi-supervised Spectral Algorithm for Regression on Manifolds
Weichun Xia, Jiaxin Jiang, Lei Shi
Interpreting Microbiome Relative Abundance Data Using Symbolic Regression
Swagatam Haldar, Christoph Stein-Thoeringer, Vadim Borisov
Solving the 2D Advection-Diffusion Equation using Fixed-Depth Symbolic Regression and Symbolic Differentiation without Expression Trees
Edward Finkelstein
Ranking over Regression for Bayesian Optimization and Molecule Selection
Gary Tom, Stanley Lo, Samantha Corapi, Alan Aspuru-Guzik, Benjamin Sanchez-Lengeling
Scalable Signature-Based Distribution Regression via Reference Sets
Andrew Alden, Carmine Ventre, Blanka Horvath
Lifted Coefficient of Determination: Fast model-free prediction intervals and likelihood-free model comparison
Daniel Salnikov, Kevin Michalewicz, Dan Leonte