Novel Regression
Novel regression research focuses on improving the accuracy, efficiency, and interpretability of regression models, particularly in challenging settings with limited data, high dimensionality, or complex input-output relationships. Current work explores diverse model architectures, including deep neural networks (DNNs), recurrent neural networks (RNNs), and large language models (LLMs), together with techniques such as active learning, Bayesian methods, and post-processing for fairness and uncertainty quantification. These advances enable more robust predictions in applications ranging from materials science and medical imaging to robotics and environmental modeling, making efficient and reliable regression methods a continuing focus of research with broad practical implications.
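To make the Bayesian and uncertainty-quantification themes above concrete, the following is a minimal sketch (not drawn from any of the listed papers) using scikit-learn's BayesianRidge on an illustrative toy dataset; the sine-shaped data, polynomial feature expansion, and degree are assumptions chosen only for demonstration.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.preprocessing import PolynomialFeatures

# Toy 1-D regression task: a noisy sine curve standing in for a small scientific dataset.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=200)

# Expand to polynomial features so the linear Bayesian model can capture the nonlinearity.
poly = PolynomialFeatures(degree=5, include_bias=False)
X_poly = poly.fit_transform(X)

# BayesianRidge infers weight and noise precisions from the data and exposes
# a predictive standard deviation alongside the point prediction.
model = BayesianRidge()
model.fit(X_poly, y)

X_test = np.linspace(-3.0, 3.0, 50).reshape(-1, 1)
y_mean, y_std = model.predict(poly.transform(X_test), return_std=True)

# y_std is a per-prediction uncertainty estimate: exactly the kind of signal
# an active-learning loop could use to decide which inputs to label next.
print(y_mean[:3], y_std[:3])
```

The same pattern (fit a probabilistic model, read off a predictive standard deviation, act on the most uncertain inputs) generalizes to the deep-learning settings surveyed here, for example via ensembles or Monte Carlo dropout instead of a Bayesian linear model.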
Papers
PyDTS: A Python Package for Discrete-Time Survival (Regularized) Regression with Competing Risks
Tomer Meir, Rom Gutman, Malka Gorfine
Regression or Classification? Reflection on BP prediction from PPG data using Deep Neural Networks in the scope of practical applications
Fabian Schrumpf, Paul Rudi Serdack, Mirco Fuchs
Remember to correct the bias when using deep learning for regression!
Christian Igel, Stefan Oehmcke
Region of Interest focused MRI to Synthetic CT Translation using Regression and Classification Multi-task Network
Sandeep Kaushik, Mikael Bylund, Cristina Cozzini, Dattesh Shanbhag, Steven F Petit, Jonathan J Wyatt, Marion I Menzel, Carolin Pirkl, Bhairav Mehta, Vikas Chauhan, Kesavadas Chandrasekharan, Joakim Jonsson, Tufve Nyholm, Florian Wiesinger, Bjoern Menze