Inductive Bias
Inductive bias refers to the assumptions built into machine learning models that guide learning, shaping which solutions a model finds and how well it generalizes to unseen data. Current research focuses on understanding and controlling inductive biases across model architectures, particularly transformers and graph neural networks, and on how these biases affect performance, fairness, and robustness in tasks such as image classification, natural language processing, and reinforcement learning. This work is crucial for improving generalization, mitigating unwanted biases, and building more efficient and reliable machine learning systems across scientific and practical applications.
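A classic example of an architectural inductive bias is the weight sharing in a convolutional layer, which makes its output translation-equivariant: shifting the input shifts the output by the same amount. The toy sketch below (illustrative code, not from any of the papers listed here) checks this property for a pure-Python 1-D convolution; a dense layer with arbitrary weights would not satisfy it.

```python
# Toy illustration of an architectural inductive bias:
# a 1-D convolution is translation-equivariant because the same
# kernel weights are applied at every position.

def conv1d(x, kernel):
    """Valid (no-padding) 1-D convolution of list x with a kernel."""
    k = len(kernel)
    return [sum(kernel[j] * x[i + j] for j in range(k))
            for i in range(len(x) - k + 1)]

def shift(x, n):
    """Shift a sequence left by n positions, zero-padding on the right."""
    return x[n:] + [0.0] * n

signal = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0, 0.0, 0.0]
kernel = [0.5, 1.0, 0.5]

out = conv1d(signal, kernel)
out_of_shifted = conv1d(shift(signal, 2), kernel)

# Equivariance: convolving the shifted input equals shifting the output.
print(out_of_shifted == shift(out, 2)[:len(out_of_shifted)])  # True
```

A fully connected layer lacks this constraint: each output unit has its own weights for every input position, so nothing in the architecture ties its response at one position to its response at another. That extra flexibility is exactly what the convolutional bias trades away in exchange for better generalization on translation-invariant data such as images.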
Papers
Character-level Tokenizations as Powerful Inductive Biases for RNA Foundational Models
Adrián Morales-Pastor, Raquel Vázquez-Reza, Miłosz Wieczór, Clàudia Valverde, Manel Gil-Sorribes, Bertran Miquel-Oliver, Álvaro Ciudad, Alexis Molina
Interpretable Predictive Models for Healthcare via Rational Logistic Regression
Thiti Suttaket, L Vivek Harsha Vardhan, Stanley Kok