Inductive Bias
Inductive bias refers to the assumptions built into machine learning models that guide their learning process, influencing which solutions they find and how well they generalize to unseen data. Current research focuses on understanding and controlling inductive biases in various architectures, particularly transformers and graph neural networks, and on how these biases affect performance, fairness, and robustness across tasks such as image classification, natural language processing, and reinforcement learning. This work is crucial for improving generalization, mitigating unwanted biases, and developing more efficient and reliable machine learning systems across numerous scientific and practical applications.
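A minimal sketch of the core idea, not drawn from any of the papers below: two models with different inductive biases are fit to the same noisy linear data, and the model whose bias matches the data-generating process (here, a linearity assumption) extrapolates far better, even though the more flexible model fits the training set at least as well. The data, degrees, and seed are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: a noisy linear relationship observed on [0, 1].
x_train = rng.uniform(0.0, 1.0, size=20)
y_train = 2.0 * x_train + 0.5 + rng.normal(scale=0.1, size=20)

# Held-out data from the same process, but outside the training range,
# so generalization requires extrapolation.
x_test = np.linspace(1.5, 2.5, 50)
y_test = 2.0 * x_test + 0.5

# Model A: degree-1 polynomial -- a strong "the world is linear" bias.
linear_coeffs = np.polyfit(x_train, y_train, deg=1)

# Model B: degree-9 polynomial -- a much weaker bias; it can represent
# many more functions, so the training noise largely dictates how it
# behaves outside the training range.
flexible_coeffs = np.polyfit(x_train, y_train, deg=9)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

print("train MSE  linear:", mse(linear_coeffs, x_train, y_train))
print("train MSE  deg-9 :", mse(flexible_coeffs, x_train, y_train))
print("test  MSE  linear:", mse(linear_coeffs, x_test, y_test))
print("test  MSE  deg-9 :", mse(flexible_coeffs, x_test, y_test))
```

Running this shows comparable (or lower) training error for the degree-9 model but a much larger test error, since nothing in its hypothesis space penalizes wild behavior beyond the observed inputs. The same trade-off motivates architectural biases such as convolutional weight sharing or graph structure in the work listed below.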
Papers
Multi-Excitation Projective Simulation with a Many-Body Physics Inspired Inductive Bias
Philip A. LeMaitre, Marius Krumm, Hans J. Briegel
Why are Sensitive Functions Hard for Transformers?
Michael Hahn, Mark Rofin
Diffusion Model with Cross Attention as an Inductive Bias for Disentanglement
Tao Yang, Cuiling Lan, Yan Lu, Nanning Zheng
On Time-Indexing as Inductive Bias in Deep RL for Sequential Manipulation Tasks
M. Nomaan Qureshi, Ben Eisner, David Held
Dataset Difficulty and the Role of Inductive Bias
Devin Kwok, Nikhil Anand, Jonathan Frankle, Gintare Karolina Dziugaite, David Rolnick
Evaluating Fairness in Self-supervised and Supervised Models for Sequential Data
Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar