Inductive Bias
Inductive bias refers to the set of assumptions built into a machine learning model that guide its learning process, shaping which solutions it finds and how well it generalizes to unseen data. Current research focuses on understanding and controlling inductive biases across model architectures, including neural networks (particularly transformers and graph neural networks), and on how these biases affect performance, fairness, and robustness in tasks such as image classification, natural language processing, and reinforcement learning. This work is crucial for improving generalization, mitigating unwanted biases, and building more efficient and reliable machine learning systems across scientific and practical applications.
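As a minimal sketch of the core idea (not drawn from any of the papers below): the choice of model class is itself an inductive bias. Two models can fit the same training points equally well yet disagree sharply outside the training range, because their built-in assumptions differ. The data values and degrees here are illustrative.

```python
import numpy as np

# Five roughly linear training points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 2.1, 3.9, 6.0, 8.0])

# Two different inductive biases for the same data:
linear_fit = np.polyfit(x, y, deg=1)  # assume the world is linear
quartic_fit = np.polyfit(x, y, deg=4)  # allow a degree-4 polynomial

# Both fit the training data well, but extrapolation at x = 6
# (outside the training range) exposes the difference in bias.
x_new = 6.0
pred_linear = np.polyval(linear_fit, x_new)   # stays near the linear trend (~12)
pred_quartic = np.polyval(quartic_fit, x_new)  # drifts away from it
print(pred_linear, pred_quartic)
```

The linear model generalizes well here precisely because its assumption happens to match how the data were generated; the quartic interpolates the training points exactly but extrapolates poorly. Which bias is "right" depends on the task, which is why controlling inductive bias is a research question rather than a solved problem.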
Papers
Inductive Bias for Emergent Communication in a Continuous Setting
John Isak Fjellvang Villanger, Troels Arnfred Bojesen
Mega-TTS: Zero-Shot Text-to-Speech at Scale with Intrinsic Inductive Bias
Ziyue Jiang, Yi Ren, Zhenhui Ye, Jinglin Liu, Chen Zhang, Qian Yang, Shengpeng Ji, Rongjie Huang, Chunfeng Wang, Xiang Yin, Zejun Ma, Zhou Zhao
Spectral Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning
Marina Munkhoeva, Ivan Oseledets
VIPriors 3: Visual Inductive Priors for Data-Efficient Deep Learning Challenges
Robert-Jan Bruintjes, Attila Lengyel, Marcos Baptista Rios, Osman Semih Kayhan, Davide Zambrano, Nergis Tomen, Jan van Gemert
Exploring Automatically Perturbed Natural Language Explanations in Relation Extraction
Wanyun Cui, Xingran Chen
Learning high-level visual representations from a child's perspective without strong inductive biases
A. Emin Orhan, Brenden M. Lake
Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
R. Thomas McCoy, Thomas L. Griffiths