Modern Deep Learning
Modern deep learning focuses on developing and improving algorithms and architectures for training large-scale neural networks, aiming to enhance efficiency, accuracy, and generalizability. Current research emphasizes efficient training under time and compute constraints, scalable optimization for massive models (including transformers and convolutional networks), and improved uncertainty quantification through Bayesian approaches such as the Laplace approximation, together with established regularization techniques like weight decay. These advances are crucial for deploying deep learning in resource-limited settings (e.g., edge devices, federated learning) and for applications such as space weather prediction and personalized medicine.
Papers
Adapting the Linearised Laplace Model Evidence for Modern Deep Learning
Javier Antorán, David Janz, James Urquhart Allingham, Erik Daxberger, Riccardo Barbano, Eric Nalisnick, José Miguel Hernández-Lobato
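The Laplace method turns the curvature of a trained network's loss around its MAP weights into an estimate of the Bayesian model evidence (marginal likelihood), which can then drive hyperparameter selection; this paper adapts the linearised (GGN-based) variant to modern architectures. As a rough illustration only, and not the paper's method, here is a minimal sketch using a diagonal empirical-Fisher curvature approximation; the toy data, model, and all names are our own.

```python
# Hypothetical sketch: Laplace log model evidence with a diagonal
# empirical-Fisher Hessian approximation (NOT the paper's linearised
# GGN Laplace; toy data and all names are illustrative only).
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny synthetic classification problem and a small MLP.
X = torch.randn(64, 5)
y = (X.sum(dim=1) > 0).long()
model = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 2))
nll = nn.CrossEntropyLoss(reduction="sum")

# Crude MAP fit; the L2 penalty plays the role of a Gaussian prior
# with precision `prior_precision` (i.e., weight decay).
prior_precision = 1.0
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nll(model(X), y) + 0.5 * prior_precision * sum(
        (p ** 2).sum() for p in model.parameters())
    loss.backward()
    opt.step()

# Diagonal empirical Fisher: sum of squared per-example gradients.
params = list(model.parameters())
fisher = [torch.zeros_like(p) for p in params]
for i in range(X.shape[0]):
    model.zero_grad()
    nll(model(X[i:i + 1]), y[i:i + 1]).backward()
    for f, p in zip(fisher, params):
        f += p.grad ** 2

# Laplace evidence with prior N(0, prior_precision^-1 I):
#   log Z ~= -NLL(theta*) - (lam/2)||theta*||^2
#            + (D/2) log lam - (1/2) sum_i log(h_i + lam)
with torch.no_grad():
    map_nll = nll(model(X), y).item()
    sq_norm = sum((p ** 2).sum() for p in params).item()
    D = sum(p.numel() for p in params)
    log_det = sum(torch.log(f + prior_precision).sum() for f in fisher).item()
    log_evidence = (-map_nll - 0.5 * prior_precision * sq_norm
                    + 0.5 * D * math.log(prior_precision) - 0.5 * log_det)
print(f"approximate log evidence: {log_evidence:.2f}")
```

In practice the evidence estimate is recomputed for several prior precisions (or other hyperparameters) and the setting with the highest log evidence is kept, all without a validation set.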
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification
Aliaksandra Shysheya, John Bronskill, Massimiliano Patacchiola, Sebastian Nowozin, Richard E Turner
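FiT ("FiLM Transfer") pursues parameter efficiency by keeping a pretrained backbone frozen and learning only lightweight per-channel modulation (FiLM) parameters plus a classifier head, so only a tiny fraction of the weights must be stored or communicated per user or task. The sketch below is a hypothetical minimal rendering of that idea, not the paper's exact architecture; the stand-in backbone, layer placement, and names are our own.

```python
# Hypothetical sketch of FiLM-based parameter-efficient transfer in the
# spirit of FiT: freeze the backbone, train only per-channel scale/shift
# (FiLM) parameters and a linear head. Architecture and names are our own.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise linear modulation: learnable per-channel scale and shift."""
    def __init__(self, num_channels: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        return x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)

class FiLMAdaptedNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # Stand-in for a large pretrained backbone (frozen below).
        self.conv1 = nn.Conv2d(3, 32, 3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, 3, padding=1)
        self.film1 = FiLM(32)
        self.film2 = FiLM(64)
        self.head = nn.Linear(64, num_classes)
        for conv in (self.conv1, self.conv2):
            for p in conv.parameters():
                p.requires_grad = False  # only FiLM + head are task-specific

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = torch.relu(self.film1(self.conv1(x)))
        x = torch.relu(self.film2(self.conv2(x)))
        return self.head(x.mean(dim=(2, 3)))  # global average pool

net = FiLMAdaptedNet(num_classes=10)
trainable = sum(p.numel() for p in net.parameters() if p.requires_grad)
total = sum(p.numel() for p in net.parameters())
print(f"trainable: {trainable} / {total} parameters")
```

Here only a few hundred FiLM and head parameters are trainable against roughly twenty thousand frozen backbone weights, which is the property that makes this style of adapter attractive for the federated and personalized settings the paper targets.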