Learning Dynamics
Learning dynamics research investigates how models evolve during training, aiming to understand and optimize how they acquire knowledge and skills. Current efforts focus on characterizing learning trajectories across model architectures, from shallow to deep neural networks, and training algorithms such as stochastic gradient descent, and on analyzing how hyperparameters and initialization strategies affect performance and stability. This research is crucial for improving model efficiency, generalization, and robustness in applications ranging from robotics and reinforcement learning to the study of fundamental neural network behavior and biological learning.
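As a minimal, self-contained sketch (not drawn from the papers listed below), the snippet records the loss trajectory of plain SGD on a tiny synthetic regression task and compares it across learning rates and initialization scales; all function names, hyperparameter values, and data here are illustrative assumptions.

```python
# Illustrative sketch: tracking learning trajectories under plain SGD and
# comparing how the learning rate and initialization scale shape the loss curve.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data: y = X @ w_true + noise
n, d = 200, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def sgd_trajectory(lr, init_scale, steps=500, batch_size=20):
    """Run plain SGD on mean-squared error and record the full-batch loss at each step."""
    w = init_scale * rng.normal(size=d)            # initialization strategy
    losses = []
    for _ in range(steps):
        idx = rng.choice(n, size=batch_size, replace=False)
        grad = (2.0 / batch_size) * X[idx].T @ (X[idx] @ w - y[idx])
        w -= lr * grad                              # SGD update
        losses.append(float(np.mean((X @ w - y) ** 2)))
    return losses

# Compare trajectories across hyperparameter settings.
for lr in (0.001, 0.01, 0.1):
    for init_scale in (0.1, 1.0):
        traj = sgd_trajectory(lr, init_scale)
        print(f"lr={lr:<5} init_scale={init_scale:<3} final loss={traj[-1]:.4f}")
```

The recorded loss curves are the kind of raw "trajectory" data that learning-dynamics analyses summarize; swapping the linear model for a deep network or SGD for another optimizer follows the same pattern.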
Papers
Scaling Opponent Shaping to High Dimensional Games
Akbir Khan, Timon Willi, Newton Kwan, Andrea Tacchetti, Chris Lu, Edward Grefenstette, Tim Rocktäschel, Jakob Foerster
Initializing Services in Interactive ML Systems for Diverse Users
Avinandan Bose, Mihaela Curmei, Daniel L. Jiang, Jamie Morgenstern, Sarah Dean, Lillian J. Ratliff, Maryam Fazel