Neural Architecture Search
Neural Architecture Search (NAS) automates the design of neural network architectures, aiming to replace the time-consuming and often suboptimal process of manual design. Current research focuses on improving search efficiency, exploring various search algorithms (including reinforcement learning, evolutionary algorithms, and gradient-based methods), and developing effective zero-cost proxies that estimate an architecture's quality without full training, reducing computational demands. The field is significant because it promises to accelerate the development of high-performing models across diverse applications, from image recognition and natural language processing to resource-constrained environments like microcontrollers and in-memory computing.
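To make the search-algorithm families above concrete, the following is a minimal, purely illustrative sketch of an evolutionary NAS loop. The search space (a list of layer widths), the mutation operator, and especially `proxy_score` (a stand-in for a zero-cost proxy or validation accuracy) are all hypothetical simplifications, not any specific published method.

```python
import random

# Hypothetical toy search space: an architecture is a list of layer widths.
SEARCH_SPACE = [16, 32, 64, 128]

def random_architecture(depth=3):
    """Sample a random architecture encoding."""
    return [random.choice(SEARCH_SPACE) for _ in range(depth)]

def mutate(arch):
    """Resample one randomly chosen layer's width."""
    child = list(arch)
    child[random.randrange(len(child))] = random.choice(SEARCH_SPACE)
    return child

def proxy_score(arch):
    """Stand-in for a zero-cost proxy or trained accuracy.
    Purely illustrative: prefer a total width near 160."""
    return -abs(sum(arch) - 160)

def evolutionary_search(generations=50, population_size=10, seed=0):
    """Tournament selection + mutation, with age-based removal
    (in the style of regularized evolution)."""
    random.seed(seed)
    population = [random_architecture() for _ in range(population_size)]
    for _ in range(generations):
        parent = max(random.sample(population, 3), key=proxy_score)
        population.append(mutate(parent))
        population.pop(0)  # discard the oldest member
    return max(population, key=proxy_score)

best = evolutionary_search()
print(best, proxy_score(best))
```

Reinforcement-learning and gradient-based NAS replace the mutation/selection loop with a learned controller or a differentiable relaxation of the search space, but the overall structure (propose, evaluate cheaply, update) is the same.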
Papers
NAS-Bench-x11 and the Power of Learning Curves
Shen Yan, Colin White, Yash Savani, Frank Hutter
AUTOKD: Automatic Knowledge Distillation Into A Student Architecture Family
Roy Henha Eyono, Fabio Maria Carlucci, Pedro M Esperança, Binxin Ru, Phillip Torr
A Data-driven Approach to Neural Architecture Search Initialization
Kalifou René Traoré, Andrés Camero, Xiao Xiang Zhu