Neural Architecture Search
Neural Architecture Search (NAS) automates the design of neural network architectures, aiming to replace the time-consuming and often suboptimal process of manual design. Current research focuses on improving search efficiency, exploring different search algorithms (including reinforcement learning, evolutionary algorithms, and gradient-based methods such as DARTS), and developing zero-cost proxies that rank candidate architectures without training, sharply reducing computational demands. The field matters because it promises to accelerate the development of high-performing models across diverse applications, from image recognition and natural language processing to resource-constrained environments such as microcontrollers and in-memory computing.
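A zero-cost proxy scores each candidate architecture from a single forward/backward pass instead of a full training run. Below is a minimal PyTorch sketch of a SynFlow-style proxy; the function name, input shape, and candidate networks are illustrative and not taken from any listed paper.

```python
import torch
import torch.nn as nn

def synflow_score(model: nn.Module, input_shape=(1, 3, 32, 32)) -> float:
    """SynFlow-style zero-cost proxy: one forward/backward pass, no data.

    Weights are replaced by their absolute values so that signal cannot
    cancel, an all-ones input is pushed through, and the score is the
    sum of |weight * gradient| over all parameters. Higher scores are
    read as a cheap signal of trainability.
    """
    model = model.double()
    # Linearize: remember signs, make all parameters non-negative.
    signs = {}
    for name, p in model.state_dict().items():
        signs[name] = torch.sign(p)
        p.abs_()
    model.zero_grad()
    out = model(torch.ones(input_shape, dtype=torch.double))
    out.sum().backward()
    score = sum((p * p.grad).abs().sum().item()
                for p in model.parameters() if p.grad is not None)
    # Restore the original signs so the model is left unchanged.
    for name, p in model.state_dict().items():
        p.mul_(signs[name])
    return score

# Rank two candidate networks by proxy score instead of training both.
cand_a = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.Flatten(),
                       nn.Linear(16 * 32 * 32, 10))
cand_b = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Flatten(),
                       nn.Linear(8 * 32 * 32, 10))
print(synflow_score(cand_a) > synflow_score(cand_b))
```

In a search loop, such a score is typically used to filter or rank thousands of candidates cheaply before any expensive training is spent on the survivors.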
Papers
Differentiable Multi-Fidelity Fusion: Efficient Learning of Physics Simulations with Neural Architecture Search and Transfer Learning
Yuwen Deng, Wang Kang, Wei W. Xing
Small Temperature is All You Need for Differentiable Architecture Search
Jiuling Zhang, Zhiming Ding
Rethink DARTS Search Space and Renovate a New Benchmark
Jiuling Zhang, Zhiming Ding
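The two DARTS papers above build on the continuous relaxation at the core of differentiable NAS: each edge of the network computes a softmax-weighted mixture of candidate operations, and the softmax temperature controls how close that mixture is to a discrete choice. A minimal PyTorch sketch of such a mixed operation follows; the operation set, class name, and temperature value are illustrative, not taken from either paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style edge: a weighted sum of candidate operations.

    Architecture parameters `alpha` are relaxed with a temperature-scaled
    softmax; as `tau` shrinks, the mixture approaches a one-hot choice,
    narrowing the gap between the continuous supernet and the discrete
    architecture selected at the end of the search.
    """
    def __init__(self, channels: int, tau: float = 1.0):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 conv
            nn.MaxPool2d(3, stride=1, padding=1),         # 3x3 max pool
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # arch params
        self.tau = tau

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha / self.tau, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# A lower temperature sharpens the mixture toward the strongest operation.
edge = MixedOp(channels=16, tau=0.1)
y = edge(torch.randn(2, 16, 8, 8))
print(y.shape)  # torch.Size([2, 16, 8, 8])
```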