Neural Architecture Search
Neural Architecture Search (NAS) automates the design of neural network architectures, aiming to replace the time-consuming and often suboptimal process of manual design. Current research focuses on improving search efficiency, exploring different search algorithms (including reinforcement learning, evolutionary algorithms, and gradient-based methods), and developing zero-cost proxies that estimate architecture quality without training, sharply reducing computational demands; a sketch of the latter idea follows. This field is significant because it promises to accelerate the development of high-performing models across diverse applications, from image recognition and natural language processing to resource-constrained environments like microcontrollers and in-memory computing.
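To illustrate how a zero-cost proxy can rank candidate architectures without any training, here is a minimal Python/PyTorch sketch of a simplified SynFlow-style score. The toy search space, the `sample_mlp` helper, and the candidate count are hypothetical choices for this example, not drawn from any of the papers listed below.

```python
import random
import torch
import torch.nn as nn

def synflow_score(model: nn.Module, input_shape) -> float:
    """Simplified SynFlow-style zero-cost proxy: replace weights with their
    absolute values, back-propagate the summed output of an all-ones input,
    and return the sum of |param * grad| over all parameters."""
    signs = [p.data.sign() for p in model.parameters()]
    for p in model.parameters():
        p.data.abs_()
    model.zero_grad()
    out = model(torch.ones(1, *input_shape))
    out.sum().backward()
    score = sum((p.data * p.grad).abs().sum().item()
                for p in model.parameters() if p.grad is not None)
    for p, s in zip(model.parameters(), signs):  # restore original weights
        p.data.mul_(s)
    return score

def sample_mlp(in_dim=32, out_dim=10):
    """Draw a random MLP from a toy search space (depth and layer widths)."""
    depth = random.randint(1, 4)
    widths = [random.choice([16, 32, 64, 128]) for _ in range(depth)]
    layers, prev = [], in_dim
    for w in widths:
        layers += [nn.Linear(prev, w), nn.ReLU()]
        prev = w
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers), widths

# Rank a handful of random candidates by the proxy; no training is needed,
# which is what makes such proxies attractive for cutting NAS search cost.
candidates = [sample_mlp() for _ in range(8)]
ranked = sorted(candidates, key=lambda c: synflow_score(c[0], (32,)),
                reverse=True)
print("highest-scoring candidate widths:", ranked[0][1])
```

In a full NAS pipeline, a score like this would typically serve as a cheap filter or fitness signal inside a larger search loop (evolutionary, reinforcement-learning-based, or gradient-based), rather than as the sole selection criterion.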
Papers
Hard Work Does Not Always Pay Off: Poisoning Attacks on Neural Architecture Search
Zachary Coalson, Huazheng Wang, Qingyun Wu, Sanghyun Hong
Aux-NAS: Exploiting Auxiliary Labels with Negligibly Extra Inference Cost
Yuan Gao, Weizhong Zhang, Wenhan Luo, Lin Ma, Jin-Gang Yu, Gui-Song Xia, Jiayi Ma
Towards Accurate and Robust Architectures via Neural Architecture Search
Yuwei Ou, Yuqi Feng, Yanan Sun
A Lightweight Neural Architecture Search Model for Medical Image Classification
Lunchen Xie, Eugenio Lomurno, Matteo Gambella, Danilo Ardagna, Manuel Roveri, Matteo Matteucci, Qingjiang Shi
Implantable Adaptive Cells: differentiable architecture search to improve the performance of any trained U-shaped network
Emil Benedykciuk, Marcin Denkowski, Grzegorz Wójcik