Neural Architecture Search
Neural Architecture Search (NAS) automates the design of neural network architectures, replacing a manual process that is time-consuming and often yields suboptimal results. Current research focuses on improving search efficiency, exploring different search algorithms (including reinforcement learning, evolutionary algorithms, and gradient-based methods), and developing effective zero-cost proxies that score candidate architectures without training them. The field is significant because it promises to accelerate the development of high-performing models across diverse applications, from image recognition and natural language processing to resource-constrained environments such as microcontrollers and in-memory computing.
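To make the ideas in the paragraph above concrete, here is a minimal sketch combining two of the ingredients mentioned: an evolutionary search algorithm and a zero-cost proxy (here, the gradient norm of an untrained network on one random batch). It assumes PyTorch; the toy MLP search space, the proxy choice, and the (mu + 1) evolution scheme are illustrative only and are not taken from any of the papers listed below.

```python
import random

import torch
import torch.nn as nn

# Hypothetical toy search space: depth, width, and activation of an MLP.
SEARCH_SPACE = {"depth": [1, 2, 3], "width": [16, 32, 64], "act": ["relu", "tanh"]}

def sample_arch():
    """Draw a random architecture (one choice per dimension)."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    """Re-sample a single dimension of the parent architecture."""
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def build(arch, in_dim=32, n_classes=10):
    """Instantiate the encoded architecture as an MLP."""
    acts = {"relu": nn.ReLU, "tanh": nn.Tanh}
    layers, dim = [], in_dim
    for _ in range(arch["depth"]):
        layers += [nn.Linear(dim, arch["width"]), acts[arch["act"]]()]
        dim = arch["width"]
    layers.append(nn.Linear(dim, n_classes))
    return nn.Sequential(*layers)

def grad_norm_proxy(model, batch):
    """Zero-cost proxy: gradient norm at initialization on one batch.
    Scores the untrained network, so no training is required."""
    model(batch).sum().backward()
    return sum(p.grad.norm().item() for p in model.parameters() if p.grad is not None)

def evolve(generations=20, pop_size=8):
    """Simple (mu + 1) evolutionary loop guided by the proxy score."""
    batch = torch.randn(16, 32)  # random data stands in for a real batch
    pop = [(grad_norm_proxy(build(a), batch), a)
           for a in (sample_arch() for _ in range(pop_size))]
    for _ in range(generations):
        pop.sort(key=lambda t: t[0], reverse=True)   # best first
        parent = random.choice(pop[: pop_size // 2])[1]  # pick from top half
        child = mutate(parent)
        pop[-1] = (grad_norm_proxy(build(child), batch), child)  # replace worst
    return max(pop, key=lambda t: t[0])[1]

if __name__ == "__main__":
    print("best architecture:", evolve())
```

The design choice to illustrate: because each candidate is scored by a proxy computed at initialization rather than by full training, the loop evaluates the whole population in seconds, which is exactly the computational saving that zero-cost proxies aim for.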
Papers
Towards Neural Architecture Search for Transfer Learning in 6G Networks
Adam Orucu, Farnaz Moradi, Masoumeh Ebrahimi, Andreas Johnsson
CAP: A Context-Aware Neural Predictor for NAS
Han Ji, Yuqi Feng, Yanan Sun
Can Dense Connectivity Benefit Outlier Detection? An Odyssey with NAS
Hao Fu, Tunhou Zhang, Hai Li, Yiran Chen
Fruit Classification System with Deep Learning and Neural Architecture Search
Christine Dewi, Dhananjay Thiruvady, Nayyar Zaidi
einspace: Searching for Neural Architectures from Fundamental Operations
Linus Ericsson, Miguel Espinosa, Chenhongyi Yang, Antreas Antoniou, Amos Storkey, Shay B. Cohen, Steven McDonagh, Elliot J. Crowley
GI-NAS: Boosting Gradient Inversion Attacks through Adaptive Neural Architecture Search
Wenbo Yu, Hao Fang, Bin Chen, Xiaohang Sui, Chuan Chen, Hao Wu, Shu-Tao Xia, Ke Xu