Neural Architecture Search
Neural Architecture Search (NAS) automates the design of neural network architectures, aiming to replace the time-consuming and often suboptimal process of manual design. Current research focuses on improving search efficiency, exploring different search algorithms (including reinforcement learning, evolutionary algorithms, and gradient-based methods), and developing effective zero-cost proxies that reduce computational demands. The field is significant because it promises to accelerate the development of high-performing models across diverse applications, from image recognition and natural language processing to resource-constrained settings such as microcontrollers and in-memory computing.
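To make the search-algorithm families named above concrete, the following is a minimal, self-contained sketch of one of them, an evolutionary search loop, over a purely hypothetical toy search space. The `SEARCH_SPACE`, `mutate`, and `proxy_score` definitions are illustrative assumptions, not taken from any of the papers below; in particular, `proxy_score` is only a stand-in for where a real system would plug in a zero-cost proxy or a trained-accuracy estimate.

```python
import random

# Hypothetical toy search space: one discrete choice per dimension.
SEARCH_SPACE = {
    "num_layers": [2, 4, 6, 8],
    "width": [16, 32, 64, 128],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    """Draw a random architecture (one choice per dimension)."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    """Re-sample a single dimension of an existing architecture."""
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def proxy_score(arch):
    """Stand-in for a zero-cost proxy or accuracy estimate.
    Here it simply prefers architectures near a parameter budget;
    a real NAS system would evaluate the candidate network instead."""
    params = arch["num_layers"] * arch["width"] * arch["kernel"] ** 2
    budget = 200_000
    return -abs(params - budget)

def evolutionary_search(generations=50, population_size=20):
    """Keep the top half of each generation and mutate it to refill the population."""
    population = [sample_architecture() for _ in range(population_size)]
    for _ in range(generations):
        scored = sorted(population, key=proxy_score, reverse=True)
        parents = scored[: population_size // 2]
        children = [mutate(random.choice(parents)) for _ in parents]
        population = parents + children
    return max(population, key=proxy_score)

if __name__ == "__main__":
    print("best architecture found:", evolutionary_search())
```

Reinforcement-learning and gradient-based (differentiable) NAS follow the same outer structure, but replace the mutate-and-select step with a learned controller policy or with continuous relaxation of the architecture choices, respectively.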
Papers
Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets
Hayeon Lee, Sohyun An, Minseon Kim, Sung Ju Hwang
FSD: Fully-Specialized Detector via Neural Architecture Search
Zhe Huang, Yudian Li
Neural Architecture Search for Parameter-Efficient Fine-tuning of Large Pre-trained Language Models
Neal Lawton, Anoop Kumar, Govind Thattai, Aram Galstyan, Greg Ver Steeg
Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML
Mark Deutel, Georgios Kontes, Christopher Mutschler, Jürgen Teich
Do Not Train It: A Linear Neural Architecture Search of Graph Neural Networks
Peng Xu, Lin Zhang, Xuanzhou Liu, Jiaqi Sun, Yue Zhao, Haiqin Yang, Bei Yu
Enhancing Speech Emotion Recognition Through Differentiable Architecture Search
Thejan Rajapakshe, Rajib Rana, Sara Khalifa, Berrak Sisman, Björn Schuller
Divide-and-Conquer the NAS puzzle in Resource Constrained Federated Learning Systems
Yeshwanth Venkatesha, Youngeun Kim, Hyoungseob Park, Priyadarshini Panda
Backpropagation-Free 4D Continuous Ant-Based Neural Topology Search
AbdElRahman ElSaid, Karl Ricanek, Zeming Lyu, Alexander Ororbia, Travis Desell