Neural Architecture Search
Neural Architecture Search (NAS) automates the design of neural network architectures, aiming to replace the time-consuming and often suboptimal process of manual design. Current research focuses on improving search efficiency, exploring different search algorithms (including reinforcement learning, evolutionary algorithms, and gradient-based methods), and developing effective zero-cost proxies that rank candidate architectures without full training, sharply reducing computational demands. The field is significant because it promises to accelerate the development of high-performing models across diverse applications, from image recognition and natural language processing to resource-constrained environments such as microcontrollers and in-memory computing.
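To make the evolutionary-search and zero-cost-proxy ideas above concrete, here is a minimal, self-contained sketch of regularized-evolution-style search over a toy discrete search space. The search space, the `proxy_score` function, and all parameter values are illustrative assumptions, not taken from any of the papers listed below; a real zero-cost proxy would score an untrained network from, e.g., gradient statistics rather than a closed-form formula.

```python
import random

# Toy search space: an architecture is a choice of depth, width, and kernel size.
SEARCH_SPACE = {
    "depth": [2, 4, 8],
    "width": [16, 32, 64],
    "kernel": [3, 5, 7],
}

def sample_architecture():
    """Draw a random architecture from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch):
    """Copy the parent and resample one randomly chosen dimension."""
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def proxy_score(arch):
    # Hypothetical stand-in for a zero-cost proxy: cheap to evaluate and
    # loosely correlated with the quality we actually care about.
    return arch["depth"] * arch["width"] - 0.5 * arch["kernel"] ** 2

def evolutionary_search(generations=50, population_size=10, seed=0):
    """Tournament selection plus age-based removal (regularized evolution)."""
    random.seed(seed)
    population = [sample_architecture() for _ in range(population_size)]
    for _ in range(generations):
        # Pick the best of a random tournament, mutate it, retire the oldest.
        parent = max(random.sample(population, 3), key=proxy_score)
        population.append(mutate(parent))
        population.pop(0)
    return max(population, key=proxy_score)

best = evolutionary_search()
print(best)
```

Because the proxy replaces training in the inner loop, each generation costs microseconds instead of GPU-hours; the open question studied by the zero-cost-proxy papers below is how well such cheap scores correlate with trained accuracy.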
Papers
NAS-PRNet: Neural Architecture Search generated Phase Retrieval Net for Off-axis Quantitative Phase Imaging
Xin Shu, Mengxuan Niu, Yi Zhang, Renjie Zhou
Shortest Edit Path Crossover: A Theory-driven Solution to the Permutation Problem in Evolutionary Neural Architecture Search
Xin Qiu, Risto Miikkulainen
HQNAS: Auto CNN deployment framework for joint quantization and architecture search
Hongjiang Chen, Yang Wang, Leibo Liu, Shaojun Wei, Shouyi Yin
FAQS: Communication-efficient Federate DNN Architecture and Quantization Co-Search for personalized Hardware-aware Preferences
Hongjiang Chen, Yang Wang, Leibo Liu, Shaojun Wei, Shouyi Yin
Analyzing the Expected Hitting Time of Evolutionary Computation-based Neural Architecture Search Algorithms
Zeqiong Lv, Chao Qian, Gary G. Yen, Yanan Sun
RoHNAS: A Neural Architecture Search Framework with Conjoint Optimization for Adversarial Robustness and Hardware Efficiency of Convolutional and Capsule Networks
Alberto Marchisio, Vojtech Mrazek, Andrea Massa, Beatrice Bussolino, Maurizio Martina, Muhammad Shafique
NAS-Bench-Suite-Zero: Accelerating Research on Zero Cost Proxies
Arjun Krishnakumar, Colin White, Arber Zela, Renbo Tu, Mahmoud Safari, Frank Hutter
POPNASv2: An Efficient Multi-Objective Neural Architecture Search Technique
Andrea Falanti, Eugenio Lomurno, Stefano Samele, Danilo Ardagna, Matteo Matteucci
Energy Consumption of Neural Networks on NVIDIA Edge Boards: an Empirical Model
Seyyidahmed Lahmer, Aria Khoshsirat, Michele Rossi, Andrea Zanella
Toward Edge-Efficient Dense Predictions with Synergistic Multi-Task Neural Architecture Search
Thanh Vu, Yanqi Zhou, Chunfeng Wen, Yueqi Li, Jan-Michael Frahm