Optimal Neural Networks
Finding optimal neural networks means choosing architectures and tuning hyperparameters to maximize task performance while minimizing computational cost and model complexity. Current research focuses on automating this search through neural architecture search (NAS), often driven by evolutionary algorithms or Bayesian optimization, and on efficient model families such as spiking neural networks (SNNs) and graph neural networks (GNNs). These efforts aim to improve both the efficiency and the accuracy of neural networks across applications ranging from image classification and natural language processing to resource-constrained settings such as TinyML. The ultimate goal is to develop methods that reliably and efficiently produce high-performing networks tailored to specific tasks and resource budgets.
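To make the evolutionary-NAS idea concrete, below is a minimal sketch of a mutation-based search loop over a toy architecture space. Everything here is illustrative rather than taken from the papers listed: the search space (SEARCH_SPACE), the surrogate fitness function, and the (mu + lambda)-style selection scheme are all assumptions; in a real NAS run, fitness() would train or estimate each candidate network and return, e.g., validation accuracy minus a resource penalty.

```python
import random

# Hypothetical search space: depth, width, and kernel-size choices.
SEARCH_SPACE = {
    "depth": [2, 4, 8, 16],
    "width": [16, 32, 64, 128],
    "kernel": [1, 3, 5, 7],
}


def random_arch():
    """Sample a random architecture configuration from the search space."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}


def mutate(arch):
    """Re-sample one randomly chosen hyperparameter (the mutation step)."""
    child = dict(arch)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child


def fitness(arch):
    """Stand-in objective: reward model capacity, penalize parameter cost.

    A real NAS objective would instead train (or cheaply estimate) the
    candidate network and score it on held-out data, possibly with a
    hardware-cost term for TinyML-style constraints.
    """
    capacity = arch["depth"] * arch["width"]
    cost = arch["depth"] * arch["width"] * arch["kernel"] ** 2
    return capacity / 100.0 - cost / 10000.0


def evolve(generations=20, population_size=10, keep=3):
    """Simple (mu + lambda)-style evolutionary search loop."""
    population = [random_arch() for _ in range(population_size)]
    for _ in range(generations):
        # Keep the top `keep` candidates, refill the rest via mutation.
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]
        children = [
            mutate(random.choice(parents))
            for _ in range(population_size - keep)
        ]
        population = parents + children
    return max(population, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print("Best architecture:", best, "fitness:", round(fitness(best), 3))
```

The same loop structure carries over to Bayesian-optimization NAS by replacing the mutate-and-select step with a surrogate model that proposes the next candidate to evaluate.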
Papers
Improving Stability and Performance of Spiking Neural Networks through Enhancing Temporal Consistency
Dongcheng Zhao, Guobin Shen, Yiting Dong, Yang Li, Yi Zeng
Combining Multi-Objective Bayesian Optimization with Reinforcement Learning for TinyML
Mark Deutel, Georgios Kontes, Christopher Mutschler, Jürgen Teich