Fast Neural Network

Fast neural networks aim to accelerate neural network training and inference, addressing computational bottlenecks across a wide range of applications. Research focuses on designing lightweight architectures with few parameters, on efficient algorithms such as disentangled diffusion models for dataset distillation and novel convolution operators that raise achievable FLOPS (throughput), and on alternative hardware implementations such as silicon photonics and metamaterials for faster, more energy-efficient computation. These advances are crucial for deploying deep learning in resource-constrained environments such as edge computing and IoT devices, and for handling increasingly large datasets in fields such as particle physics and medical image analysis.
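
As a concrete illustration of the lightweight-architecture idea, the sketch below compares the parameter count of a standard convolution with a depthwise separable convolution, a common building block in efficient networks. The PyTorch usage and the channel/kernel sizes are illustrative assumptions, not taken from any specific paper in this collection.

```python
# Sketch: a standard convolution vs. a depthwise separable convolution.
# Channel and kernel sizes are illustrative assumptions only.
import torch
import torch.nn as nn

in_ch, out_ch, k = 64, 128, 3

# Standard 3x3 convolution: in_ch * out_ch * k * k weights.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=1, bias=False)

# Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
# followed by a 1x1 pointwise conv, trading a small accuracy cost for
# far fewer parameters and FLOPs.
depthwise_separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=k, padding=1, groups=in_ch, bias=False),
    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
)

def n_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

x = torch.randn(1, in_ch, 32, 32)
assert standard(x).shape == depthwise_separable(x).shape  # same output shape

print(f"standard conv params:            {n_params(standard):,}")            # 73,728
print(f"depthwise separable conv params: {n_params(depthwise_separable):,}")  # 8,768
```

Here the factorized version needs roughly 8x fewer weights (and proportionally fewer FLOPs) for the same input/output shape, which is the kind of trade-off lightweight architectures exploit.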

Papers