Fast Neural Network
Fast neural networks aim to accelerate training and inference, addressing computational bottlenecks across a range of applications. Research focuses on lightweight architectures with few parameters, efficient algorithms such as disentangled diffusion models for dataset distillation, convolution variants that raise effective throughput (FLOPS), and alternative hardware such as silicon photonics and metamaterials for faster, more energy-efficient computation. These advances matter for deploying deep learning in resource-constrained settings like edge computing and IoT devices, and for handling ever-larger datasets in fields such as particle physics and medical image analysis.
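As a concrete illustration of how lightweight architectures cut parameter counts, a common trick (used in MobileNet-style networks, not tied to any specific paper listed here) is to replace a standard convolution with a depthwise-separable one. The sketch below just counts parameters; the layer sizes are hypothetical.

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one filter per input channel)
    followed by a 1x1 pointwise conv mixing channels."""
    return c_in * k * k + c_in * c_out

if __name__ == "__main__":
    c_in, c_out, k = 128, 256, 3  # hypothetical layer sizes
    std = conv_params(c_in, c_out, k)
    sep = depthwise_separable_params(c_in, c_out, k)
    print(std, sep, round(std / sep, 1))  # separable is ~8.7x smaller here
```

The saving factor is roughly `k*k` when `c_out` is large, which is why depthwise-separable blocks dominate mobile and edge architectures.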
Papers

Nine papers, published between November 12, 2021 and July 21, 2024.