Layer by Layer
Layer-by-layer processing, a fundamental approach in many deep learning models, passes information sequentially through network layers, with each layer consuming the previous layer's output. Current research focuses on optimizing this process: exploring asynchronous execution to improve efficiency and robustness in spiking neural networks and vision transformers, and developing novel training methods that accommodate such asynchronous execution. These efforts aim to reduce computational cost, improve energy efficiency, and make deep learning models more resilient to hardware limitations and unexpected disruptions, influencing both the performance of AI systems and the design of efficient hardware accelerators.
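To make the sequential dependency concrete, the following is a minimal sketch of a layer-by-layer forward pass; the layer sizes, random weights, and ReLU activation are illustrative assumptions, not drawn from the papers below.

    import numpy as np

    # Minimal sketch of layer-by-layer processing: each layer's output
    # becomes the next layer's input, so the layers run strictly in order.
    # Sizes and activation are arbitrary choices for illustration.
    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(x, 0.0)

    # One weight matrix per layer (a 3-layer stack: 8 -> 16 -> 16 -> 4).
    layer_sizes = [8, 16, 16, 4]
    weights = [rng.standard_normal((m, n)) * 0.1
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(x, weights):
        """Propagate x through the network one layer at a time."""
        for w in weights:
            # Layer i must finish before layer i+1 can start.
            x = relu(x @ w)
        return x

    x = rng.standard_normal(layer_sizes[0])
    print(forward(x, weights))

Because each loop iteration depends on the previous one, computation cannot proceed across layers in parallel; relaxing exactly this dependency is what the asynchronous-execution work mentioned above targets.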
Papers
A Two-Scale Complexity Measure for Deep Learning Models
Massimiliano Datres, Gian Paolo Leonardi, Alessio Figalli, David Sutter
MorphGrower: A Synchronized Layer-by-layer Growing Approach for Plausible Neuronal Morphology Generation
Nianzu Yang, Kaipeng Zeng, Haotian Lu, Yexin Wu, Zexin Yuan, Danni Chen, Shengdian Jiang, Jiaxiang Wu, Yimin Wang, Junchi Yan