Multi-Layer
Multi-layer architectures are a central theme in contemporary machine learning: organizing computational units into stacked layers improves the efficiency and accuracy of a wide range of models. Current research focuses on optimizing these architectures, on alternatives to the traditional multilayer perceptron (MLP) such as Kolmogorov-Arnold Networks (KANs) and Fourier Analysis Networks (FANs), and on techniques such as layer distillation and frequency shifting that raise performance while reducing computational cost. These advances have significant implications for diverse applications, including music generation, image processing, natural language processing, and scientific computing, where they enable faster and more accurate models.
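To make the contrast between a standard MLP layer and a KAN-style layer concrete, the minimal NumPy sketch below implements both. It is illustrative only: the function names, shapes, and the fixed [x, x^2, sin x] basis are assumptions chosen for brevity; actual KANs learn spline-based univariate functions on each edge rather than coefficients over a fixed basis.

import numpy as np

def mlp_layer(x, W, b):
    """Standard MLP layer: a fixed ReLU nonlinearity applied to a learned linear map."""
    return np.maximum(0.0, W @ x + b)

def kan_layer(x, coeffs):
    """Toy KAN-style layer: each edge (i, j) applies its own learned univariate
    function phi_ij(x_j) = sum_k coeffs[i, j, k] * basis_k(x_j), and the outputs
    are summed over the inputs j. Basis here (an assumption): [x, x^2, sin x]."""
    basis = np.stack([x, x**2, np.sin(x)])   # shape (n_basis, in_dim)
    # coeffs has shape (out_dim, in_dim, n_basis); sum over inputs i and basis k
    return np.einsum("oik,ki->o", coeffs, basis)

rng = np.random.default_rng(0)
x = rng.normal(size=4)                        # a single 4-dimensional input
W, b = rng.normal(size=(8, 4)), np.zeros(8)   # MLP layer: 4 -> 8
coeffs = 0.1 * rng.normal(size=(8, 4, 3))     # KAN-style layer: 4 -> 8

print(mlp_layer(x, W, b).shape, kan_layer(x, coeffs).shape)  # (8,) (8,)

The key structural difference the sketch shows is where the learning happens: the MLP learns the linear map and keeps the nonlinearity fixed, while the KAN-style layer keeps the linear combination fixed (a sum) and learns the per-edge nonlinearities.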
Papers
When Layers Play the Lottery, all Tickets Win at Initialization
Artur Jordao, George Correa de Araujo, Helena de Almeida Maia, Helio Pedrini
MLPGradientFlow: going with the flow of multilayer perceptrons (and finding minima fast and accurately)
Johanni Brea, Flavio Martinelli, Berfin Şimşek, Wulfram Gerstner
Functional Neural Networks: Shift invariant models for functional data with applications to EEG classification
Florian Heinrichs, Mavin Heim, Corinna Weber
A Novel Multi-Layer Framework for BVLoS Drone Operation: A Preliminary Study
Francesco Betti Sorbelli, Punyasha Chatterjee, Federico Corò, Lorenzo Palazzetti, Cristina M. Pinotti