Neural Architecture
Neural architecture research focuses on designing and optimizing the structure of artificial neural networks to improve efficiency, accuracy, and interpretability. Current efforts concentrate on developing novel architectures such as Kolmogorov-Arnold Networks and transformers, employing efficient search algorithms (e.g., evolutionary algorithms, generative flows) to explore vast design spaces, and analyzing the representational similarity and training efficiency of different models. These advances are crucial for deploying deep learning in resource-constrained environments and for understanding how neural networks learn and generalize, with impact ranging from computer vision and natural language processing to scientific computing and deployment on edge devices.
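To make the evolutionary search loop mentioned above concrete, here is a minimal sketch in Python. The search space (MLP hidden-layer widths), the mutation operators, and the proxy_fitness objective are hypothetical stand-ins chosen for illustration; they are not drawn from any of the listed papers, and a real search would score candidates by (partially) training them on validation data.

```python
import random

WIDTHS = [16, 32, 64, 128, 256]   # candidate hidden-layer widths (assumed search space)
MAX_DEPTH = 6

def random_arch():
    """Sample an architecture, encoded as a list of hidden-layer widths."""
    depth = random.randint(1, MAX_DEPTH)
    return [random.choice(WIDTHS) for _ in range(depth)]

def mutate(arch):
    """Perturb one architecture: resize, insert, or drop a layer."""
    arch = list(arch)
    op = random.choice(["resize", "insert", "drop"])
    if op == "resize":
        arch[random.randrange(len(arch))] = random.choice(WIDTHS)
    elif op == "insert" and len(arch) < MAX_DEPTH:
        arch.insert(random.randrange(len(arch) + 1), random.choice(WIDTHS))
    elif op == "drop" and len(arch) > 1:
        arch.pop(random.randrange(len(arch)))
    return arch

def proxy_fitness(arch):
    """Stand-in objective: reward capacity, penalize parameter count.
    In practice this would be validation accuracy from (partial) training."""
    params = sum(a * b for a, b in zip([8] + arch, arch + [1]))  # assumes 8 inputs, 1 output
    return sum(arch) / (1.0 + params / 10_000)

def evolve(pop_size=20, generations=30):
    """Truncation-selection evolutionary loop over architectures."""
    population = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=proxy_fitness, reverse=True)
        parents = scored[: pop_size // 4]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=proxy_fitness)

if __name__ == "__main__":
    best = evolve()
    print("best architecture:", best, "fitness:", round(proxy_fitness(best), 3))
```

The same loop structure carries over to realistic settings: only the encoding, the mutation operators, and the fitness evaluation change, which is why evolutionary methods scale to the large design spaces these papers explore.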
Papers
A Systematic Review of Generalization Research in Medical Image Classification
Sarah Matta, Mathieu Lamard, Philippe Zhang, Alexandre Le Guilcher, Laurent Borderie, Béatrice Cochener, Gwenolé Quellec
PARMESAN: Parameter-Free Memory Search and Transduction for Dense Prediction Tasks
Philip Matthias Winter, Maria Wimmer, David Major, Dimitrios Lenis, Astrid Berg, Theresa Neubauer, Gaia Romana De Paolis, Johannes Novotny, Sophia Ulonska, Katja Bühler
Towards Decoding Brain Activity During Passive Listening of Speech
Milán András Fodor, Tamás Gábor Csapó, Frigyes Viktor Arthur
CARTE: Pretraining and Transfer for Tabular Learning
Myung Jun Kim, Léo Grinsztajn, Gaël Varoquaux
m2mKD: Module-to-Module Knowledge Distillation for Modular Transformers
Ka Man Lo, Yiming Liang, Wenyu Du, Yuantao Fan, Zili Wang, Wenhao Huang, Lei Ma, Jie Fu