Deep Neural Network
Deep neural networks (DNNs) are layered computational models, loosely inspired by the brain, that learn representations from data to perform tasks with high accuracy and efficiency. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, as well as the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning). This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
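To make the term concrete, the following is a minimal sketch of a DNN (a multilayer perceptron) forward pass in NumPy. The layer sizes, ReLU activation, and He-style initialization are illustrative choices, not taken from any of the papers listed below.

```python
import numpy as np

def relu(x):
    # Elementwise rectified linear activation.
    return np.maximum(0.0, x)

def init_dnn(layer_sizes, seed=0):
    """Initialize weights and biases for each layer (He-style scaling)."""
    rng = np.random.default_rng(seed)
    params = []
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w = rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))
        b = np.zeros(n_out)
        params.append((w, b))
    return params

def forward(params, x):
    """Apply each hidden layer with ReLU; the final layer is linear."""
    for w, b in params[:-1]:
        x = relu(x @ w + b)
    w, b = params[-1]
    return x @ w + b

# Three-layer network mapping 4 input features to 3 outputs.
params = init_dnn([4, 16, 16, 3])
out = forward(params, np.ones((2, 4)))
print(out.shape)  # (2, 3)
```

Training such a model would additionally require a loss function and gradient-based optimization (where strategies like weight decay enter); the sketch covers only the forward computation.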
Papers
A Framework to Enable Algorithmic Design Choice Exploration in DNNs
Timothy L. Cronin IV, Sanmukh Kuppannagari
BA-Net: Bridge Attention in Deep Neural Networks
Ronghui Zhang, Runzong Zou, Yue Zhao, Zirui Zhang, Junzhou Chen, Yue Cao, Chuan Hu, Houbing Song
Learning to Compress: Local Rank and Information Compression in Deep Neural Networks
Niket Patel, Ravid Shwartz-Ziv
Explainability of Deep Neural Networks for Brain Tumor Detection
S. Park, J. Kim
Black Boxes and Looking Glasses: Multilevel Symmetries, Reflection Planes, and Convex Optimization in Deep Networks
Emi Zeger, Mert Pilanci
Visualising Feature Learning in Deep Neural Networks by Diagonalizing the Forward Feature Map
Yoonsoo Nam, Chris Mingard, Seok Hyeong Lee, Soufiane Hayou, Ard Louis
Equivariant Neural Functional Networks for Transformers
Viet-Hoang Tran, Thieu N. Vo, An Nguyen The, Tho Tran Huu, Minh-Khoi Nguyen-Nhat, Thanh Tran, Duy-Tung Pham, Tan Minh Nguyen