Deep Neural Network
Deep neural networks (DNNs) are layered computational models, loosely inspired by the brain's learning capabilities, whose development centers on achieving high accuracy and efficiency across diverse tasks. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, alongside the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning). This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
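As a concrete illustration of two of the training strategies named above, the following minimal sketch (assuming PyTorch; the network sizes, hyperparameters, and function names are illustrative and not drawn from any of the papers listed below) trains a small student network with weight decay, applied through the optimizer's L2 penalty, and a knowledge-distillation loss against a fixed teacher.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallDNN(nn.Module):
    """Toy fully connected network standing in for a larger architecture."""
    def __init__(self, in_dim=784, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

teacher = SmallDNN(hidden=512)  # stands in for a pretrained, larger teacher
student = SmallDNN(hidden=128)

# Weight decay enters through the optimizer's L2 penalty on the parameters.
opt = torch.optim.SGD(student.parameters(), lr=0.1, weight_decay=1e-4)

def distillation_step(x, y, T=4.0, alpha=0.5):
    """One step mixing the hard-label loss with a softened teacher target."""
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    hard_loss = F.cross_entropy(student_logits, y)
    # KL divergence between temperature-softened distributions
    # (the standard Hinton et al., 2015 formulation).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    loss = alpha * hard_loss + (1 - alpha) * soft_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with random stand-in data:
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(distillation_step(x, y))

The temperature T softens both distributions so the student learns the teacher's relative class similarities, while alpha balances imitation against the ground-truth labels; both are tuning knobs rather than fixed values.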
Papers
UnLearning from Experience to Avoid Spurious Correlations
Jeff Mitchell, Jesús Martínez del Rincón, Niall McLaughlin
ForeCal: Random Forest-based Calibration for DNNs
Dhruv Nigam
Adaptive Class Emergence Training: Enhancing Neural Network Stability and Generalization through Progressive Target Evolution
Jaouad Dabounou
Optimization and Deployment of Deep Neural Networks for PPG-based Blood Pressure Estimation Targeting Low-power Wearables
Alessio Burrello, Francesco Carlucci, Giovanni Pollo, Xiaying Wang, Massimo Poncino, Enrico Macii, Luca Benini, Daniele Jahier Pagliari
Convolutional Networks as Extremely Small Foundation Models: Visual Prompting and Theoretical Perspective
Jianqiao Wangni
Beyond Unconstrained Features: Neural Collapse for Shallow Neural Networks with General Data
Wanli Hong, Shuyang Ling
CARIn: Constraint-Aware and Responsive Inference on Heterogeneous Devices for Single- and Multi-DNN Workloads
Ioannis Panopoulos, Stylianos I. Venieris, Iakovos S. Venieris
DNN-GDITD: Out-of-distribution detection via Deep Neural Network based Gaussian Descriptor for Imbalanced Tabular Data
Priyanka Chudasama, Anil Surisetty, Aakarsh Malhotra, Alok Singh
Trust And Balance: Few Trusted Samples Pseudo-Labeling and Temperature Scaled Loss for Effective Source-Free Unsupervised Domain Adaptation
Andrea Maracani, Lorenzo Rosasco, Lorenzo Natale
Streamlined optical training of large-scale modern deep learning architectures with direct feedback alignment
Ziao Wang, Kilian Müller, Matthew Filipovich, Julien Launay, Ruben Ohana, Gustave Pariente, Safa Mokaadi, Charles Brossollet, Fabien Moreau, and 6 more authors