Deep Neural Network
Deep neural networks (DNNs) are complex computational models that aim to mimic the human brain's learning capabilities, with the primary goal of achieving high accuracy and efficiency across a wide variety of tasks. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, as well as the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, and active learning). This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
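To make two of the training strategies named above concrete, here is a minimal PyTorch sketch combining weight decay (applied as L2 regularization inside the optimizer) with knowledge distillation from a frozen teacher. The model sizes, temperature `T`, and mixing weight `alpha` are illustrative assumptions, not values taken from any of the papers listed below.

```python
# Minimal sketch (hypothetical models and hyperparameters): weight decay
# plus knowledge distillation, two training strategies noted above.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()  # the teacher is frozen during distillation

# Weight decay is applied as L2 regularization by the optimizer.
optimizer = torch.optim.SGD(student.parameters(), lr=0.1, weight_decay=1e-4)

T, alpha = 4.0, 0.5  # distillation temperature and loss mix (assumed values)

def distillation_step(x, y):
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # Soft-target loss: KL divergence between temperature-scaled distributions,
    # scaled by T^2 as in standard knowledge distillation.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, y)  # ordinary supervised loss
    loss = alpha * soft + (1 - alpha) * hard
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random data standing in for a real dataset.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(distillation_step(x, y))
```

Scaling the KL term by T^2 keeps the gradient magnitude of the soft-target loss comparable to that of the hard-label loss as the temperature changes.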
Papers
Overcoming the Stability Gap in Continual Learning
Md Yousuf Harun, Christopher Kanan

Exploring Robustness of Image Recognition Models on Hardware Accelerators
Nikolaos Louloudakis, Perry Gibson, José Cano, Ajitha Rajan

Uniform Convergence of Deep Neural Networks with Lipschitz Continuous Activation Functions and Variable Widths
Yuesheng Xu, Haizhang Zhang

ErfReLU: Adaptive Activation Function for Deep Neural Network
Ashish Rajanand, Pradeep Singh

Network Degeneracy as an Indicator of Training Performance: Comparing Finite and Infinite Width Angle Predictions
Cameron Jakub, Mihai Nica

DVFO: Learning-Based DVFS for Energy-Efficient Edge-Cloud Collaborative Inference
Ziyang Zhang, Yang Zhao, Huan Li, Changyao Lin, Jie Liu

Overview of Deep Learning Methods for Retinal Vessel Segmentation
Gorana Gojić, Ognjen Kundačina, Dragiša Mišković, Dinu Dragan

Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers
Ruotong Wang, Hongrui Chen, Zihao Zhu, Li Liu, Baoyuan Wu

Initial Guessing Bias: How Untrained Networks Favor Some Classes
Emanuele Francazi, Aurelien Lucchi, Marco Baity-Jesi

A New PHO-rmula for Improved Performance of Semi-Structured Networks
David Rügamer

(Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy
Elan Rosenfeld, Saurabh Garg

Inconsistency, Instability, and Generalization Gap of Deep Neural Network Training
Rie Johnson, Tong Zhang

Special Session: Approximation and Fault Resiliency of DNN Accelerators
Mohammad Hasan Ahmadilivani, Mario Barbareschi, Salvatore Barone, Alberto Bosio, Masoud Daneshtalab, Salvatore Della Torca, Gabriele Gavarini, +5

Power Control with QoS Guarantees: A Differentiable Projection-based Unsupervised Learning Framework
Mehrazin Alizadeh, Hina Tabassum

The Tunnel Effect: Building Data Representations in Deep Neural Networks
Wojciech Masarczyk, Mateusz Ostaszewski, Ehsan Imani, Razvan Pascanu, Piotr Miłoś, Tomasz Trzciński

Benign Overfitting in Deep Neural Networks under Lazy Training
Zhenyu Zhu, Fanghui Liu, Grigorios G Chrysos, Francesco Locatello, Volkan Cevher

What Can We Learn from Unlearnable Datasets?
Pedro Sandoval-Segura, Vasu Singla, Jonas Geiping, Micah Goldblum, Tom Goldstein

Improving Generalization of Complex Models under Unbounded Loss Using PAC-Bayes Bounds
Xitong Zhang, Avrajit Ghosh, Guangliang Liu, Rongrong Wang