Deep Neural Networks
Deep neural networks (DNNs) are layered computational models, loosely inspired by the brain, that learn representations from data in pursuit of high accuracy and efficiency across a wide range of tasks. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, as well as the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning). This understanding is crucial for improving DNN performance, robustness (to adversarial attacks and noisy data, among others), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
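As one concrete illustration of the training strategies mentioned above, the minimal sketch below shows decoupled weight decay (AdamW) in a PyTorch training step. The model architecture, hyperparameters, and dummy batch are illustrative assumptions, not taken from any paper listed here.

```python
import torch
import torch.nn as nn

# Toy classifier; sizes are arbitrary placeholders.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# AdamW applies weight decay directly to the parameters (decoupled from
# the gradient-based update), rather than adding an L2 term to the loss.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)  # task loss only; decay is handled by the optimizer
    loss.backward()
    optimizer.step()             # weights are additionally shrunk toward zero here
    return loss.item()

# Dummy batch to show the call signature.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(train_step(x, y))
```

Decoupling the decay from the adaptive gradient step is what distinguishes AdamW from plain Adam with an L2 penalty; empirically it often regularizes more predictably, which is one reason weight decay features prominently in studies of training dynamics.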
Papers
LLS: Local Learning Rule for Deep Neural Networks Inspired by Neural Activity Synchronization
Marco Paul E. Apolinario, Arani Roy, Kaushik Roy
PriCE: Privacy-Preserving and Cost-Effective Scheduling for Parallelizing the Large Medical Image Processing Workflow over Hybrid Clouds
Yuandou Wang, Neel Kanwal, Kjersti Engan, Chunming Rong, Paola Grosso, Zhiming Zhao
Conditional Variational Auto Encoder Based Dynamic Motion for Multi-task Imitation Learning
Binzhao Xu, Muhayy Ud Din, Irfan Hussain
From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks
Jacob Russin, Sam Whitman McGrath, Danielle J. Williams, Lotem Elber-Dorozko
Overcoming the Challenges of Batch Normalization in Federated Learning
Rachid Guerraoui, Rafael Pinot, Geovani Rizk, John Stephan, François Taiani
PrivCirNet: Efficient Private Inference via Block Circulant Transformation
Tianshi Xu, Lemeng Wu, Runsheng Wang, Meng Li
Neural Collapse versus Low-rank Bias: Is Deep Neural Collapse Really Optimal?
Peter Súkeník, Marco Mondelli, Christoph Lampert
Improving Generalization of Deep Neural Networks by Optimum Shifting
Yuyan Zhou, Ye Li, Lei Feng, Sheng-Jun Huang
Progress Measures for Grokking on Real-world Tasks
Satvik Golechha
Visualizing, Rethinking, and Mining the Loss Landscape of Deep Neural Networks
Xin-Chun Li, Lan Li, De-Chuan Zhan
Exploring and Exploiting the Asymmetric Valley of Deep Neural Networks
Xin-Chun Li, Jin-Lin Tang, Bo Zhang, Lan Li, De-Chuan Zhan