Deep Neural Network
Deep neural networks (DNNs) are complex computational models aiming to mimic the human brain's learning capabilities, primarily focusing on achieving high accuracy and efficiency in various tasks. Current research emphasizes understanding DNN training dynamics, including phenomena like neural collapse and the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning). This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in diverse applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
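As a concrete illustration of one of the training strategies mentioned above, the following is a minimal sketch (not taken from any listed paper) of a single SGD update with L2 weight decay, which regularizes a model by shrinking its weights toward zero at each step:

```python
import numpy as np

def sgd_step_with_weight_decay(w, grad, lr=0.1, weight_decay=0.01):
    """One SGD update with L2 weight decay.

    The decay term adds weight_decay * w to the loss gradient,
    pulling the parameters toward zero and discouraging large weights.
    """
    return w - lr * (grad + weight_decay * w)

# Illustrative parameters and gradient (hypothetical values).
w = np.array([1.0, -2.0])
grad = np.array([0.5, 0.5])
w_new = sgd_step_with_weight_decay(w, grad)
# Each weight moves against the gradient and is slightly shrunk toward zero.
```

In practice the same effect is obtained by passing a `weight_decay` argument to an optimizer in frameworks such as PyTorch; this sketch only makes the underlying arithmetic explicit.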
Papers
Length-scale study in deep learning prediction for non-small cell lung cancer brain metastasis
Haowen Zhou, Steven Lin, Mark Watson, Cory T. Bernadt, Oumeng Zhang, Ramaswamy Govindan, Richard J. Cote, Changhuei Yang
On the Use of Anchoring for Training Vision Models
Vivek Narayanaswamy, Kowshik Thopalli, Rushil Anirudh, Yamen Mubarka, Wesam Sakla, Jayaraman J. Thiagarajan
SAM-VMNet: Deep Neural Networks For Coronary Angiography Vessel Segmentation
Xueying Zeng, Baixiang Huang, Yu Luo, Guangyu Wei, Songyan He, Yushuang Shao
Stochastic Resetting Mitigates Latent Gradient Bias of SGD from Label Noise
Youngkyoung Bae, Yeongwoo Song, Hawoong Jeong
subMFL: Compatible subModel Generation for Federated Learning in Device Heterogeneous Environment
Zeyneddin Oz, Ceylan Soygul Oz, Abdollah Malekjafarian, Nima Afraz, Fatemeh Golpayegani
Accurate and Reliable Predictions with Mutual-Transport Ensemble
Han Liu, Peng Cui, Bingning Wang, Jun Zhu, Xiaolin Hu
Exploiting Chaotic Dynamics as Deep Neural Networks
Shuhong Liu, Nozomi Akashi, Qingyao Huang, Yasuo Kuniyoshi, Kohei Nakajima
State Space Models are Comparable to Transformers in Estimating Functions with Dynamic Smoothness
Naoki Nishikawa, Taiji Suzuki
EntProp: High Entropy Propagation for Improving Accuracy and Robustness
Shohei Enomoto
Towards a theory of how the structure of language is acquired by deep neural networks
Francesco Cagnetta, Matthieu Wyart
2BP: 2-Stage Backpropagation
Christopher Rae, Joseph K. L. Lee, James Richings
Trustworthy DNN Partition for Blockchain-enabled Digital Twin in Wireless IIoT Networks
Xiumei Deng, Jun Li, Long Shi, Kang Wei, Ming Ding, Yumeng Shao, Wen Chen, Shi Jin
Understanding Forgetting in Continual Learning with Linear Regression
Meng Ding, Kaiyi Ji, Di Wang, Jinhui Xu
How Does Perfect Fitting Affect Representation Learning? On the Training Dynamics of Representations in Deep Neural Networks
Yuval Sharon, Yehuda Dar
Large Deviations of Gaussian Neural Networks with ReLU activation
Quirin Vogel
Front-propagation Algorithm: Explainable AI Technique for Extracting Linear Function Approximations from Neural Networks
Javier Viaña
Maintaining and Managing Road Quality: Using MLP and DNN
Makgotso Jacqueline Maotwana
Method and Software Tool for Generating Artificial Databases of Biomedical Images Based on Deep Neural Networks
Oleh Berezsky, Petro Liashchynskyi, Oleh Pitsun, Grygoriy Melnyk