Deep Neural Network
Deep neural networks (DNNs) are computational models, loosely inspired by the brain, that learn hierarchical representations from data, with the goal of achieving high accuracy and efficiency across a wide range of tasks. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, as well as the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning). This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
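To make the term concrete: a deep neural network, at its simplest, is a stack of affine transformations with nonlinearities between them. The following minimal sketch (illustrative only, not drawn from any paper listed below; layer sizes and initialization are arbitrary choices) shows a forward pass through such a stack using NumPy:

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied between layers
    return np.maximum(0.0, x)

def init_layer(rng, n_in, n_out):
    # He initialization, a common choice for ReLU networks
    w = rng.standard_normal((n_in, n_out)) * np.sqrt(2.0 / n_in)
    b = np.zeros(n_out)
    return w, b

def forward(layers, x):
    # Hidden layers use ReLU; the final layer stays linear (logits)
    for w, b in layers[:-1]:
        x = relu(x @ w + b)
    w, b = layers[-1]
    return x @ w + b

rng = np.random.default_rng(0)
sizes = [8, 16, 16, 3]  # input -> two hidden layers -> 3 output logits
layers = [init_layer(rng, a, b) for a, b in zip(sizes[:-1], sizes[1:])]

logits = forward(layers, rng.standard_normal((4, 8)))  # batch of 4 inputs
print(logits.shape)  # (4, 3)
```

"Depth" here is simply the number of stacked layers; much of the research surveyed below studies how this depth, together with the training procedure, shapes what the network learns.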
Papers
A2-DIDM: Privacy-preserving Accumulator-enabled Auditing for Distributed Identity of DNN Model
Tianxiu Xie, Keke Gai, Jing Yu, Liehuang Zhu, Kim-Kwang Raymond Choo
A simple theory for training response of deep neural networks
Kenichi Nakazato
Philosophy of Cognitive Science in the Age of Deep Learning
Raphaël Millière
An Improved Finite-time Analysis of Temporal Difference Learning with Deep Neural Networks
Zhifa Ke, Zaiwen Wen, Junyu Zhang
Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method
Matina Mahdizadeh Sani, Ali Royat, Mahdieh Soleymani Baghshah
Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers
Johann Schmidt, Sebastian Stober
The Role of Predictive Uncertainty and Diversity in Embodied AI and Robot Learning
Ransalu Senanayake
Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition
Xitong Zhang, Ismail R. Alkhouri, Rongrong Wang
Development of Skip Connection in Deep Neural Networks for Computer Vision and Medical Image Analysis: A Survey
Guoping Xu, Xiaxia Wang, Xinglong Wu, Xuesong Leng, Yongchao Xu
Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks
Mikkel Jordahn, Pablo M. Olmos
Potential Energy based Mixture Model for Noisy Label Learning
Zijia Wang, Wenbin Yang, Zhisong Liu, Zhen Jia
Progressive Feedforward Collapse of ResNet Training
Sicong Wang, Kuo Gai, Shihua Zhang