Deep Neural Network
Deep neural networks (DNNs) are computational models, loosely inspired by the brain's learning capabilities, designed to achieve high accuracy and efficiency across a wide range of tasks. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, as well as the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning). This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
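Of the training strategies mentioned above, weight decay is the simplest to illustrate. A minimal, self-contained sketch of one SGD update with (coupled) weight decay is shown below; the function name and toy values are illustrative only and do not come from any paper listed here.

```python
# Minimal sketch: one SGD step with coupled weight decay (L2 regularization).
# The update rule is w <- w - lr * (g + weight_decay * w), which shrinks
# weights toward zero in addition to following the loss gradient.

def sgd_step(weights, grads, lr=0.1, weight_decay=0.01):
    """Apply one SGD update with weight decay to a list of scalar weights."""
    return [w - lr * (g + weight_decay * w) for w, g in zip(weights, grads)]

# Toy example: with zero gradients, weight decay alone scales each weight
# by (1 - lr * weight_decay) = 0.999 per step.
w = sgd_step([1.0, -2.0], grads=[0.0, 0.0])
print(w)
```

Decoupled variants (as in AdamW) apply the decay term directly to the weights rather than folding it into the gradient, which changes its interaction with adaptive learning rates.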
Papers
RNC: Efficient RRAM-aware NAS and Compilation for DNNs on Resource-Constrained Edge Devices
Kam Chi Loong, Shihao Han, Sishuo Liu, Ning Lin, Zhongrui Wang
Understanding the Benefits of SimCLR Pre-Training in Two-Layer Convolutional Neural Networks
Han Zhang, Yuan Cao
Efficient Noise Mitigation for Enhancing Inference Accuracy in DNNs on Mixed-Signal Accelerators
Seyedarmin Azizi, Mohammad Erfan Sadeghi, Mehdi Kamal, Massoud Pedram
The Effect of Lossy Compression on 3D Medical Images Segmentation with Deep Learning
Anvar Kurmukov, Bogdan Zavolovich, Aleksandra Dalechina, Vladislav Proskurov, Boris Shirokikh
Verified Relative Safety Margins for Neural Network Twins
Anahita Baninajjar, Kamran Hosseini, Ahmed Rezine, Amir Aminifar
Stochastic Subsampling With Average Pooling
Bum Jun Kim, Sang Woo Kim