Deep Neural Network
Deep neural networks (DNNs) are layered computational models loosely inspired by the brain's learning capabilities, designed to deliver high accuracy and efficiency across a wide range of tasks. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, as well as the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning); a brief sketch of two of these strategies follows below. This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
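As a concrete illustration of two of the training strategies mentioned above, the minimal PyTorch sketch below combines weight decay (set through the optimizer) with a standard knowledge-distillation loss. The tiny placeholder network, temperature, loss weights, and random data are illustrative assumptions, not taken from any of the papers listed here.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Softened KL term (knowledge distillation) plus a hard-label cross-entropy term.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # T^2 rescaling keeps gradient magnitudes comparable across temperatures
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Weight decay is applied through the optimizer; the "student" model is a placeholder.
student = torch.nn.Linear(32, 10)
optimizer = torch.optim.AdamW(student.parameters(), lr=1e-3, weight_decay=1e-2)

# One illustrative step on random data (assumed shapes: batch of 8, 32 features, 10 classes).
x = torch.randn(8, 32)
labels = torch.randint(0, 10, (8,))
teacher_logits = torch.randn(8, 10)  # stand-in for a frozen teacher's outputs
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
optimizer.step()

The distillation loss follows the common soft-target formulation (temperature-scaled KL blended with the hard-label loss), while AdamW's weight_decay argument applies decoupled weight decay; both are generic examples of the training strategies named in the overview rather than the method of any specific paper below.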
Papers
RedTest: Towards Measuring Redundancy in Deep Neural Networks Effectively
Yao Lu, Peixin Zhang, Jingyi Wang, Lei Ma, Xiaoniu Yang, Qi Xuan
A Hard-Label Cryptanalytic Extraction of Non-Fully Connected Deep Neural Networks using Side-Channel Attacks
Benoit Coqueret, Mathieu Carbone, Olivier Sentieys, Gabriel Zaid
Model Inversion Attacks: A Survey of Approaches and Countermeasures
Zhanke Zhou, Jianing Zhu, Fengfei Yu, Xuan Li, Xiong Peng, Tongliang Liu, Bo Han
MicroCrackAttentionNeXt: Advancing Microcrack Detection in Wave Field Analysis Using Deep Neural Networks through Feature Visualization
Fatahlla Moreh (Christian Albrechts University, Kiel, Germany), Yusuf Hasan (Aligarh Muslim University, Aligarh, India), Bilal Zahid Hussain (Texas A&M University, College Station, USA), Mohammad Ammar (Aligarh Muslim University, Aligarh, India), Sven Tomforde (Christian Albrechts University, Kiel, Germany)
Local vs distributed representations: What is the right basis for interpretability?
Julien Colin, Lore Goetschalckx, Thomas Fel, Victor Boutin, Jay Gopal, Thomas Serre, Nuria Oliver
RTify: Aligning Deep Neural Networks with Human Behavioral Decisions
Yu-Ang Cheng, Ivan Felipe Rodriguez, Sixuan Chen, Kohitij Kar, Takeo Watanabe, Thomas Serre
A Subsampling Based Neural Network for Spatial Data
Debjoy Thakur
Theoretical characterisation of the Gauss-Newton conditioning in Neural Networks
Jim Zhao, Sidak Pal Singh, Aurelien Lucchi
Typicalness-Aware Learning for Failure Detection
Yijun Liu, Jiequan Cui, Zhuotao Tian, Senqiao Yang, Qingdong He, Xiaoling Wang, Jingyong Su
Fairness-Utilization Trade-off in Wireless Networks with Explainable Kolmogorov-Arnold Networks
Masoud Shokrnezhad, Hamidreza Mazandarani, Tarik Taleb