Deep Neural Network
Deep neural networks (DNNs) are computational models, loosely inspired by the human brain, that learn representations from data with the goal of high accuracy and efficiency across a wide range of tasks. Current research emphasizes understanding DNN training dynamics, including phenomena such as neural collapse, as well as the impact of architectural choices (e.g., convolutional, transformer, and operator networks) and training strategies (e.g., weight decay, knowledge distillation, active learning). This understanding is crucial for improving DNN performance, robustness (including against adversarial attacks and noisy data), and resource efficiency in applications ranging from image recognition and natural language processing to scientific modeling and edge computing.
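As one concrete illustration of the robustness evaluations mentioned above, the minimal sketch below implements the Fast Gradient Sign Method (FGSM), the single-step attack compared in several of the papers listed here. It assumes a generic PyTorch classifier with inputs scaled to [0, 1]; the function name fgsm_attack and the perturbation budget eps are illustrative choices, not taken from any of the listed papers.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    """One-step FGSM: perturb x along the sign of the input gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)  # standard classification loss
    loss.backward()                      # gradient of the loss w.r.t. the input
    x_adv = x + eps * x.grad.sign()      # per-element step of size eps
    return x_adv.clamp(0.0, 1.0).detach()  # stay in the valid input range

In a robustness evaluation, accuracy on x is compared against accuracy on fgsm_attack(model, x, y); a defense such as distillation is then judged by how much of the clean accuracy it recovers under attack.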
Papers
Variational Stochastic Gradient Descent for Deep Neural Networks
Haotian Chen, Anna Kuzina, Babak Esmaeili, Jakub M. Tomczak
A singular Riemannian Geometry Approach to Deep Neural Networks III. Piecewise Differentiable Layers and Random Walks on $n$-dimensional Classes
Alessandro Benfenati, Alessio Marta
MindSet: Vision. A toolbox for testing DNNs on key psychological experiments
Valerio Biscione, Dong Yin, Gaurav Malhotra, Marin Dujmovic, Milton L. Montero, Guillermo Puebla, Federico Adolfi, Rachel F. Heaton, John E. Hummel, Benjamin D. Evans, Karim Habashy, Jeffrey S. Bowers
Cellular automata, many-valued logic, and deep neural networks
Yani Zhang, Helmut Bölcskei
Lightweight Inference for Forward-Forward Algorithm
Amin Aminifar, Baichuan Huang, Azra Abtahi, Amir Aminifar
ECLipsE: Efficient Compositional Lipschitz Constant Estimation for Deep Neural Networks
Yuezhu Xu, S. Sivaranjani
Evaluating Adversarial Robustness: A Comparison of FGSM, Carlini-Wagner Attacks, and the Role of Distillation as Defense Mechanism
Trilokesh Ranjan Sarkar, Nilanjan Das, Pralay Sankar Maitra, Bijoy Some, Ritwik Saha, Orijita Adhikary, Bishal Bose, Jaydip Sen
Rolling the dice for better deep learning performance: A study of randomness techniques in deep neural networks
Mohammed Ghaith Altarabichi, Sławomir Nowaczyk, Sepideh Pashami, Peyman Sheikholharam Mashhadi, Julia Handl
Learning smooth functions in high dimensions: from sparse polynomials to deep neural networks
Ben Adcock, Simone Brugiapaglia, Nick Dexter, Sebastian Moraga
How Much Data are Enough? Investigating Dataset Requirements for Patch-Based Brain MRI Segmentation Tasks
Dongang Wang, Peilin Liu, Hengrui Wang, Heidi Beadnall, Kain Kyle, Linda Ly, Mariano Cabezas, Geng Zhan, Ryan Sullivan, Weidong Cai, Wanli Ouyang, Fernando Calamante, Michael Barnett, Chenyu Wang
Information-Theoretic Generalization Bounds for Deep Neural Networks
Haiyun He, Christina Lee Yu, Ziv Goldfeld
Guarantees of confidentiality via Hammersley-Chapman-Robbins bounds
Kamalika Chaudhuri, Chuan Guo, Laurens van der Maaten, Saeed Mahloujifar, Mark Tygert
Investigation of Energy-efficient AI Model Architectures and Compression Techniques for "Green" Fetal Brain Segmentation
Szymon Mazurek, Monika Pytlarz, Sylwia Malec, Alessandro Crimi
DNN Memory Footprint Reduction via Post-Training Intra-Layer Multi-Precision Quantization
Behnam Ghavami, Amin Kamjoo, Lesley Shannon, Steve Wilton
Domain Generalization through Meta-Learning: A Survey
Arsham Gholamzadeh Khoee, Yinan Yu, Robert Feldt
CAPE: CAM as a Probabilistic Ensemble for Enhanced DNN Interpretation
Townim Faisal Chowdhury, Kewen Liao, Vu Minh Hieu Phan, Minh-Son To, Yutong Xie, Kevin Hung, David Ross, Anton van den Hengel, Johan W. Verjans, Zhibin Liao