DNN Framework
Deep neural network (DNN) frameworks are the foundation of modern artificial intelligence, and research on them aims to improve model accuracy, efficiency, and robustness. Current work focuses on optimizing DNN training and inference through techniques such as efficient parallelization strategies, mixed-precision training, and adaptive model selection driven by resource constraints and carbon footprint. These advances are crucial for deploying DNNs on resource-limited devices (e.g., in edge computing) and for mitigating challenges such as adversarial attacks, noise, and data scarcity, with impact across fields from computer vision to natural language processing. A short illustration of one of these techniques follows.
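Of the techniques named above, mixed-precision training is the most self-contained to illustrate. The sketch below shows the standard PyTorch autocast/GradScaler pattern on a toy model; the model architecture, optimizer settings, and dummy batch are illustrative assumptions, not drawn from any of the papers listed here.

```python
import torch
import torch.nn as nn

# A minimal sketch of mixed-precision training with PyTorch's automatic
# mixed precision. Model, data, and hyperparameters are placeholders.
device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for step in range(100):
    # Dummy batch standing in for a real dataloader.
    x = torch.randn(32, 64, device=device)
    y = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad(set_to_none=True)
    # Forward pass runs ops in float16 where safe, float32 elsewhere.
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = loss_fn(model(x), y)

    # Scale the loss so float16 gradients don't underflow; gradients are
    # unscaled inside scaler.step() before the optimizer update.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The payoff is lower memory traffic and faster matrix multiplies on hardware with float16 tensor cores, at essentially no accuracy cost when loss scaling is used as above.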
Papers
Layer-Wise Partitioning and Merging for Efficient and Scalable Deep Learning
Samson B. Akintoye, Liangxiu Han, Huw Lloyd, Xin Zhang, Darren Dancey, Haoming Chen, Daoqiang Zhang
Aries: Efficient Testing of Deep Neural Networks via Labeling-Free Accuracy Estimation
Qiang Hu, Yuejun Guo, Xiaofei Xie, Maxime Cordy, Lei Ma, Mike Papadakis, Yves Le Traon
Impact of RoCE Congestion Control Policies on Distributed Training of DNNs
Tarannum Khan, Saeed Rashidi, Srinivas Sridharan, Pallavi Shurpali, Aditya Akella, Tushar Krishna
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, Willy Susilo, Dongxi Liu
Feature Learning in $L_{2}$-regularized DNNs: Attraction/Repulsion and Sparsity
Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel
Exact Feature Collisions in Neural Networks
Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem