DNN Framework
Deep neural network (DNN) frameworks are the foundation of modern artificial intelligence, and research on them aims to improve model accuracy, efficiency, and robustness. Current work focuses on optimizing DNN training and inference through techniques such as efficient parallelization strategies, mixed-precision training, and adaptive model selection under resource and carbon-footprint constraints. These advances are crucial for deploying DNNs on resource-limited devices (e.g., in edge computing) and for mitigating challenges such as adversarial attacks, noise, and data scarcity, with impact across fields from computer vision to natural language processing.
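To make the mixed-precision idea concrete, here is a minimal sketch of its three standard ingredients on a toy linear model: fp16 storage and inputs, fp32 accumulation and "master" weights, and loss scaling so small fp16 gradients do not underflow. This is a hand-rolled illustration in NumPy, not any particular framework's API; the model, data, and hyperparameters are all invented for the example.

```python
import numpy as np

# Toy regression data, stored in fp16 as a framework would store activations.
rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.0, 2.0, 0.3], dtype=np.float32)
X = rng.standard_normal((64, 4)).astype(np.float16)
y = (X.astype(np.float32) @ true_w).astype(np.float16)

w_master = np.zeros(4, dtype=np.float32)  # fp32 master copy of the weights
loss_scale = 1024.0                       # keeps scaled fp16 errors representable
lr = 0.1

for step in range(200):
    w16 = w_master.astype(np.float16)     # fp16 working copy for the forward pass
    pred = X @ w16                        # forward pass in fp16
    # Scale the error while still in fp16 so tiny values survive,
    # mirroring loss scaling before a framework's backward pass.
    scaled_err16 = (pred - y) * np.float16(loss_scale)
    # Accumulate the gradient in fp32 (as tensor-core matmuls do), then unscale.
    grad32 = (X.astype(np.float32).T @ scaled_err16.astype(np.float32)) / len(y)
    grad32 /= loss_scale
    if not np.isfinite(grad32).all():     # on overflow: drop step, lower the scale
        loss_scale /= 2
        continue
    w_master -= lr * grad32               # optimizer update on the fp32 master weights

print(np.round(w_master, 2))              # close to true_w despite fp16 compute
```

The split between the fp16 working copy (`w16`) and the fp32 master weights is the key design choice: updates much smaller than fp16 precision would be lost if applied directly to `w16`, but they accumulate correctly in `w_master`.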
Papers
CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang, Yansong Gao, Anmin Fu, Zhi Zhang, Yuqing Zhang, Willy Susilo, Dongxi Liu
Feature Learning in $L_{2}$-regularized DNNs: Attraction/Repulsion and Sparsity
Arthur Jacot, Eugene Golikov, Clément Hongler, Franck Gabriel
Exact Feature Collisions in Neural Networks
Utku Ozbulak, Manvel Gasparyan, Shodhan Rao, Wesley De Neve, Arnout Van Messem
HW-Aware Initialization of DNN Auto-Tuning to Improve Exploration Time and Robustness
Dennis Rieber, Moritz Reiber, Oliver Bringmann, Holger Fröning