DNN Framework
Deep neural network (DNN) frameworks are the foundation of modern artificial intelligence, and research on them aims to improve model accuracy, efficiency, and robustness. Current work focuses on optimizing DNN training and inference through techniques such as efficient parallelization strategies, mixed-precision training, and adaptive model selection under resource and carbon-footprint constraints. These advances are crucial for deploying DNNs on resource-limited devices (e.g., in edge computing) and for mitigating challenges such as adversarial attacks, noise, and data scarcity, with impact on fields ranging from computer vision to natural language processing.
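As one concrete illustration of the training-side optimizations mentioned above, the sketch below shows mixed-precision training with PyTorch's torch.cuda.amp API. It is a minimal example, not drawn from any of the papers listed here: the model, data, and hyperparameters are placeholders.

```python
import torch
from torch import nn

# Placeholder model and optimizer; any trainable module works the same way.
model = nn.Linear(512, 10).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()  # rescales the loss to avoid fp16 gradient underflow

for _ in range(100):
    # Synthetic batch standing in for real training data.
    inputs = torch.randn(32, 512, device="cuda")
    targets = torch.randint(0, 10, (32,), device="cuda")

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # run the forward pass in fp16 where it is numerically safe
        loss = nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()    # backpropagate the scaled loss
    scaler.step(optimizer)           # unscale gradients, skip the step if they overflowed
    scaler.update()                  # adapt the scale factor for the next iteration
```

The payoff is that most matrix multiplications run in half precision (roughly halving memory traffic on supported GPUs) while the loss scaling preserves small gradient values that fp16 would otherwise round to zero.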
Papers
Layer-Wise Partitioning and Merging for Efficient and Scalable Deep Learning
Samson B. Akintoye, Liangxiu Han, Huw Lloyd, Xin Zhang, Darren Dancey, Haoming Chen, Daoqiang Zhang
Aries: Efficient Testing of Deep Neural Networks via Labeling-Free Accuracy Estimation
Qiang Hu, Yuejun Guo, Xiaofei Xie, Maxime Cordy, Lei Ma, Mike Papadakis, Yves Le Traon
Impact of RoCE Congestion Control Policies on Distributed Training of DNNs
Tarannum Khan, Saeed Rashidi, Srinivas Sridharan, Pallavi Shurpali, Aditya Akella, Tushar Krishna