Paper ID: 2208.02808

Leveraging the HW/SW Optimizations and Ecosystems that Drive the AI Revolution

Humberto Carvalho, Pavel Zaykov, Asim Ukaye

This paper presents a state-of-the-art overview of how to architect, design, and optimize Deep Neural Networks (DNNs) to improve performance while preserving accuracy. The paper covers a set of optimizations that span the entire Machine Learning processing pipeline. We introduce two types of optimizations: the first alters the DNN model and requires NN re-training, while the second does not. We focus on GPU optimizations, but we believe the presented techniques can be used with other AI inference platforms. To demonstrate the DNN model optimizations, we improve one of the most advanced deep network architectures for optical flow, RAFT (arXiv:2003.12039), on a popular edge AI inference platform (Nvidia Jetson AGX Xavier).

Submitted: Aug 4, 2022