DNN Model
Deep neural networks (DNNs) are computational models that learn intricate patterns from data and achieve state-of-the-art performance across a wide range of tasks. Current research focuses on improving DNN efficiency and robustness: optimizing training algorithms (e.g., exploring dynamical systems for hyperparameter-agnostic training), developing adaptive normalization techniques for non-stationary data, and designing efficient parallelization strategies for distributed inference. These advances are crucial for deploying DNNs in resource-constrained environments and for improving their reliability and trustworthiness in domains ranging from medical image analysis to autonomous driving.
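To make one of these directions concrete, the idea behind adaptive normalization for non-stationary data can be sketched with exponentially weighted running statistics: instead of fixed normalization constants, the mean and variance are updated online so the transform tracks distribution drift. This is an illustrative sketch only (the `AdaptiveNormalizer` class, its `momentum` parameter, and the EMA update rule are assumptions for exposition, not the method of any specific paper above).

```python
import numpy as np

class AdaptiveNormalizer:
    """Normalize a data stream with exponentially weighted running
    statistics so the transform adapts to distribution shift.
    Hypothetical sketch; real adaptive-normalization methods vary."""

    def __init__(self, momentum=0.9, eps=1e-5):
        self.momentum = momentum  # weight on the old running statistics
        self.eps = eps            # numerical-stability constant
        self.mean = None
        self.var = None

    def __call__(self, x):
        # Per-feature statistics of the incoming batch.
        batch_mean = x.mean(axis=0)
        batch_var = x.var(axis=0)
        if self.mean is None:
            # First batch: initialize the running statistics directly.
            self.mean, self.var = batch_mean, batch_var
        else:
            # Exponential moving average: old stats decay, new stats blend in.
            m = self.momentum
            self.mean = m * self.mean + (1 - m) * batch_mean
            self.var = m * self.var + (1 - m) * batch_var
        return (x - self.mean) / np.sqrt(self.var + self.eps)

# Usage: stream drifting batches through the normalizer.
norm = AdaptiveNormalizer(momentum=0.9)
rng = np.random.default_rng(0)
for _ in range(200):
    out = norm(rng.normal(5.0, 2.0, size=(32, 3)))
```

With a smaller `momentum`, the statistics track drift faster at the cost of noisier estimates; batch-normalization layers use the same trade-off for their running statistics at inference time.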
Papers
DiffGAN: A Test Generation Approach for Differential Testing of Deep Neural Networks
Zohreh Aghababaeyan, Manel Abdellatif, Lionel Briand, Ramesh S
Representation Similarity: A Better Guidance of DNN Layer Sharing for Edge Computing without Training
Bryan Bo Cao, Abhinav Sharma, Manavjeet Singh, Anshul Gandhi, Samir Das, Shubham Jain