DNN Model
Deep neural networks (DNNs) are computational models that learn intricate patterns from data and achieve state-of-the-art performance across many applications. Current research focuses on improving DNN efficiency and robustness: optimizing training algorithms (e.g., exploring dynamical systems for hyperparameter-agnostic training), developing adaptive normalization techniques for non-stationary data, and designing efficient parallelization strategies for distributed inference. These advances are crucial for deploying DNNs in resource-constrained environments and for making them more reliable and trustworthy, with impact on fields ranging from medical image analysis to autonomous driving.
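Deploying DNNs in resource-constrained environments typically relies on quantization, the topic of the first paper listed below. As a minimal, hypothetical sketch (standard-library Python only, not taken from any of the papers), the snippet below applies uniform affine quantization to a weight vector and measures how far the dequantized weights drift from the originals; this drift is the source of the behavioral gaps between float and quantized models that such work characterizes.

```python
# Hypothetical sketch: uniform affine quantization of weights to 8-bit
# integers, then dequantization to measure the introduced error.

def quantize(weights, num_bits=8):
    """Map floats to integers in [0, 2**num_bits - 1] via an affine transform."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(weights), max(weights)
    # Guard against a degenerate all-equal weight vector (scale would be 0).
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized integers."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.2, -0.3, 0.0, 0.7, 1.5]
q, scale, zp = quantize(weights)
restored = dequantize(q, scale, zp)
max_err = max(abs(w - r) for w, r in zip(weights, restored))
# Round-to-nearest keeps the error within half a quantization step.
assert max_err <= scale / 2 + 1e-9
```

The rounding error bounded here is per-weight; in a full network these small errors accumulate through layers, which is why quantized models can disagree with their float counterparts on some inputs even when overall accuracy looks unchanged.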
Papers
Characterizing and Understanding the Behavior of Quantized Models for Reliable Deployment
Qiang Hu, Yuejun Guo, Maxime Cordy, Xiaofei Xie, Wei Ma, Mike Papadakis, Yves Le Traon
LaF: Labeling-Free Model Selection for Automated Deep Neural Network Reusing
Qiang Hu, Yuejun Guo, Maxime Cordy, Xiaofei Xie, Mike Papadakis, Yves Le Traon