DNN Model
Deep neural networks (DNNs) are computational models that learn intricate patterns from data and achieve state-of-the-art performance across many applications. Current research focuses on improving DNN efficiency and robustness, for example through model pruning for edge and federated deployment, optimized training algorithms (e.g., exploring dynamical systems for hyperparameter-agnostic training), adaptive normalization techniques for non-stationary data, and efficient parallelization strategies for distributed inference. These advances are crucial for deploying DNNs in resource-constrained environments and for improving their reliability and trustworthiness in domains ranging from medical image analysis to autonomous driving.
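Both papers listed below build on pruning, i.e., removing low-importance weights to shrink a model. As a minimal, framework-free sketch of the core idea, here is unstructured magnitude pruning on a flat weight list; the function name and the percentile-threshold rule are illustrative assumptions, not details taken from either paper:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of weights.

    Illustrative sketch only: real pruning systems operate on tensors,
    prune layer- or channel-wise, and typically fine-tune afterwards.
    """
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest absolute value; everything at or below it is pruned.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]


pruned = magnitude_prune([0.1, -0.5, 0.05, 2.0, -0.01, 0.3], sparsity=0.5)
# Half of the weights (the three smallest in magnitude) are zeroed:
# [0.0, -0.5, 0.0, 2.0, 0.0, 0.3]
```

The zeroed entries can then be stored in a sparse format or removed entirely (structured pruning), which is what makes the technique attractive for edge devices and for cutting communication cost in federated settings.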
Papers
DNNShifter: An Efficient DNN Pruning System for Edge Computing
Bailey J. Eccles, Philip Rodgers, Peter Kilpatrick, Ivor Spence, Blesson Varghese
FedDIP: Federated Learning with Extreme Dynamic Pruning and Incremental Regularization
Qianyu Long, Christos Anagnostopoulos, Shameem Puthiya Parambath, Daning Bi