DNN Model
Deep neural networks (DNNs) are computational models that learn intricate patterns from data and achieve state-of-the-art performance across a wide range of applications. Current research focuses on improving their efficiency and robustness, including optimizing training algorithms (e.g., exploring dynamical systems for hyperparameter-agnostic training), developing adaptive normalization techniques for non-stationary data, and designing efficient parallelization strategies for distributed inference. These advances are crucial for deploying DNNs in resource-constrained environments and for making them more reliable and trustworthy in domains ranging from medical image analysis to autonomous driving.
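One training-efficiency idea touched on by the papers below (Egeria) is layer freezing: stop updating layers that have effectively converged so their gradients no longer need to be computed. The sketch below is a generic, minimal illustration of that idea in PyTorch, not Egeria's knowledge-guided method; the model, the relative-update threshold, and the freezing criterion are placeholder assumptions chosen only to make the example self-contained.

```python
# Minimal layer-freezing sketch: freeze a parameter once its relative update
# between checks falls below a threshold. Generic illustration only; real
# systems (e.g., Egeria) use more principled, knowledge-guided criteria and
# would never freeze every layer.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Snapshot of parameters at the last check, used to measure how much they moved.
prev_weights = {n: p.detach().clone() for n, p in model.named_parameters()}
FREEZE_THRESHOLD = 1e-3  # assumed value; tune for the actual workload

def maybe_freeze_layers(model, prev_weights):
    """Freeze parameters whose relative change since the last check is small."""
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue  # already frozen
        delta = (param.detach() - prev_weights[name]).norm()
        rel_change = delta / (prev_weights[name].norm() + 1e-12)
        if rel_change < FREEZE_THRESHOLD:
            param.requires_grad_(False)  # skip gradient computation from now on
        prev_weights[name] = param.detach().clone()

for epoch in range(5):
    # Dummy batch just to make the sketch runnable.
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    maybe_freeze_layers(model, prev_weights)
```

Because frozen parameters receive no gradients, both the backward pass and the optimizer step become cheaper for those layers, which is the source of the training speedup this line of work targets.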
Papers
Egeria: Efficient DNN Training with Knowledge-Guided Layer Freezing
Yiding Wang, Decang Sun, Kai Chen, Fan Lai, Mosharaf Chowdhury
Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial Examples Against Traffic Sign Recognition Systems
Wei Jia, Zhaojun Lu, Haichun Zhang, Zhenglin Liu, Jie Wang, Gang Qu