Pre-Trained Deep Neural Networks
Pre-trained deep neural networks (DNNs) leverage massive datasets to learn generalizable feature representations, significantly accelerating downstream task training and reducing computational demands. Current research focuses on improving their fairness, efficiency (including backpropagation-free training and pruning techniques), and out-of-distribution generalization, often employing techniques like dropout, gradient boosting, and evolutionary algorithms to optimize model architectures and training processes. These advancements are crucial for deploying DNNs in resource-constrained environments and mitigating biases, thereby enhancing the reliability and applicability of deep learning across diverse scientific and practical domains.
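The core idea above — that frozen pre-trained features let a downstream task train quickly with little compute — can be sketched in a toy, self-contained example. This is an illustrative assumption, not code from any of the papers listed below: a fixed random linear-plus-ReLU map stands in for a feature extractor learned on a large upstream dataset, and only a small linear head is trained on the downstream task.

```python
import random

random.seed(0)

DIM_IN, DIM_FEAT = 4, 3

# "Pre-trained" weights: frozen throughout, standing in for representations
# learned on a large upstream dataset (hypothetical stand-in, not a real model).
W_pretrained = [[random.uniform(-1, 1) for _ in range(DIM_IN)]
                for _ in range(DIM_FEAT)]

def features(x):
    """Frozen feature extractor: ReLU(W_pretrained @ x). Never updated."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x)))
            for row in W_pretrained]

# Trainable linear head: the only parameters updated on the downstream task.
head = [0.0] * DIM_FEAT

def predict(x):
    return sum(h * f for h, f in zip(head, features(x)))

def mse(data):
    return sum((predict(x) - y) ** 2 for x, y in data) / len(data)

# Tiny downstream regression task (made-up data for illustration).
data = [([1, 0, 0, 0], 1.0),
        ([0, 1, 0, 0], -1.0),
        ([1, 1, 0, 0], 0.0)]

initial_loss = mse(data)

lr = 0.1
for _ in range(200):
    for x, y in data:
        f = features(x)
        err = predict(x) - y
        # Gradient step on the head only; W_pretrained stays frozen,
        # which is what makes downstream training cheap.
        for i in range(DIM_FEAT):
            head[i] -= lr * err * f[i]

final_loss = mse(data)
```

Because only `DIM_FEAT` head parameters are optimized, each step is a small convex least-squares update; the same division of labor (frozen backbone, lightweight task head) is what the summary means by reduced computational demands on downstream tasks.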
Papers