Well Trained
Research on "well-trained" deep neural networks (DNNs) focuses on understanding their internal representations, improving their robustness, and reusing their learned knowledge for downstream tasks. Current efforts explore techniques such as weight-space learning and feature reconstruction to analyze and reprogram DNNs without altering their parameters, across architectures ranging from simple linear models to convolutional and recurrent networks. These advances deepen our understanding of DNN behavior and translate into better generalization, greater robustness to adversarial attacks and distribution shifts, and more efficient model initialization and training strategies.
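To make "reprogramming a DNN without altering its parameters" concrete, the following is a minimal sketch of one common flavor of model reprogramming: a frozen, well-trained backbone is adapted to a new task by training only an additive input perturbation and an output label map. The backbone choice (a torchvision ResNet-18), the perturbation shape, the class name, and the hyperparameters are illustrative assumptions, not taken from any specific paper summarized here.

```python
import torch
import torch.nn as nn
from torchvision import models


class ReprogrammedClassifier(nn.Module):
    def __init__(self, num_target_classes: int):
        super().__init__()
        # Well-trained network whose parameters stay fixed.
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Trainable additive "program" applied to the input.
        self.delta = nn.Parameter(torch.zeros(1, 3, 224, 224))
        # Trainable mapping from the source label space (1000 ImageNet
        # classes) to the target task's labels.
        self.label_map = nn.Linear(1000, num_target_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        source_logits = self.backbone(x + self.delta)  # frozen forward pass
        return self.label_map(source_logits)


model = ReprogrammedClassifier(num_target_classes=10)
# Only the perturbation and the label map receive gradient updates;
# the well-trained weights are never touched.
optimizer = torch.optim.Adam(
    [model.delta] + list(model.label_map.parameters()), lr=1e-3
)
```

Because the backbone is frozen, its learned representation is reused as-is; only the input perturbation and the small label-mapping layer carry task-specific information.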
Papers
Twelve papers, published between October 27, 2022 and November 12, 2024.