Robust Pre-training
Robust pre-training focuses on developing machine learning models that generalize well to diverse downstream tasks and maintain performance under adverse conditions such as adversarial attacks and distribution shifts. Current research emphasizes improving the robustness of pre-trained models through techniques such as adversarial training, carefully designed initialization strategies (e.g., robust linear initialization), and minimax pre-training objectives that optimize for worst-case perturbations. These advances matter because they improve the reliability and transferability of pre-trained models across domains, yielding more dependable AI systems in practice.
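To make the minimax idea concrete, the sketch below shows one step of adversarial pre-training in PyTorch: an inner loop approximates the worst-case perturbation with projected gradient descent (PGD), and an outer step updates the weights on those perturbed inputs, i.e. min over theta of max over ||delta|| <= epsilon of L(f_theta(x + delta), y). This is a minimal illustration under assumed names and hyperparameters (pgd_perturb, epsilon, alpha, steps), not the method of any particular paper listed here; it uses a supervised cross-entropy loss, whereas self-supervised pre-training would substitute its own objective.

```python
# Minimal sketch of minimax (adversarial) pre-training in PyTorch.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, epsilon=8/255, alpha=2/255, steps=7):
    """Inner maximization: find an approximately worst-case perturbation
    within an L-infinity ball of radius epsilon via PGD."""
    delta = torch.zeros_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Signed-gradient ascent step, projected back onto the epsilon ball.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach().requires_grad_(True)
    return delta.detach()

def adversarial_pretrain_step(model, optimizer, x, y):
    """Outer minimization: update weights on the worst-case examples,
    i.e. one step of  min_theta  max_{||delta|| <= epsilon}  L(f(x + delta), y)."""
    delta = pgd_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice one would also clamp x + delta back to the valid input range (e.g. [0, 1] for images) and tune epsilon, alpha, and the number of PGD steps per dataset; those details are omitted here for brevity.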
Papers
March 12, 2024
December 10, 2023
June 21, 2023
May 23, 2023
April 3, 2023
March 20, 2023
February 22, 2023
June 9, 2022