Model Weight
Model weights, the numerical parameters learned by machine learning models, are a central focus of current research, with goals ranging from improving model performance and security to understanding their legal and ethical implications. Research emphasizes efficient training and aggregation techniques, particularly in federated learning and for large language models (LLMs), often using methods such as weight averaging, sparse weight formats, and parameter-efficient fine-tuning. The study of model weights is crucial for advancing interpretability, security (e.g., detecting malware hidden in weight files), and the responsible development and deployment of AI systems, impacting both scientific understanding and practical applications.
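The weight-averaging technique mentioned above (as used, for instance, in FedAvg-style federated learning) can be illustrated with a minimal sketch. This is an assumption-laden toy example, not code from any of the surveyed papers: it assumes each client's model weights are held as a dict of NumPy arrays, and the names `average_weights`, `client_weights`, and `client_sizes` are hypothetical.

```python
# Minimal sketch of federated weight averaging (FedAvg-style).
# Assumes each client's model is a dict mapping parameter names to NumPy arrays.
# All identifiers here are illustrative, not taken from the source.
import numpy as np

def average_weights(client_weights, client_sizes):
    """Average each parameter across clients, weighted by client dataset size."""
    total = sum(client_sizes)
    keys = client_weights[0].keys()
    return {
        k: sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in keys
    }

# Two toy "clients" sharing a single-layer model
clients = [
    {"w": np.array([1.0, 2.0]), "b": np.array([0.0])},
    {"w": np.array([3.0, 4.0]), "b": np.array([1.0])},
]
avg = average_weights(clients, client_sizes=[1, 3])
print(avg["w"])  # weighted toward the second client: [2.5 3.5]
```

The size-proportional weighting is what distinguishes this from a plain mean: clients holding more data pull the aggregated weights further toward their local optimum.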