Model Weights
Model weights, the numerical parameters learned by machine learning models, are a central focus of current research, with goals ranging from improving model performance and security to understanding their legal and ethical implications. Much of this work emphasizes efficient training and aggregation, particularly in federated learning and for large language models (LLMs), using techniques such as weight averaging, sparse weight formats, and parameter-efficient fine-tuning. Studying model weights is crucial for advancing model interpretability, for security (for example, detecting malware hidden inside weight tensors), and for the responsible development and deployment of AI systems, with impact on both scientific understanding and practical applications.
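One of the aggregation techniques mentioned above, weight averaging, underlies both federated learning (e.g., FedAvg-style aggregation) and "model soup" style merging. A minimal sketch of sample-size-weighted averaging is below; the function and layer names are illustrative, not drawn from any specific paper in this collection.

```python
import numpy as np

def average_weights(client_weights, client_sizes):
    """Weighted average of per-client model weights (FedAvg-style aggregation).

    client_weights: list of dicts mapping layer name -> np.ndarray
    client_sizes:   number of training samples per client, used to weight the average
    """
    total = sum(client_sizes)
    averaged = {}
    for name in client_weights[0]:
        # Each client's tensor contributes proportionally to its data size.
        averaged[name] = sum(
            w[name] * (n / total) for w, n in zip(client_weights, client_sizes)
        )
    return averaged

# Toy example: two equally sized clients, one layer.
clients = [
    {"dense.w": np.array([1.0, 2.0])},
    {"dense.w": np.array([3.0, 4.0])},
]
print(average_weights(clients, [1, 1]))  # {'dense.w': array([2., 3.])}
```

With equal client sizes this reduces to a plain mean of the weight tensors; unequal sizes tilt the aggregate toward clients with more data.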