Layer Weight
Layer weight optimization is an active area of deep learning research focused on improving model efficiency, performance, and robustness. Current efforts concentrate on algorithms that adjust layer weights dynamically during training or inference, including methods that leverage gradient information, learn to compose shared "super-weights" for parameter sharing across layers, or balance learning rates across layers. These advances aim to mitigate hallucinations in large language models, reduce the computational cost of test-time adaptation, and enable model compression without significant performance loss, with applications ranging from image classification to natural language processing.
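Of the techniques mentioned above, balancing learning rates across layers is the easiest to illustrate concretely: each layer is given its own learning rate via optimizer parameter groups. The sketch below is a minimal PyTorch example; the model, the geometric decay schedule, and all constants are illustrative assumptions rather than the method of any particular paper.

```python
# Minimal sketch of layer-wise learning-rate balancing in PyTorch.
import torch
import torch.nn as nn

# Hypothetical model; any stack of layers works the same way.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

base_lr = 1e-3   # learning rate for the last (deepest) layer
decay = 0.5      # assumed geometric scaling factor per layer of depth

# Build one optimizer parameter group per Linear layer, scaling the
# learning rate down geometrically for earlier layers.
linear_layers = [m for m in model if isinstance(m, nn.Linear)]
param_groups = [
    {
        "params": layer.parameters(),
        "lr": base_lr * decay ** (len(linear_layers) - 1 - i),
    }
    for i, layer in enumerate(linear_layers)
]

optimizer = torch.optim.SGD(param_groups, momentum=0.9)
```

More sophisticated schemes derive these per-layer rates from gradient statistics rather than a fixed schedule, but the parameter-group mechanism for applying them is the same.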