Trainable Parameters
Trainable parameters in machine learning models are the adjustable components that are learned during the training process, and minimizing their number is a key focus of current research. Researchers are exploring techniques like low-rank adaptation (LoRA), adapter modules, and selective parameter freezing to reduce the number of trainable parameters while maintaining or improving model performance across various architectures, including transformers and convolutional neural networks. This pursuit of parameter efficiency is crucial for deploying large models on resource-constrained devices and for improving training speed and reducing computational costs, impacting fields ranging from speech recognition to medical image analysis.
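To make the parameter-efficiency idea concrete, here is a minimal, illustrative sketch of the low-rank adaptation (LoRA) idea: rather than updating a full d_out × d_in weight matrix W during fine-tuning, W is frozen and only two small low-rank factors B (d_out × r) and A (r × d_in) are trained, giving the effective weight W + (alpha / r) · B A. All dimensions, names, and initializations below are assumptions chosen for the example, not taken from any specific paper.

```python
import numpy as np

# Hypothetical layer sizes and LoRA hyperparameters (illustrative only).
d_in, d_out, r, alpha = 512, 512, 8, 16

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))                    # trainable; zero init => no change at start

def effective_weight(W, A, B, alpha, r):
    # LoRA forward weight: frozen W plus a scaled low-rank update.
    return W + (alpha / r) * (B @ A)

full_params = W.size                # params if W itself were fine-tuned
lora_params = A.size + B.size       # params actually trained under LoRA
print(f"full fine-tuning params: {full_params}")   # 262144
print(f"LoRA trainable params:   {lora_params}")   # 8192
print(f"reduction factor:        {full_params / lora_params:.0f}x")  # 32x
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen one; training then moves only the 8,192 factor entries instead of all 262,144 weights, which is the source of the memory and speed savings discussed above.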