Trainable Parameter

Trainable parameters are the components of a machine learning model, typically weights and biases, that are updated during training, and reducing their number is a key focus of current research. Researchers are exploring techniques such as low-rank adaptation (LoRA), adapter modules, and selective parameter freezing to cut the number of trainable parameters while maintaining or improving model performance across architectures including transformers and convolutional neural networks. This pursuit of parameter efficiency is crucial for deploying large models on resource-constrained devices, speeding up training, and reducing computational costs, with impact on fields ranging from speech recognition to medical image analysis.
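The savings from low-rank adaptation can be made concrete with a small sketch. The idea behind LoRA is to freeze a pretrained weight matrix W and learn only two small matrices B and A, so the effective weight becomes W + BA. The dimensions below are illustrative assumptions, not taken from any particular paper:

```python
import numpy as np

# LoRA sketch: instead of updating a frozen d_out x d_in weight matrix W,
# learn two small matrices B (d_out x r) and A (r x d_in), r << d, so the
# effective weight is W + B @ A. Only A and B are trainable.
d_in, d_out, r = 768, 768, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init so the
                                            # update starts as a no-op

def forward(x):
    # Frozen path plus the low-rank update; x has shape (batch, d_in).
    return x @ W.T + x @ A.T @ B.T

full_params = d_out * d_in          # params if W itself were trainable
lora_params = A.size + B.size       # params actually trained with LoRA
print(full_params, lora_params)     # 589824 vs 12288, roughly 2%
```

With rank r = 8 on a 768x768 layer, the trainable parameter count drops from 589,824 to 12,288, about 2% of the original, which is why such methods make fine-tuning large models feasible on constrained hardware.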

Papers