Large Scale Model

Large-scale models, such as large language models and vision transformers, achieve strong performance by leveraging immense datasets and computational resources. Current research focuses on mitigating their substantial computational cost and environmental impact through techniques such as model quantization, parameter-efficient fine-tuning (e.g., LoRA, prompt tuning), and resource-aware training strategies. These advances are crucial for making large-scale models more accessible and sustainable, with impact ranging from natural language processing and computer vision to resource-constrained applications and more efficient cloud computing infrastructure.
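To make the parameter-efficiency idea concrete, the following is a minimal, illustrative sketch of the LoRA update rule in pure Python (not any library's actual implementation). It assumes the standard formulation: a frozen weight matrix W of shape d_out x d_in is augmented with a trainable low-rank pair B (d_out x r) and A (r x d_in), so the effective weight is W + (alpha / r) * B @ A. The dimensions, seed, and helper names here are hypothetical choices for illustration.

```python
import random

def matmul(X, Y):
    # Naive matrix multiply for small illustrative matrices.
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def lora_weight(W, A, B, alpha):
    # Effective weight: W + (alpha / r) * B @ A, with W frozen.
    r = len(A)  # adapter rank
    scale = alpha / r
    delta = matmul(B, A)  # d_out x d_in low-rank update
    return [[W[i][j] + scale * delta[i][j] for j in range(len(W[0]))]
            for i in range(len(W))]

# Hypothetical sizes chosen for illustration.
d_out, d_in, r, alpha = 8, 16, 2, 4.0
random.seed(0)
W = [[random.gauss(0, 0.02) for _ in range(d_in)] for _ in range(d_out)]  # frozen
A = [[random.gauss(0, 0.02) for _ in range(d_in)] for _ in range(r)]      # trainable
B = [[0.0] * r for _ in range(d_out)]  # trainable, zero-init so training starts from W

W_eff = lora_weight(W, A, B, alpha)
full_params = d_out * d_in        # parameters updated by full fine-tuning
lora_params = r * (d_out + d_in)  # trainable parameters under LoRA
print(full_params, lora_params)   # → 128 48
```

Two properties worth noting: because B is initialized to zero, the effective weight initially equals the frozen W, and the trainable parameter count scales with r * (d_out + d_in) rather than d_out * d_in, which is the source of LoRA's efficiency at low rank.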

Papers