Large-Scale Models
Large-scale models, such as large language models and vision transformers, aim to achieve state-of-the-art performance by training massive neural networks on immense datasets with substantial computational resources. Current research focuses on reducing the computational cost and environmental footprint of these models through techniques such as model quantization, parameter-efficient fine-tuning (e.g., LoRA, prompt tuning), and resource-aware training strategies. These advances are crucial for making large-scale models more accessible and sustainable, with impact ranging from natural language processing and computer vision to resource-constrained deployments and more efficient cloud computing infrastructure.
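To make the quantization idea concrete, here is a minimal sketch of symmetric per-tensor int8 post-training quantization, assuming a PyTorch setting; the helper names `quantize_int8` and `dequantize` are illustrative, not from any particular library. Storing weights as int8 plus one float scale cuts memory roughly 4x versus float32.

```python
import torch

def quantize_int8(w: torch.Tensor):
    """Symmetric per-tensor int8 quantization: map the weight range
    [-max|w|, +max|w|] onto [-127, 127] with a single float scale."""
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Recover an approximate float32 tensor from the int8 codes."""
    return q.float() * scale

# Quantize a random weight matrix and check the reconstruction error.
w = torch.randn(768, 768)
q, s = quantize_int8(w)
err = (dequantize(q, s) - w).abs().max().item()
print(f"max reconstruction error: {err:.4f}")
```

Real systems typically refine this with per-channel scales, zero-points for asymmetric ranges, or calibration data, but the core memory saving comes from exactly this re-encoding.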
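Likewise, the following is a minimal sketch of the low-rank adaptation idea behind LoRA: the pretrained weights are frozen and only a small low-rank update is trained. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative choices, not taken from the source.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        self.scale = alpha / r
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the wrapped layer initially matches the base layer.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: replace a layer in a pretrained model and train only A and B.
layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")  # 2 * r * 768 instead of 768 * 768
```

The design choice that makes this "efficient" is the parameter count: for rank r = 8 the update trains roughly 12K parameters instead of the ~590K in the full weight matrix, which is why such adapters can be fine-tuned cheaply and swapped per task.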