Large-Scale Models
Large-scale models, such as large language models and vision transformers, aim to achieve superior performance by leveraging immense datasets and computational resources. Current research focuses on mitigating their substantial computational cost and environmental impact through techniques such as model quantization, parameter-efficient fine-tuning methods (e.g., LoRA, prompt tuning), and resource-aware training strategies. These advances are crucial for making large-scale models more accessible and sustainable, with impact ranging from natural language processing and computer vision to resource-constrained deployments and more efficient cloud computing infrastructure.
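To make one of these techniques concrete, the sketch below shows a minimal LoRA-style adapter in PyTorch: the pretrained weight is frozen and only a small low-rank correction is trained, which is the core idea behind parameter-efficient fine-tuning. The class name `LoRALinear` and the hyperparameter choices (rank `r`, scaling `alpha`) are illustrative assumptions, not taken from any particular paper listed here.

```python
# A minimal LoRA-style adapter sketch, assuming PyTorch is available.
# LoRALinear, r, and alpha are illustrative names/values, not from a specific paper.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are small matrices."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero so the adapter initially leaves the model unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Usage: wrap a linear layer and train only the adapter parameters.
layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 8192 adapter parameters vs. 262144 frozen base weights
```

The design point this illustrates is why such methods cut fine-tuning cost: with rank 8, the trainable parameter count drops by roughly 30x for this layer, and the frozen base weights never receive gradients.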