Large-Scale Pre-Trained Language Models
Large-scale pre-trained language models (PLMs) are massive neural networks trained on enormous text corpora with the aim of reaching human-level performance on a wide range of natural language processing tasks. Current research focuses on improving efficiency (e.g., through pruning, quantization, and dynamic planning), enhancing alignment with human values (e.g., via fine-grained supervision and differentially private training), and exploring parameter-efficient fine-tuning methods (e.g., adapters and prompt tuning). These advances matter because they address the computational cost and ethical concerns of deploying such powerful models while expanding their applicability across diverse domains.
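As a concrete illustration of the parameter-efficient fine-tuning methods mentioned above, the sketch below shows a minimal bottleneck adapter in PyTorch: the pre-trained model stays frozen and only the small adapter modules are trained. The module name and the dimensions used (hidden size 768, bottleneck size 64) are illustrative assumptions, not details drawn from any specific paper listed on this page.

```python
# Minimal sketch of a bottleneck adapter for parameter-efficient fine-tuning.
# Assumption: the adapter is inserted after a frozen transformer sub-layer.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable module added on top of a frozen pre-trained layer."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen model's representation;
        # only the low-rank bottleneck path carries task-specific updates.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage: train only the adapter parameters (a tiny fraction of the full model).
hidden = torch.randn(2, 16, 768)           # (batch, seq_len, hidden_dim)
adapter = BottleneckAdapter(hidden_dim=768)
out = adapter(hidden)                       # output shape matches the input
print(out.shape)                            # torch.Size([2, 16, 768])
```

Because the residual path is the identity, the adapter can be initialized close to zero so fine-tuning starts from the frozen model's behavior, which is the usual motivation for this design.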
Papers
Papers on this topic were published between May 29, 2023 and July 6, 2024.