Large-Scale Pre-Trained Language Models
Large-scale pre-trained language models (PLMs) are massive neural networks trained on enormous text corpora, built with the goal of reaching human-level performance on a wide range of natural language processing tasks. Current research focuses on improving efficiency (e.g., pruning, quantization, and dynamic planning), enhancing alignment with human values (e.g., fine-grained supervision and differentially private training), and exploring parameter-efficient fine-tuning methods (e.g., adapters and prompt tuning). These advances matter because they reduce the computational cost and address the ethical concerns of deploying such powerful models, while also expanding their applicability across diverse domains.
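To make the parameter-efficient fine-tuning idea concrete, below is a minimal sketch of a bottleneck adapter in PyTorch. It is not taken from any of the papers listed here: the class name, layer sizes, and the stand-in backbone are illustrative assumptions. The key point is that the pre-trained weights are frozen and only a small residual bottleneck receives gradients.

```python
# Minimal bottleneck-adapter sketch for parameter-efficient fine-tuning.
# Assumes PyTorch; names, sizes, and the stand-in backbone are hypothetical.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Small residual bottleneck inserted after a frozen sublayer."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()
        # Near-zero init on the up-projection so the adapted model starts
        # out behaving exactly like the frozen pre-trained model.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))


def add_adapter(backbone: nn.Module, hidden_size: int) -> nn.Module:
    """Freeze all pre-trained weights; only the adapter is trainable."""
    for param in backbone.parameters():
        param.requires_grad = False
    # For illustration the adapter is appended after the backbone; in a real
    # transformer it would be inserted inside each block.
    return nn.Sequential(backbone, BottleneckAdapter(hidden_size))


if __name__ == "__main__":
    backbone = nn.Linear(768, 768)  # stand-in for a pre-trained layer
    model = add_adapter(backbone, hidden_size=768)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable params: {trainable} / {total}")
```

The same frozen model can also be made cheaper at inference time with post-training quantization, for example PyTorch's torch.ao.quantization.quantize_dynamic, which converts nn.Linear weights to int8; the pruning and alignment techniques mentioned above are separate lines of work beyond this sketch.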