Large-Scale Pre-Trained Language Models
Large-scale pre-trained language models (PLMs) are massive neural networks trained on enormous text corpora, with the goal of reaching human-level performance across a wide range of natural language processing tasks. Current research focuses on improving efficiency (e.g., through techniques like pruning, quantization, and dynamic planning), enhancing alignment with human values (e.g., via fine-grained supervision and differentially private training), and developing parameter-efficient fine-tuning methods (e.g., adapters and prompt tuning, sketched below). These advances matter because they reduce the computational cost and address the ethical concerns of deploying such powerful models, while also broadening their applicability across diverse domains.
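As a concrete illustration of the parameter-efficient fine-tuning direction mentioned above, the following PyTorch sketch shows a bottleneck adapter layer. It is a minimal, illustrative example rather than the design of any specific paper listed here; the class name, bottleneck size, and insertion point are assumptions.

```python
# Minimal sketch of a bottleneck adapter for parameter-efficient fine-tuning.
# The backbone PLM is kept frozen; only these small projections are trained.
import torch
import torch.nn as nn

class AdapterLayer(nn.Module):
    """Bottleneck adapter: down-project, non-linearity, up-project, residual add."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # down-projection
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # up-projection
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen model's representation intact
        # when the adapter is initialized near zero.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage sketch: insert after each transformer block's output, freeze the
# backbone, and optimize only the adapter parameters.
hidden = torch.randn(2, 16, 768)   # (batch, seq_len, hidden_dim) -- assumed sizes
adapter = AdapterLayer(hidden_dim=768)
out = adapter(hidden)
print(out.shape)                   # torch.Size([2, 16, 768])
```

Because only the adapter's two small linear layers are updated, the number of trainable parameters per layer drops from O(hidden_dim²) to O(hidden_dim × bottleneck_dim), which is the core of the efficiency argument for this family of methods.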