Pre-Trained Language Models
Pre-trained language models (PLMs) are large neural networks trained on massive text corpora to capture the statistical regularities of language and transfer them to downstream tasks. Current research focuses on improving PLM efficiency, for example through parameter-efficient fine-tuning, and on applying PLMs in diverse fields such as scientific text classification, mental health assessment, and financial forecasting, often building on BERT and its variants. The ability of PLMs to process and generate human language effectively has significant implications for numerous scientific disciplines and practical applications, ranging from improved information retrieval to more sophisticated AI assistants.
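As a concrete illustration of the parameter-efficient fine-tuning mentioned above, the minimal sketch below adapts a BERT-style classifier with LoRA adapters. It assumes the Hugging Face transformers and peft libraries; the base model name, rank, and target module names are illustrative choices, not settings taken from the papers listed here.

```python
# Minimal sketch: parameter-efficient fine-tuning of a BERT-style classifier
# with LoRA, assuming the Hugging Face `transformers` and `peft` libraries.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

model_name = "bert-base-uncased"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# LoRA injects small trainable low-rank matrices into selected attention
# projections while the original pre-trained weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the low-rank update (illustrative)
    lora_alpha=16,                      # scaling factor
    lora_dropout=0.1,
    target_modules=["query", "value"],  # attention projection names in BERT
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()      # only a small fraction of weights train
```

The wrapped model can then be fine-tuned as usual (e.g., with transformers.Trainer); only the injected low-rank matrices are updated, which is what makes the approach parameter-efficient.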
Papers
IELM: An Open Information Extraction Benchmark for Pre-Trained Language Models
Chenguang Wang, Xiao Liu, Dawn Song
Exploring Mode Connectivity for Pre-trained Language Models
Yujia Qin, Cheng Qian, Jing Yi, Weize Chen, Yankai Lin, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou
Multilingual Relation Classification via Efficient and Effective Prompting
Yuxuan Chen, David Harbecke, Leonhard Hennig
Improving Imbalanced Text Classification with Dynamic Curriculum Learning
Xulong Zhang, Jianzong Wang, Ning Cheng, Jing Xiao
Evaluating Parameter Efficient Learning for Generation
Peng Xu, Mostofa Patwary, Shrimai Prabhumoye, Virginia Adams, Ryan J. Prenger, Wei Ping, Nayeon Lee, Mohammad Shoeybi, Bryan Catanzaro
Different Tunes Played with Equal Skill: Exploring a Unified Optimization Subspace for Delta Tuning
Jing Yi, Weize Chen, Yujia Qin, Yankai Lin, Ning Ding, Xu Han, Zhiyuan Liu, Maosong Sun, Jie Zhou
ELMER: A Non-Autoregressive Pre-trained Language Model for Efficient and Effective Text Generation
Junyi Li, Tianyi Tang, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen
DiscoSense: Commonsense Reasoning with Discourse Connectives
Prajjwal Bhargava, Vincent Ng
Collaborative Reasoning on Multi-Modal Semantic Graphs for Video-Grounded Dialogue Generation
Xueliang Zhao, Yuxuan Wang, Chongyang Tao, Chenshuo Wang, Dongyan Zhao
Generative Prompt Tuning for Relation Classification
Jiale Han, Shuai Zhao, Bo Cheng, Shengkun Ma, Wei Lu
PATS: Sensitivity-aware Noisy Learning for Pretrained Language Models
Yupeng Zhang, Hongzhi Zhang, Sirui Wang, Wei Wu, Zhoujun Li