BERT-Based
BERT-based models, a class of transformer neural networks, have driven major advances in natural language processing (NLP), achieving state-of-the-art performance across diverse tasks. Current research focuses on improving BERT's efficiency (e.g., through model distillation and optimized attention mechanisms), on handling low-resource settings and imbalanced datasets, and on strengthening robustness against adversarial attacks. These advances enable more accurate and efficient automated analysis of text in fields such as biomedical text analysis, protein sequence classification, and financial market prediction.
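As a concrete illustration of the efficiency theme (not drawn from any specific paper below), a distilled BERT variant such as DistilBERT can be run with a few lines of code. This minimal sketch assumes the Hugging Face `transformers` library is installed; the model name `distilbert-base-uncased` and the example sentence are illustrative choices, not from the listed papers.

```python
# Minimal sketch: inference with a distilled BERT model via Hugging Face
# `transformers`. DistilBERT is trained by knowledge distillation from BERT,
# making it smaller and faster while retaining most of BERT's accuracy.
from transformers import pipeline

# Load a masked-language-modeling pipeline backed by DistilBERT
# (model choice is an assumption for illustration).
fill_mask = pipeline("fill-mask", model="distilbert-base-uncased")

# Predict candidate tokens for the [MASK] position.
for prediction in fill_mask(
    "BERT-based models are widely used in [MASK] language processing."
):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

The same pipeline interface can wrap a full-size BERT checkpoint, so swapping between the original and distilled models is a one-line change when trading accuracy for latency.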
Papers
CheX-GPT: Harnessing Large Language Models for Enhanced Chest X-ray Report Labeling
Jawook Gu, Kihyun You, Han-Cheol Cho, Jiho Kim, Eun Kyoung Hong, Byungseok Roh
SEBERTNets: Sequence Enhanced BERT Networks for Event Entity Extraction Tasks Oriented to the Finance Field
Congqing He, Xiangyu Zhu, Yuquan Le, Yuzhong Liu, Jianhong Yin