BERT Model
BERT is a transformer-based language model that produces contextualized word embeddings, making it effective across a wide range of natural language processing tasks. Current research focuses on improving its efficiency (e.g., through pruning and distillation), adapting it to specialized domains such as finance, medicine, and law, and applying it to tasks including text classification, information extraction, and data imputation. This versatility makes BERT a significant tool for NLP research and for applications ranging from healthcare diagnostics to search engines.
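As a concrete illustration of the text-classification use case mentioned above, the sketch below loads a pretrained BERT encoder with a classification head via the Hugging Face `transformers` library. The `bert-base-uncased` checkpoint and the binary-label setup are assumptions chosen for illustration, not drawn from the papers listed here.

```python
# Minimal sketch: BERT for sequence classification with Hugging Face
# transformers. Assumes `torch` and `transformers` are installed and the
# public `bert-base-uncased` checkpoint is available.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# num_labels=2 attaches a randomly initialized binary classification head
# on top of the pretrained encoder; it must be fine-tuned before use.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model.eval()

# Tokenize a sentence into the input IDs and attention mask BERT expects.
inputs = tokenizer(
    "BERT produces contextualized word embeddings.",
    return_tensors="pt", truncation=True, padding=True,
)

# Forward pass without gradient tracking; logits have shape (1, num_labels).
with torch.no_grad():
    logits = model(**inputs).logits

print("Predicted class index:", logits.argmax(dim=-1).item())
```

As written, the head's predictions are meaningless beyond demonstrating the API; in practice the model would first be fine-tuned on labeled examples for the target task.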
Papers
Split-NER: Named Entity Recognition via Two Question-Answering-based Classifications
Jatin Arora, Youngja Park
GPCR-BERT: Interpreting Sequential Design of G Protein Coupled Receptors Using Protein Language Models
Seongwon Kim, Parisa Mollaei, Akshay Antony, Rishikesh Magar, Amir Barati Farimani
BTRec: BERT-Based Trajectory Recommendation for Personalized Tours
Ngai Lam Ho, Roy Ka-Wei Lee, Kwan Hui Lim
Analyzing Textual Data for Fatality Classification in Afghanistan's Armed Conflicts: A BERT Approach
Hikmatullah Mohammadi, Ziaullah Momand, Parwin Habibi, Nazifa Ramaki, Bibi Storay Fazli, Sayed Zobair Rohany, Iqbal Samsoor
Effects of Human Adversarial and Affable Samples on BERT Generalization
Aparna Elangovan, Jiayuan He, Yuan Li, Karin Verspoor