Fine-Tuned BERT
Fine-tuned BERT models represent a significant advancement in natural language processing, achieving state-of-the-art results across diverse tasks by adapting a pre-trained language model to specific applications. Current research focuses on improving BERT's performance in areas like sentiment analysis, entity recognition (including handling colloquialisms and ambiguous contexts), and information extraction from diverse sources (e.g., medical reports, e-commerce websites). This work highlights BERT's versatility and its impact across fields ranging from healthcare and finance to social media analysis, as well as its role in improving the efficiency of downstream tasks.
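As a rough illustration of what fine-tuning BERT for a downstream task involves, the sketch below adapts a pre-trained checkpoint to binary sentiment classification with the Hugging Face transformers and datasets libraries. The checkpoint (bert-base-uncased), dataset (IMDB), and hyperparameters are illustrative assumptions for this sketch, not details drawn from the papers listed below.

```python
# Minimal sketch: fine-tuning a pre-trained BERT checkpoint for binary
# sentiment classification. Assumes the `transformers` and `datasets`
# libraries are installed; the checkpoint, dataset, and hyperparameters
# are illustrative choices, not taken from the listed papers.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "bert-base-uncased"  # pre-trained encoder to adapt

# Load a public sentiment dataset (IMDB reviews) and the matching tokenizer.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Convert raw review text into BERT input IDs and attention masks.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

encoded = dataset.map(tokenize, batched=True)

# Attach a fresh classification head (2 labels: negative / positive) on top of
# the pre-trained encoder; all weights are then fine-tuned end to end.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(
    output_dir="bert-sentiment",     # where checkpoints are written
    num_train_epochs=2,              # a few passes is usually enough for fine-tuning
    per_device_train_batch_size=16,
    learning_rate=2e-5,              # typical fine-tuning learning rate for BERT
)

trainer = Trainer(
    model=model,
    args=args,
    # Small subsets keep this sketch quick to run; use the full splits in practice.
    train_dataset=encoded["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=encoded["test"].shuffle(seed=42).select(range(500)),
)

trainer.train()
print(trainer.evaluate())
```

The same pattern extends to the tasks mentioned above: swapping in a token-classification head gives entity recognition, and domain-specific corpora (e.g., medical reports or 10-K filings) replace the sentiment dataset for information-extraction use cases.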
Papers
DS4DH at #SMM4H 2023: Zero-Shot Adverse Drug Events Normalization using Sentence Transformers and Reciprocal-Rank Fusion
Anthony Yazdani, Hossein Rouhizadeh, David Vicente Alvarez, Douglas Teodoro
Finding Stakeholder-Material Information from 10-K Reports using Fine-Tuned BERT and LSTM Models
Victor Zitian Chen