Text-Based Models

Text-based models, which process and analyze textual data, are a cornerstone of natural language processing (NLP), whose goal is to understand and generate human language. Current research focuses on improving their performance across diverse languages and tasks, including knowledge graph reasoning, molecule prediction, and fake news detection, often employing transformer architectures such as BERT and RoBERTa or exploring parameter-efficient methods such as prefix-tuning. These advances matter across the sciences: they enable more effective analysis of multilingual scientific literature, support drug discovery via molecule prediction, and improve the detection of misinformation.
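
To make the transformer-based approach mentioned above concrete, the following is a minimal sketch of using a pretrained BERT-style encoder for a binary text classification task such as fake news detection. It is not drawn from any specific paper listed here; the Hugging Face transformers library, the bert-base-uncased checkpoint, and the two-label setup are illustrative assumptions.

```python
# Minimal sketch: pretrained BERT-style encoder for binary text
# classification (e.g., fake news detection). The checkpoint name and
# label semantics are illustrative choices, not from a specific paper.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"  # any BERT/RoBERTa checkpoint works here

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=2  # e.g., 0 = real, 1 = fake (assumed labels)
)
model.eval()

texts = [
    "Scientists publish peer-reviewed study on ocean temperatures.",
    "Miracle pill cures all diseases overnight, doctors stunned.",
]

# Tokenize, run the encoder, and turn logits into class probabilities.
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)

for text, p in zip(texts, probs):
    print(f"{text!r} -> class probabilities {p.tolist()}")
```

Note that the classification head on top of the pretrained encoder is freshly initialized, so the probabilities above are uninformative until the model is fine-tuned on labeled examples; the same loading pattern is also the usual starting point for parameter-efficient methods such as prefix-tuning, which train a small set of added parameters while keeping the encoder frozen.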

Papers