Text-Based Models
Text-based models process and analyze textual data and are a cornerstone of natural language processing (NLP), whose goal is to understand and generate human language. Current research focuses on improving performance across diverse languages and tasks, including knowledge graph reasoning, molecule prediction, and fake news detection, often employing transformer architectures such as BERT and RoBERTa, or parameter-efficient methods such as prefix-tuning. These advances matter across scientific fields: they enable more effective analysis of multilingual scientific literature, improved drug discovery through molecule prediction, and more accurate detection of misinformation.
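To make the prefix-tuning idea mentioned above concrete, here is a minimal, stdlib-only sketch. The core mechanism is that a small set of trainable prefix key/value vectors is prepended to the (frozen) keys and values that attention computes over, so fine-tuning only updates the prefix. All vectors and dimensions below are hypothetical toy values, not taken from any real model.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Single-query scaled dot-product attention over key/value vector lists."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    out = [0.0] * len(values[0])
    for w, v in zip(weights, values):
        for i, x in enumerate(v):
            out[i] += w * x
    return out

# Frozen "pretrained" keys/values for a 3-token input (toy numbers).
keys   = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
values = [[0.5, 0.1], [0.2, 0.9], [0.7, 0.3]]

# Prefix-tuning: prepend trainable prefix key/value pairs. During
# fine-tuning, only these prefix vectors would receive gradient
# updates; the pretrained keys/values above stay frozen.
prefix_keys   = [[0.3, 0.3]]
prefix_values = [[0.0, 1.0]]

query = [1.0, 1.0]
plain = attention(query, keys, values)
tuned = attention(query, prefix_keys + keys, prefix_values + values)
print(plain, tuned)
```

The prefix steers the attention output without touching any pretrained weight, which is why the method is attractive for efficiency: only the small prefix is stored per downstream task.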