Text-to-Text
Text-to-text models are transforming how we process and generate textual information, aiming to improve efficiency and accuracy across diverse applications. Current research focuses on adapting these models, typically built on transformer architectures such as T5 and BERT, to specific domains like knowledge graph completion, medical text processing, and hate speech detection, and on incorporating techniques such as contrastive learning and diffusion models to boost performance. This work is significant because it enables the automation of complex linguistic tasks, improving access to information and facilitating advances in fields ranging from scientific literature analysis to human-robot interaction.
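To make the text-to-text framing concrete, here is a minimal sketch of running inference with a public T5 checkpoint via the Hugging Face transformers library; the checkpoint name and task prefix are illustrative assumptions, not drawn from the papers listed below.

```python
# Minimal sketch of the text-to-text paradigm: every task is cast as
# "text in, text out", with a task prefix selecting the behavior.
# Assumes the Hugging Face transformers library and the public
# "t5-small" checkpoint (illustrative choices, not from the papers below).
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"  # any T5 variant would work here
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# The "translate English to German:" prefix is one of the tasks T5 was
# pretrained on; domain adaptation work swaps in task-specific prefixes
# and fine-tunes on in-domain text.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same interface carries over to the domain-specific applications above: fine-tuning simply trains the model to map an input string (e.g., a PubMed abstract) to a target string (e.g., a list of chemical identifiers), with no task-specific output head.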
Papers
Chemical Identification and Indexing in PubMed Articles via BERT and Text-to-Text Approaches
Virginia Adams, Hoo-Chang Shin, Carol Anderson, Bo Liu, Anas Abidin
Text Mining Drug/Chemical-Protein Interactions using an Ensemble of BERT and T5 Based Models
Virginia Adams, Hoo-Chang Shin, Carol Anderson, Bo Liu, Anas Abidin