Text-to-Text
Text-to-text models are transforming how we process and generate textual information, with the goal of improving efficiency and accuracy across diverse applications. Current research focuses on adapting these models, typically built on transformer architectures such as T5 and BERT, to specific domains like knowledge graph completion, medical text processing, and hate speech detection, frequently incorporating techniques such as contrastive learning and diffusion models to boost performance. This work matters because it automates complex linguistic tasks, improving access to information and enabling advances in fields ranging from scientific literature analysis to human-robot interaction.
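As a concrete illustration of the text-to-text framing, here is a minimal sketch using the Hugging Face transformers library: every task is cast as a string-in, string-out transformation, with the task named in the input prefix. The t5-small checkpoint and the translation prefix are illustrative choices, not drawn from any particular paper surveyed here.

```python
# Minimal sketch of the text-to-text framing with a T5 checkpoint.
# Assumes the Hugging Face `transformers` library is installed;
# "t5-small" is an illustrative checkpoint choice.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# The same model handles different tasks purely by changing the prefix.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Domain adaptation of the kind described above typically amounts to fine-tuning such a model on task-specific input/output string pairs, leaving the text-to-text interface itself unchanged.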