Text to Text
Text-to-text models are transforming how we process and generate textual information, with the aim of improving efficiency and accuracy across diverse applications. Current research focuses on adapting these models, typically built on transformer architectures such as T5 and BERT, to specific domains like knowledge graph completion, medical text processing, and hate speech detection, and on incorporating techniques such as contrastive learning and diffusion models for enhanced performance. This work is significant because it automates complex linguistic tasks, improving access to information and enabling advances in fields ranging from scientific literature analysis to human-robot interaction.
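The contrastive learning mentioned above is often implemented with an InfoNCE-style objective, which pulls each anchor toward its positive pair and pushes it away from in-batch negatives. The sketch below is illustrative, not drawn from any specific paper: it assumes pairwise similarities (e.g. cosine similarities between text embeddings) have already been computed, with the positive for anchor `i` placed at column `i`; the function name and `temperature` default are assumptions.

```python
import math

def info_nce_loss(sim_matrix, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss over a similarity matrix.

    sim_matrix[i][j] is the similarity between anchor i and candidate j;
    the positive pair for anchor i is assumed to sit at index i.
    """
    total = 0.0
    n = len(sim_matrix)
    for i, row in enumerate(sim_matrix):
        logits = [s / temperature for s in row]
        m = max(logits)  # subtract the max for numerical stability
        log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
        total += log_denom - logits[i]  # negative log-softmax of the positive
    return total / n

# When the diagonal (positive) similarities dominate, the loss is near zero;
# when all similarities are equal, it degenerates to log(n).
low = info_nce_loss([[1.0, 0.0], [0.0, 1.0]])
uniform = info_nce_loss([[0.5, 0.5], [0.5, 0.5]])
```

In practice the similarity matrix would come from encoder outputs (e.g. mean-pooled T5 encoder states), and the loss would be backpropagated with an autograd framework; this pure-Python version only shows the shape of the objective.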