Text-to-Text
Text-to-text models are transforming how textual information is processed and generated, with the aim of improving efficiency and accuracy across diverse applications. Current research focuses on adapting these models, typically built on transformer architectures such as T5 and BERT, to specific domains like knowledge graph completion, medical text processing, and hate speech detection, and on incorporating techniques such as contrastive learning and diffusion models to boost performance. This work is significant because it automates complex linguistic tasks, improving access to information and enabling advances in fields ranging from scientific literature analysis to human-robot interaction.
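To make the text-to-text paradigm concrete, here is a minimal sketch using the Hugging Face transformers library and the public t5-small checkpoint. It illustrates the general idea that every task is framed as mapping input text to output text via a task prefix; it is not the method of any specific paper listed below, and the checkpoint and prefix are illustrative choices.

```python
# Minimal sketch of the text-to-text paradigm with a pretrained T5 model.
# Every task (translation, summarization, classification) is cast as
# "input text in, output text out", steered by a natural-language prefix.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# T5 was pretrained with task prefixes, so the same model handles
# different tasks depending on how the input is framed.
inputs = tokenizer(
    "translate English to German: The house is wonderful.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Domain-adaptation work of the kind surveyed above typically fine-tunes such a checkpoint on task-specific input/output pairs rather than modifying the architecture itself.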
Papers
ProSpect: Prompt Spectrum for Attribute-Aware Personalization of Diffusion Models
Yuxin Zhang, Weiming Dong, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, Tong-Yee Lee, Oliver Deussen, Changsheng Xu
UniTRec: A Unified Text-to-Text Transformer and Joint Contrastive Learning Framework for Text-based Recommendation
Zhiming Mao, Huimin Wang, Yiming Du, Kam-Fai Wong