Text Modality
Text modality research explores how textual information can be integrated with other data modalities (e.g., images, audio, video) to extend what AI models can perceive and generate. Current research focuses on multimodal models built on transformer architectures and diffusion models, often incorporating techniques such as prompt tuning and meta-learning to improve controllability and generalization. This work matters because it enables more sophisticated AI systems capable of understanding and generating complex information across data types, with applications ranging from improved medical diagnosis to more realistic virtual environments.
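Of the techniques named above, prompt tuning is simple enough to illustrate concretely. The sketch below is a minimal, self-contained PyTorch example of soft prompt tuning, assuming a generic frozen transformer backbone; the `SoftPromptModel` class, the toy encoder, and all dimensions are illustrative stand-ins, not an implementation from any of the papers listed here.

```python
import torch
import torch.nn as nn

class SoftPromptModel(nn.Module):
    """Minimal soft prompt tuning (illustrative sketch): learnable prompt
    embeddings are prepended to the input embeddings of a frozen backbone,
    and only the prompt parameters receive gradients."""

    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int = 8):
        super().__init__()
        self.backbone = backbone
        # Freeze the pretrained model; its weights stay fixed during tuning.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # The only trainable parameters: a short sequence of "virtual tokens".
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, embed_dim), e.g. from a frozen
        # embedding layer of a language or vision-language model.
        batch = token_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.backbone(torch.cat([prompt, token_embeds], dim=1))


if __name__ == "__main__":
    # Stand-in backbone: a tiny transformer encoder. A real setup would wrap
    # a pretrained multimodal model instead of this toy module.
    embed_dim = 32
    encoder = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True),
        num_layers=2,
    )
    model = SoftPromptModel(encoder, embed_dim=embed_dim, prompt_len=8)

    x = torch.randn(2, 10, embed_dim)  # dummy input embeddings
    out = model(x)                     # (2, 18, embed_dim): 8 prompt + 10 input tokens
    out.mean().backward()

    # Only the soft prompt accumulated gradients; the backbone is untouched.
    assert model.soft_prompt.grad is not None
    assert all(p.grad is None for p in model.backbone.parameters())
    print(out.shape)
```

Because only the prompt embeddings are trainable, this adapts a large frozen model at a small fraction of the cost of full fine-tuning, which is part of what makes the technique attractive for the multimodal settings surveyed here.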
Papers
Efficient Retrieval of Temporal Event Sequences from Textual Descriptions
Zefang Liu, Yinzhu Quan
Measuring and Modifying the Readability of English Texts with GPT-4
Sean Trott, Pamela D. Rivière (Department of Cognitive Science, University of California San Diego)
Knowledge-Aware Query Expansion with Large Language Models for Textual and Relational Retrieval
Yu Xia, Junda Wu, Sungchul Kim, Tong Yu, Ryan A. Rossi, Haoliang Wang, Julian McAuley
MeloTrans: A Text to Symbolic Music Generation Model Following Human Composition Habit
Yutian Wang, Wanyin Yang, Zhenrong Dai, Yilong Zhang, Kun Zhao, Hui Wang
RespLLM: Unifying Audio and Text with Multimodal LLMs for Generalized Respiratory Health Prediction
Yuwei Zhang, Tong Xia, Aaqib Saeed, Cecilia Mascolo
Editing Music with Melody and Text: Using ControlNet for Diffusion Transformer
Siyuan Hou, Shansong Liu, Ruibin Yuan, Wei Xue, Ying Shan, Mangsuo Zhao, Chao Zhang
Generalizable Prompt Tuning for Vision-Language Models
Qian Zhang
Bridging the Gap between Text, Audio, Image, and Any Sequence: A Novel Approach using Gloss-based Annotation
Sen Fang, Yalin Feng, Sizhou Chen, Xiaofeng Zhang, Teik Toe Teoh
Image First or Text First? Optimising the Sequencing of Modalities in Large Language Model Prompting and Reasoning Tasks
Grant Wardle, Teo Susnjak