Text Modality
Text modality research explores how textual information can be integrated with other data modalities (e.g., images, audio, video) to improve the performance and capabilities of AI models. Current work focuses on multimodal models built on transformer architectures and diffusion models, often incorporating techniques such as prompt tuning and meta-learning to improve controllability and generalization. This line of research matters because it enables AI systems that can understand and generate complex information across data types, with applications ranging from improved medical diagnosis to more realistic virtual environments.
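As a concrete illustration of the kind of integration described above, the sketch below shows a simple late-fusion scheme: per-modality feature vectors are linearly projected into a shared embedding space and then fused. This is a minimal sketch assuming NumPy; all dimensions, weight matrices, and function names here are illustrative, not taken from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def project(features, weight):
    """Linearly project modality-specific features into a shared space."""
    return features @ weight

def fuse(text_emb, image_emb):
    """Fuse modality embeddings by concatenation + L2 normalization."""
    joint = np.concatenate([text_emb, image_emb], axis=-1)
    return joint / np.linalg.norm(joint, axis=-1, keepdims=True)

# Toy dimensions: text features (768-d), image features (512-d),
# each projected into a 256-d shared space before fusion.
text_feat = rng.standard_normal(768)
image_feat = rng.standard_normal(512)
W_text = rng.standard_normal((768, 256)) * 0.01
W_image = rng.standard_normal((512, 256)) * 0.01

joint_emb = fuse(project(text_feat, W_text), project(image_feat, W_image))
print(joint_emb.shape)  # (512,)
```

In practice the projection weights would be learned jointly (e.g., with a contrastive or generative objective) rather than sampled at random; the point here is only the shape of the pipeline: encode per modality, project to a shared space, fuse.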
571 papers
Papers - Page 5
January 7, 2025
KG-TRICK: Unifying Textual and Relational Information Completion of Knowledge for Multilingual Knowledge Graphs
Zelin Zhou, Simone Conia, Daniel Lee, Min Li, Shenglei Huang, Umar Farooq Minhas, Saloni Potdar, Henry Xiao, Yunyao Li
Text to Band Gap: Pre-trained Language Models as Encoders for Semiconductor Band Gap Prediction
Ying-Ting Yeh, Janghoon Ock, Amir Barati Farimani
January 6, 2025
Leveraging Explainable AI for LLM Text Attribution: Differentiating Human-Written and Multiple LLMs-Generated Text
Ayat Najjar, Huthaifa I. Ashqar, Omar Darwish, Eman Hammad
Visual Large Language Models for Generalized and Specialized Applications
Yifan Li, Zhixin Lai, Wentao Bao, Zhen Tan, Anh Dao, Kewei Sui, Jiayi Shen, Dong Liu, Huan Liu, Yu Kong
QuIM-RAG: Advancing Retrieval-Augmented Generation with Inverted Question Matching for Enhanced QA Performance
Binita Saha, Utsha Saha, Muhammad Zubair Malik