Text-Based
Research on text-based methods focuses on improving the understanding, generation, and analysis of textual data, leveraging advances in large language models (LLMs) and multimodal models. Current efforts concentrate on enhancing causal inference from textual data, mitigating issues such as hallucination and bias in LLM outputs, and developing methods for detecting AI-generated text. This work has significant implications for fields including digital forensics, content moderation, and the development of more robust and reliable AI systems.
Papers
TM2T: Stochastic and Tokenized Modeling for the Reciprocal Generation of 3D Human Motions and Texts
Chuan Guo, Xinxin Zuo, Sen Wang, Li Cheng
Location reference recognition from texts: A survey and comparison
Xuke Hu, Zhiyong Zhou, Hao Li, Yingjie Hu, Fuqiang Gu, Jens Kersten, Hongchao Fan, Friederike Klan
I still have Time(s): Extending HeidelTime for German Texts
Andy Lücking, Manuel Stoeckel, Giuseppe Abrami, Alexander Mehler
Rumor Detection with Self-supervised Learning on Texts and Social Graph
Yuan Gao, Xiang Wang, Xiangnan He, Huamin Feng, Yongdong Zhang
Multimodal Hate Speech Detection from Bengali Memes and Texts
Md. Rezaul Karim, Sumon Kanti Dey, Tanhim Islam, Md. Shajalal, Bharathi Raja Chakravarthi