Natural Language
Natural language processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research relies heavily on large language models (LLMs), such as BERT, to tackle diverse tasks including text-to-SQL translation, semantic analysis of images, and even controlling robots via natural language commands. The field's impact spans many sectors, from improving search engines and e-commerce platforms to advancing healthcare diagnostics and making scientific research more efficient through automated literature analysis and data extraction.
Papers
Learning to Compress Prompt in Natural Language Formats
Yu-Neng Chuang, Tianwei Xing, Chia-Yuan Chang, Zirui Liu, Xun Chen, Xia Hu
Beyond Natural Language: LLMs Leveraging Alternative Formats for Enhanced Reasoning and Communication
Weize Chen, Chenfei Yuan, Jiarui Yuan, Yusheng Su, Chen Qian, Cheng Yang, Ruobing Xie, Zhiyuan Liu, Maosong Sun
Social Intelligence Data Infrastructure: Structuring the Present and Navigating the Future
Minzhi Li, Weiyan Shi, Caleb Ziems, Diyi Yang
COMPASS: Computational Mapping of Patient-Therapist Alliance Strategies with Language Modeling
Baihan Lin, Djallel Bouneffouf, Yulia Landa, Rachel Jespersen, Cheryl Corcoran, Guillermo Cecchi
Vision-Language Navigation with Embodied Intelligence: A Survey
Peng Gao, Peng Wang, Feng Gao, Fei Wang, Ruyue Yuan
Online Foundation Model Selection in Robotics
Po-han Li, Oyku Selin Toprak, Aditya Narayanan, Ufuk Topcu, Sandeep Chinchali
ChatCell: Facilitating Single-Cell Analysis with Natural Language
Yin Fang, Kangwei Liu, Ningyu Zhang, Xinle Deng, Penghui Yang, Zhuo Chen, Xiangru Tang, Mark Gerstein, Xiaohui Fan, Huajun Chen