Semantic Parsing
Semantic parsing aims to translate natural language into formal, structured representations, enabling computers to understand and act upon human instructions. Current research focuses on improving the accuracy and robustness of semantic parsers, particularly using large language models and sequence-to-sequence architectures, often augmented with techniques like in-context learning and grammar constraints to handle ambiguity and improve generalization. This field is crucial for bridging the gap between human language and machine action, with applications ranging from question answering and database querying to controlling robots and other intelligent systems. Ongoing efforts address challenges like handling complex queries, diverse data sources, and cross-lingual transfer.
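As a concrete illustration of the in-context learning approach mentioned above, a minimal sketch of few-shot prompt construction for text-to-SQL parsing is shown below. The exemplar questions, schema, and prompt format are hypothetical; a real system would send the assembled prompt to a large language model and could further validate the returned SQL against a grammar.

```python
# Illustrative sketch: assembling a few-shot (in-context learning) prompt
# for text-to-SQL semantic parsing. Exemplars and table names are made up
# for demonstration; a real parser would pass this prompt to an LLM and
# check the generated SQL against the target grammar/schema.

EXEMPLARS = [
    ("How many employees are there?",
     "SELECT COUNT(*) FROM employees;"),
    ("List the names of employees in the sales department.",
     "SELECT name FROM employees WHERE department = 'sales';"),
]

def build_prompt(question: str) -> str:
    """Format exemplar (question, SQL) pairs followed by the new question."""
    parts = ["Translate each question into SQL.\n"]
    for q, sql in EXEMPLARS:
        parts.append(f"Q: {q}\nSQL: {sql}\n")
    parts.append(f"Q: {question}\nSQL:")  # model completes after "SQL:"
    return "\n".join(parts)

prompt = build_prompt("How many employees work in engineering?")
print(prompt)
```

The exemplars give the model the mapping convention (natural-language question to SQL query) purely through context, without any parameter updates — the core idea behind in-context learning for semantic parsing.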
Papers
Bootstrapping Multilingual Semantic Parsers using Large Language Models
Abhijeet Awasthi, Nitish Gupta, Bidisha Samanta, Shachi Dave, Sunita Sarawagi, Partha Talukdar
CLASP: Few-Shot Cross-Lingual Data Augmentation for Semantic Parsing
Andy Rosenbaum, Saleh Soltan, Wael Hamza, Amir Saffari, Marco Damonte, Isabel Groves
Compositional Semantic Parsing with Large Language Models
Andrew Drozdov, Nathanael Schärli, Ekin Akyürek, Nathan Scales, Xinying Song, Xinyun Chen, Olivier Bousquet, Denny Zhou
Generate-and-Retrieve: use your predictions to improve retrieval for semantic parsing
Yury Zemlyanskiy, Michiel de Jong, Joshua Ainslie, Panupong Pasupat, Peter Shaw, Linlu Qiu, Sumit Sanghai, Fei Sha