Pragmatic Inference
Pragmatic inference concerns how humans and machines derive meaning beyond the literal words spoken, drawing on context and unspoken assumptions to recover the speaker's intended message. Current research investigates how large language models (LLMs) perform pragmatic inference, particularly with respect to scalar implicatures and Grice's maxims, using methods such as cosine similarity and chain-of-thought prompting to evaluate their performance across multiple languages. This work matters for improving human-computer interaction and for building AI systems capable of nuanced communication, especially in sensitive domains such as healthcare, where understanding implicit meaning is vital. A key focus is the development of datasets and evaluation frameworks for pragmatic competence, enabling more rigorous assessment of LLMs' abilities.
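To illustrate the cosine-similarity style of evaluation mentioned above, here is a minimal sketch: a model's paraphrase of an utterance such as "Some students passed" is embedded and compared against embeddings of a pragmatic gloss ("some but not all passed") and a literal gloss ("at least one, possibly all, passed"). The embedding vectors below are hypothetical toy values, not outputs of any particular encoder; in practice they would come from a sentence-embedding model.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical sentence embeddings (in practice produced by an encoder model):
model_reading   = [0.8, 0.1, 0.3]   # LLM's paraphrase of "Some students passed"
pragmatic_gloss = [0.9, 0.0, 0.2]   # "Some but not all students passed"
literal_gloss   = [0.2, 0.9, 0.1]   # "At least one (possibly all) passed"

# Higher similarity to the pragmatic gloss suggests the model drew
# the scalar implicature rather than the purely literal reading.
drew_implicature = (cosine_similarity(model_reading, pragmatic_gloss)
                    > cosine_similarity(model_reading, literal_gloss))
print(drew_implicature)
```

The same comparison scales to a dataset: scoring each model response against paired pragmatic and literal glosses yields an aggregate measure of how often implicatures are drawn.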