Chinese Humor
Research on Chinese humor aims to computationally understand and generate this culturally nuanced form of language, a significant challenge given its reliance on context and implicit meaning. Current efforts leverage pre-trained language models (PLMs) and transformer architectures, often incorporating techniques such as contrastive learning and multi-modal modeling to analyze both textual and visual elements across many forms of humor, including jokes, allegorical sayings (xiehouyu), and short-form videos. Large, annotated Chinese humor datasets are crucial for training and evaluating these models, and improved accuracy in humor detection and generation feeds into broader advances in natural language processing and affective computing. This work also highlights the need for models to recognize diverse humor subtypes, including offensive humor.
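To make the contrastive-learning idea concrete, the sketch below implements an InfoNCE-style objective of the kind commonly used with PLM sentence embeddings: an anchor joke is pulled toward a positive example (e.g., a paraphrase or another joke with the same punchline structure) and pushed away from negatives (e.g., non-humorous sentences). This is an illustrative sketch over pre-computed embedding vectors, not code from any specific paper; the function name and inputs are hypothetical.

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss over pre-computed embeddings.

    anchor, positive: embedding vectors (lists of floats), e.g. PLM
    sentence embeddings of a joke and a humor-preserving paraphrase.
    negatives: list of embedding vectors for non-matching sentences.
    Lower loss means the anchor is closer to the positive than to
    the negatives in cosine-similarity space.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Temperature-scaled similarities, exponentiated as in softmax.
    pos = math.exp(cosine(anchor, positive) / temperature)
    negs = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    # Negative log-probability of picking the positive among all candidates.
    return -math.log(pos / (pos + negs))

# Toy 2-D embeddings: the loss is small when the positive is near the
# anchor, and large when a negative sits closer than the positive.
loss_aligned = info_nce_loss([1.0, 0.0], [0.9, 0.1], [[0.0, 1.0]])
loss_misaligned = info_nce_loss([1.0, 0.0], [0.0, 1.0], [[0.9, 0.1]])
```

In practice the embeddings would come from a Chinese PLM encoder (e.g., a BERT-family model) and the loss would be backpropagated through it; the pure-Python version above only isolates the objective itself.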