Paper ID: 2401.12492
Comparing Pre-trained Human Language Models: Is it Better with Human Context as Groups, Individual Traits, or Both?
Nikita Soni, Niranjan Balasubramanian, H. Andrew Schwartz, Dirk Hovy
Incorporating human context into language models is the next frontier for human-centered natural language processing. Currently, two pre-training methods exist: group-wise attributes (e.g., over-45-year-olds) and individual traits. Group attributes are coarse (not all 45-year-olds write the same way), while modeling individual traits allows for a more personalized representation but requires more complex modeling and data. So far, it is unclear which pre-training approach benefits which tasks. We compare pre-training models with human context via 1) group attributes, 2) individual users, and 3) a combined approach on five user- and document-level tasks. We find that pre-training with both group and individual features significantly improves the two user-level regression tasks, age estimation and personality assessment. Pre-training on individual users significantly improves the three document-level classification tasks, such as stance and topic detection, and it performs well even on downstream tasks where no historical user data is available. Our results suggest both approaches have specific use cases, opening new avenues for human-centered language modeling.
Submitted: Jan 23, 2024