LLM Simulation
LLM simulation uses large language models to build computational models of human behavior and social phenomena, with the aim of understanding and potentially mitigating complex societal issues. Current research focuses on simulating opinion dynamics in social networks, examining how demographic and social factors (personas) shape LLM outputs, and analyzing the biases inherent in these simulations, often using techniques such as persona prompting and self-fine-tuning. The approach offers insight into human behavior and social processes, with applications ranging from improving machine translation for low-resource languages to making medical diagnoses more interpretable and informing strategies for addressing social polarization. A minimal sketch of persona prompting in an opinion-dynamics loop is given below.
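The sketch below illustrates the persona-prompting pattern mentioned above in a toy opinion-dynamics simulation: each simulated agent is conditioned on a demographic persona, observes its neighbors' stated opinions, and restates its own stance via an LLM call. The agent attributes, topic, network structure, and the `call_llm` helper are illustrative assumptions, not drawn from any of the listed papers; the helper must be replaced with a real chat-completion client.

```python
# Minimal sketch of persona prompting for opinion-dynamics simulation.
# `call_llm` is a hypothetical stand-in for any chat-completion client.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion request."""
    raise NotImplementedError("Plug in your LLM provider here.")

# Each agent carries a persona (demographic / social attributes) and an opinion.
agents = [
    {"id": 0, "persona": "a 34-year-old urban teacher, politically moderate", "opinion": "neutral"},
    {"id": 1, "persona": "a 58-year-old rural farmer, fiscally conservative", "opinion": "opposed"},
    {"id": 2, "persona": "a 22-year-old student, progressive", "opinion": "supportive"},
]

# Social network: which agents each agent hears from in a round.
edges = {0: [1, 2], 1: [0], 2: [0]}

TOPIC = "a proposed carbon tax"

def step(agents, edges):
    """One round: each agent reads its neighbors' opinions and restates its own."""
    new_opinions = {}
    for agent in agents:
        neighbor_views = [agents[j]["opinion"] for j in edges[agent["id"]]]
        system = f"You are {agent['persona']}. Answer in one short sentence."
        user = (
            f"Your current stance on {TOPIC}: {agent['opinion']}. "
            f"Your neighbors say: {neighbor_views}. "
            "State your (possibly updated) stance."
        )
        new_opinions[agent["id"]] = call_llm(system, user)
    # Update all agents simultaneously so the round order does not bias results.
    for agent in agents:
        agent["opinion"] = new_opinions[agent["id"]]

# Run a few rounds and inspect whether stances converge or polarize:
# for _ in range(5):
#     step(agents, edges)
```

Keeping the update synchronous (collecting all new opinions before applying them) avoids order effects within a round; bias analyses of the kind described above would then compare how the simulated trajectories shift as the personas are varied.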
Papers
Social Science Meets LLMs: How Reliable Are Large Language Models in Social Simulations?
Yue Huang, Zhengqing Yuan, Yujun Zhou, Kehan Guo, Xiangqi Wang, Haomin Zhuang, Weixiang Sun, Lichao Sun, Jindong Wang, Yanfang Ye, Xiangliang Zhang
FPE-LLM: Highly Intelligent Time-Series Forecasting and Language Interaction LLM in Energy Systems
Zihang Qiu, Chaojie Li, Zhongyang Wang, Huadong Mo, Renyou Xie, Guo Chen, Zhaoyang Dong