Emergent Communication
Emergent communication research investigates how language-like communication systems arise spontaneously in multi-agent systems, primarily using deep reinforcement learning models and variations of the Lewis signaling game. Current research focuses on improving the compositionality, interpretability, and efficiency of emergent languages, often by incorporating attention mechanisms, inductive biases, and information bottleneck principles into model architectures. This field offers valuable insights into the origins and structure of human language, and has potential applications in areas such as human-computer interaction, multi-agent robotics, and network optimization.
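The Lewis signaling game at the core of this line of work can be sketched in a few lines. The following is a minimal, self-contained illustration using simple urn-style (Roth-Erev) reinforcement instead of the deep reinforcement learning models used in the papers below; the state/signal count, episode budget, and update rule are all illustrative assumptions, not taken from any specific paper.

```python
import random

random.seed(0)

N = 3          # number of world states, signals, and actions (a minimal Lewis game)
EPISODES = 5000

# Urn-style (Roth-Erev) reinforcement: each weight counts past successes.
sender = [[1.0] * N for _ in range(N)]    # sender[state][signal]
receiver = [[1.0] * N for _ in range(N)]  # receiver[signal][action]

def sample(weights):
    """Draw an index with probability proportional to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

def play_episode():
    state = random.randrange(N)           # nature picks a state
    signal = sample(sender[state])        # sender maps state -> signal
    action = sample(receiver[signal])     # receiver maps signal -> action
    reward = 1.0 if action == state else 0.0
    # Reinforce the choices that led to a successful coordination.
    sender[state][signal] += reward
    receiver[signal][action] += reward
    return reward

for _ in range(EPISODES):
    play_episode()

# After training, the agents typically settle on a shared signaling convention,
# i.e. a (proto-)language mapping each state to a distinct signal.
accuracy = sum(play_episode() for _ in range(1000)) / 1000
print(f"coordination accuracy: {accuracy:.2f}")
```

In deep-RL variants, the two lookup tables are replaced by neural sender and receiver networks trained with policy gradients, and the research questions listed above (compositionality, interpretability, efficiency) concern the structure of the signal protocol that emerges.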
Papers
Emergent Communication Protocol Learning for Task Offloading in Industrial Internet of Things
Salwa Mostafa, Mateus P. Mota, Alvaro Valcarce, Mehdi Bennis
Knowledge Distillation from Language-Oriented to Emergent Communication for Multi-Agent Remote Control
Yongjun Kim, Sejin Seo, Jihong Park, Mehdi Bennis, Seong-Lyun Kim, Junil Choi
Emergent Communication for Rules Reasoning
Yuxuan Guo, Yifan Hao, Rui Zhang, Enshuai Zhou, Zidong Du, Xishan Zhang, Xinkai Song, Yuanbo Wen, Yongwei Zhao, Xuehai Zhou, Jiaming Guo, Qi Yi, Shaohui Peng, Di Huang, Ruizhi Chen, Qi Guo, Yunji Chen
Lewis's Signaling Game as beta-VAE For Natural Word Lengths and Segments
Ryo Ueda, Tadahiro Taniguchi