Communication Perspective

Communication-focused research in artificial intelligence explores how to make AI systems, particularly large language models (LLMs), more reliable and trustworthy by improving the efficiency of their communication and aligning their outputs with human values. Current research emphasizes strategies that reduce communication overhead in federated learning and that improve the robustness of LLMs through techniques inspired by communication theory, such as reranking generated outputs and employing tilted exponential layers. This work is crucial for the safe and responsible deployment of AI: it addresses issues such as hallucinations and bias, and it fosters more effective human-AI collaboration across diverse applications.
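
To make the reranking idea above concrete, the following is a minimal, generic sketch of candidate reranking: sample several responses and keep the one preferred by an external scorer. It is not the method of any particular paper; `generate_fn`, `score_fn`, and the toy stand-ins in the demo are hypothetical placeholders for a real LLM sampler and a learned reward or factuality model.

```python
# Minimal sketch of output reranking for LLM robustness (illustrative only).
# Sample several candidate responses, score each with an external scorer,
# and return the highest-scoring one.
from typing import Callable, List, Tuple
import random


def rerank_generate(
    prompt: str,
    generate_fn: Callable[[str], str],
    score_fn: Callable[[str, str], float],
    num_candidates: int = 5,
) -> Tuple[str, List[Tuple[float, str]]]:
    """Sample num_candidates responses and return the best-scoring one."""
    candidates = [generate_fn(prompt) for _ in range(num_candidates)]
    scored = sorted(
        ((score_fn(prompt, c), c) for c in candidates),
        key=lambda pair: pair[0],
        reverse=True,
    )
    best_score, best_response = scored[0]
    return best_response, scored


if __name__ == "__main__":
    # Toy stand-ins: a "model" that emits noisy answers and a "scorer" that
    # rewards a crude factuality signal and conciseness. A real setup would
    # plug in an LLM sampler and a learned reward/consistency model here.
    answers = [
        "Paris is the capital of France.",
        "I think the capital of France might be Paris, possibly.",
        "The capital of France is Lyon.",  # deliberately wrong candidate
    ]

    def toy_generate(prompt: str) -> str:
        return random.choice(answers)

    def toy_score(prompt: str, response: str) -> float:
        score = -float(len(response))  # prefer concise answers
        if "Paris" in response:
            score += 100.0             # crude factuality signal
        return score

    best, ranked = rerank_generate(
        "What is the capital of France?", toy_generate, toy_score
    )
    print("Best response:", best)
```

In practice the scorer is the interesting design choice: it may be a reward model, a self-consistency vote over the sampled candidates, or a verifier, and the generator is typically sampled at nonzero temperature so the candidates actually differ.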

Papers