Language Alignment
Language alignment focuses on bridging the semantic gap between different languages and modalities, aiming to improve the performance and cross-lingual capabilities of large language models (LLMs). Current research emphasizes techniques like cross-lingual instruction tuning, Nash learning with adaptive feedback, and hierarchical graph tokenization to achieve better alignment, often leveraging parallel data and incorporating human feedback or preference models. These advancements are crucial for building more robust and inclusive LLMs, enabling improved multilingual applications in areas such as machine translation, bug localization, and cross-cultural communication.
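One of the techniques named above, cross-lingual instruction tuning on parallel data, can be sketched as pairing an English instruction with a target-language response drawn from a parallel corpus. The function and field names below are illustrative assumptions, not the format of any specific paper.

```python
# A minimal sketch of assembling a cross-lingual instruction-tuning
# example from a parallel sentence pair. The field names and prompt
# template are hypothetical, chosen only to illustrate the idea.

def make_cross_lingual_example(src_sentence: str,
                               tgt_sentence: str,
                               tgt_language: str) -> dict:
    """Pair an English instruction with a target-language response.

    The supervision signal (the `output` field) comes directly from
    the parallel corpus, so no human annotation is needed per example.
    """
    instruction = (
        f"Translate the following English sentence into {tgt_language}."
    )
    return {
        "instruction": instruction,
        "input": src_sentence,
        "output": tgt_sentence,
    }


example = make_cross_lingual_example(
    "The weather is nice today.",
    "Il fait beau aujourd'hui.",
    "French",
)
```

In practice such examples are mixed into a broader instruction-tuning set, so the model learns to follow instructions while grounding target-language generation in the parallel data.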