ERNIE ViLG
ERNIE is a family of large pre-trained models developed primarily for Chinese language processing, with expanding multilingual capabilities, that targets a broad range of NLP and vision-language tasks. Current research emphasizes architectural advances such as diffusion models for text-to-image generation and multi-view contrastive learning for stronger image-text alignment, alongside knowledge distillation techniques that yield smaller, more efficient models. These advances have produced state-of-the-art results in areas including text-to-image synthesis, cross-lingual text-to-speech, and code generation, demonstrating ERNIE's potential across both research and industrial applications.
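To make the image-text alignment objective mentioned above concrete, the following is a minimal sketch of a symmetric contrastive (InfoNCE-style) loss over paired image and text embeddings. It is a generic illustration of this class of objective, not ERNIE's actual implementation; the function name `image_text_contrastive_loss` and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def image_text_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric contrastive loss for a batch of paired image/text embeddings.

    Generic sketch: matching image-text pairs are pulled together and
    mismatched pairs pushed apart, as in CLIP-style contrastive training.
    """
    # L2-normalize so the dot product is a cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: logits[i, j] = sim(image_i, text_j) / T.
    logits = image_emb @ text_emb.t() / temperature

    # Matching pairs lie on the diagonal of the similarity matrix.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```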