LD Align
LD Align, or Latent Distance Guided Alignment, aims to improve the alignment of large language models (LLMs) with human preferences without relying on extensive human annotation. Current research emphasizes techniques such as optimal transport and latent space analysis to guide the alignment process, often within frameworks that employ mixture-of-experts architectures or teacher-student models for efficient training. This research area is significant because it addresses the high cost and poor scalability of existing LLM alignment methods, potentially leading to more efficient and ethically sound AI systems across a range of applications.
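The summary above describes latent distance guidance only at a high level, so the following is a minimal, hypothetical sketch of the general idea rather than any specific published method: preference pairs whose chosen and rejected responses lie far apart in a latent embedding space are treated as more informative and given higher weight during training. All names (`cosine_distance`, `latent_guidance_weights`) and the weighting scheme itself are illustrative assumptions.

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cosine similarity between two latent vectors.
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def latent_guidance_weights(chosen_latents, rejected_latents):
    # Hypothetical guidance signal: pairs whose chosen/rejected
    # responses are far apart in latent space are weighted higher,
    # on the assumption that they carry a clearer preference signal.
    dists = np.array([cosine_distance(c, r)
                      for c, r in zip(chosen_latents, rejected_latents)])
    # Normalize to a probability-like weighting over the batch.
    return dists / dists.sum()

rng = np.random.default_rng(0)
chosen = rng.normal(size=(4, 8))    # placeholder latents of preferred responses
rejected = rng.normal(size=(4, 8))  # placeholder latents of dispreferred responses
weights = latent_guidance_weights(chosen, rejected)
print(weights.round(3))
```

In a full training loop, these weights could scale each pair's contribution to a preference loss (e.g., a DPO-style objective), so that annotation-free latent distances stand in for human-provided confidence labels.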