LD Align
LD Align, or Latent Distance Guided Alignment, focuses on aligning large language models (LLMs) with human preferences without relying on extensive human annotation. Current research emphasizes techniques such as optimal transport and latent-space analysis to guide the alignment process, often within frameworks that employ mixture-of-experts architectures or teacher-student models for efficient training. This research area is significant because it addresses the high cost and poor scalability of existing LLM alignment methods, potentially leading to more efficient and ethically sound AI systems across a range of applications.
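To make the optimal-transport idea concrete, the sketch below is a minimal, hypothetical illustration (not the method from any specific paper): it measures a per-dimension 1-D Wasserstein (optimal transport) distance between a student model's latent activations and a preference-aligned teacher's, which could serve as an annotation-free alignment penalty. All function names and shapes here are assumptions for illustration.

```python
import numpy as np

def wasserstein_1d(a, b):
    """Empirical 1-D Wasserstein (optimal transport) distance between two
    equal-length sample sets: the mean absolute gap of the sorted values."""
    a_sorted = np.sort(np.asarray(a, dtype=float))
    b_sorted = np.sort(np.asarray(b, dtype=float))
    return float(np.mean(np.abs(a_sorted - b_sorted)))

def latent_alignment_penalty(student_latents, teacher_latents):
    """Average per-dimension OT distance between student and teacher latent
    activations (samples x dims); a small value means the student's latent
    distribution tracks the preference-aligned teacher's."""
    s = np.atleast_2d(np.asarray(student_latents, dtype=float))
    t = np.atleast_2d(np.asarray(teacher_latents, dtype=float))
    return float(np.mean([wasserstein_1d(s[:, d], t[:, d])
                          for d in range(s.shape[1])]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.normal(size=(128, 4))   # stand-in for teacher latents
    # Identical distributions incur zero penalty.
    print(latent_alignment_penalty(teacher, teacher))        # 0.0
    # A uniformly shifted student incurs a penalty equal to the shift.
    print(latent_alignment_penalty(teacher + 1.0, teacher))  # 1.0
```

Because the 1-D distance reduces to comparing sorted samples, the penalty is cheap to compute per latent dimension, which is one reason OT-style distances are attractive as training-time guidance signals.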
Papers
19 papers, dated March 28, 2023 through December 21, 2023.