LD Align
LD Align, or Latent Distance Guided Alignment, aims to align large language models (LLMs) with human preferences without relying on extensive human annotation. Current research uses techniques such as optimal transport and latent-space analysis to guide the alignment process, often within frameworks that employ mixture-of-experts architectures or teacher-student models for efficient training. This line of work matters because it addresses the high cost and poor scalability of annotation-heavy alignment methods, potentially enabling more efficient and ethically sound AI systems across a range of applications.
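To make the core idea concrete, below is a minimal, hypothetical sketch (not any specific paper's method) of how a latent-space distance computed with entropic optimal transport could score model responses against a small set of preferred reference responses; such scores could then weight examples during preference fine-tuning. All function names and parameters here (sinkhorn_plan, latent_alignment_scores, reg) are illustrative assumptions.

```python
# Hypothetical sketch: score responses by their latent-space optimal-transport
# distance to a set of "preferred" reference latents. All names and parameter
# choices are illustrative, not taken from any specific LD Align paper.
import torch


def sinkhorn_plan(cost: torch.Tensor, reg: float = 0.1, n_iters: int = 200) -> torch.Tensor:
    """Entropic OT plan between two uniform discrete measures (Sinkhorn iterations)."""
    n, m = cost.shape
    a = torch.full((n,), 1.0 / n)          # uniform source weights
    b = torch.full((m,), 1.0 / m)          # uniform target weights
    K = torch.exp(-cost / reg)             # Gibbs kernel
    u = torch.ones(n)
    for _ in range(n_iters):
        v = b / (K.t() @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]     # transport plan; rows sum to 1/n


def latent_alignment_scores(resp_latents: torch.Tensor,
                            ref_latents: torch.Tensor,
                            reg: float = 0.1) -> torch.Tensor:
    """Map each response latent to a (0, 1] score: closer to the reference set => higher."""
    cost = torch.cdist(resp_latents, ref_latents) ** 2
    cost = cost / cost.max()               # normalize so the Gibbs kernel does not underflow
    plan = sinkhorn_plan(cost, reg)
    # Expected transport cost per response (each plan row sums to 1/n, so rescale by n).
    per_sample_cost = (plan * cost).sum(dim=1) * resp_latents.shape[0]
    return torch.exp(-per_sample_cost)


# Toy usage: 8 response latents scored against 5 reference latents.
resp = torch.randn(8, 16)    # e.g., pooled hidden states of sampled responses
ref = torch.randn(5, 16)     # e.g., pooled hidden states of preferred demonstrations
weights = latent_alignment_scores(resp, ref)
print(weights)               # candidate per-example weights for alignment fine-tuning
```

In a full pipeline, weights like these might multiply per-example losses in a DPO- or SFT-style objective, so that responses far from the preferred latent region contribute less to each update; the choice of entropic regularization (reg) trades off sharpness of the scores against numerical stability.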
Papers
Nineteen papers, dated December 28, 2023 through June 13, 2024.