Prompt Representation
Prompt representation research focuses on encoding instructions or contextual information effectively to guide large language models (LLMs) and other AI systems. Current efforts concentrate on developing robust and efficient prompt representations, including techniques such as prompt obfuscation for intellectual property protection, contrastive learning for improved control of voice characteristics in speech synthesis, and the use of vision-language models to bring existing world knowledge into reinforcement learning. These advances matter because they improve the controllability, security, and generalization of AI systems, with applications ranging from text-to-speech to few-shot learning and beyond.
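As one illustration of the contrastive-learning approach mentioned above, a minimal sketch of an InfoNCE-style loss over prompt (or style) embeddings is shown below. The function name, embedding dimensions, and the way positives are generated are all illustrative assumptions, not a specific paper's method: each anchor embedding is pulled toward its matching positive and pushed away from the other samples in the batch.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE contrastive loss (illustrative sketch).

    anchors, positives: (batch, dim) arrays; row i of `positives`
    is the matching pair for row i of `anchors`.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Diagonal entries correspond to the matching (anchor, positive) pairs.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 32))
# Positives that are slight perturbations of the anchors score a low loss;
# unrelated random positives score a higher one.
loss_matched = info_nce(emb, emb + 0.01 * rng.normal(size=(4, 32)))
loss_random = info_nce(emb, rng.normal(size=(4, 32)))
```

Trained with such a loss, nearby embeddings come to encode similar prompt or style characteristics, which is what makes the learned representation useful as a control signal.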