Frozen Large Language Model

Frozen large language models (LLMs) are a research direction that leverages pre-trained LLMs without updating their weights, improving training efficiency while preserving the model's general-purpose capabilities across diverse tasks. Current work explores architectures such as inner-adaptor designs and a range of prompt-engineering techniques to integrate LLMs with other modalities (e.g., vision, audio) or to improve performance on specific tasks such as knowledge graph completion and visual question answering; in these setups only lightweight components (adapters, projection layers, or soft prompts) are trained while the LLM backbone stays fixed. Because the backbone is never updated, this approach sharply reduces training cost and the risk of catastrophic forgetting, making it attractive for building efficient, adaptable systems on top of a single shared model.
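The sketch below illustrates the common frozen-LLM recipe in PyTorch with Hugging Face Transformers: every backbone parameter is frozen, and only a small projection layer is trained to map features from another modality into the LLM's token-embedding space. The model name ("gpt2" as a small stand-in), the 512-dimensional vision features, and the caption text are illustrative assumptions, not details from any specific paper.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in for a larger pre-trained LLM (assumption for illustration).
llm = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Freeze every LLM parameter: no gradient updates touch the backbone.
for param in llm.parameters():
    param.requires_grad = False
llm.eval()

# The only trainable component: project vision features (assumed 512-d,
# e.g. from a frozen image encoder) into the LLM's embedding space.
vision_dim, llm_dim = 512, llm.config.hidden_size
projector = nn.Linear(vision_dim, llm_dim)
optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-4)

# Toy batch: four "visual tokens" per image plus a text caption.
image_features = torch.randn(1, 4, vision_dim)  # hypothetical encoder output
text = tokenizer("a photo of a cat", return_tensors="pt")
text_embeds = llm.get_input_embeddings()(text.input_ids)

# Prepend projected visual tokens to the text-token embeddings.
visual_embeds = projector(image_features)
inputs_embeds = torch.cat([visual_embeds, text_embeds], dim=1)

# Compute the loss only on text positions; -100 masks the visual prefix.
labels = torch.cat(
    [torch.full((1, visual_embeds.size(1)), -100), text.input_ids], dim=1
)
attention_mask = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)

outputs = llm(inputs_embeds=inputs_embeds,
              attention_mask=attention_mask,
              labels=labels)
outputs.loss.backward()  # gradients flow only into the projector
optimizer.step()
```

Gradient flow through the frozen backbone still reaches the projector because the projected embeddings are part of the computation graph; only the projector's weights are ever updated, which is what keeps training cheap and leaves the LLM's original capabilities intact.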

Papers