Post Training
Post-training techniques improve or adapt pre-trained machine learning models without extensive retraining, offering significant savings in computation and time. Current research spans several directions: quantization (e.g., algorithms such as GPTQ and CDQuant) to reduce model size and inference cost, adaptive inference strategies (such as early exiting and input-dependent compression) to optimize resource usage, and techniques that improve model alignment and mitigate issues such as unintended sophistry in large language models. These advances are crucial for deploying large models on resource-constrained devices and for improving the efficiency and reliability of AI systems across applications.
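To make the quantization idea concrete, below is a minimal sketch of symmetric per-tensor int8 post-training quantization in NumPy. This is deliberately simplified: methods like GPTQ additionally use calibration data and layer-wise error compensation, none of which is shown here. All function names are illustrative, not from any particular library.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 post-training quantization.

    Maps float weights to int8 by a single scale factor; the rounding
    error per element is bounded by scale / 2. (A minimal sketch only;
    real PTQ methods calibrate on data and compensate for error.)
    """
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover an approximate float tensor from int8 weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)

q, scale = quantize_int8(w)        # 4x smaller storage than float32
w_hat = dequantize_int8(q, scale)  # approximate reconstruction
max_err = np.abs(w - w_hat).max()  # bounded by scale / 2
```

The design choice here (one scale per tensor) trades accuracy for simplicity; per-channel scales, as used in most production quantization pipelines, reduce error further at negligible storage cost.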