Post Training
Post-training techniques aim to improve or adapt pre-trained machine learning models without extensive retraining, offering substantial savings in computation and time. Current research spans several directions: quantization (e.g., algorithms such as GPTQ and CDQuant) to reduce model size and inference cost; adaptive inference strategies, such as early exiting and input-dependent compression, that match compute to the difficulty of each input; and alignment techniques that mitigate failure modes such as unintended sophistry in large language models. These advances are crucial for deploying large models on resource-constrained devices and for improving the efficiency and reliability of AI systems across applications.
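To make the quantization idea concrete, the sketch below shows simple symmetric per-channel round-to-nearest weight quantization in NumPy. This is a minimal illustration, not GPTQ or CDQuant themselves: those methods additionally use calibration data and (approximate) second-order information to compensate for rounding error, while the function names and shapes here are chosen for illustration only.

```python
import numpy as np

def quantize_per_channel(w, n_bits=8):
    """Symmetric per-output-channel round-to-nearest quantization.

    A simplified post-training quantization sketch: each row of the
    weight matrix gets its own scale so that its largest magnitude
    maps to the top of the integer range.
    """
    qmax = 2 ** (n_bits - 1) - 1                # e.g. 127 for int8
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    scale = np.where(scale == 0, 1.0, scale)    # avoid divide-by-zero
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float weight matrix from int8 codes."""
    return q.astype(np.float32) * scale

# Quantize a small random weight matrix and measure rounding error.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 16)).astype(np.float32)
q, scale = quantize_per_channel(w)
w_hat = dequantize(q, scale)
err = float(np.abs(w - w_hat).max())
```

The maximum reconstruction error is bounded by half a quantization step per channel, which is why 8-bit round-to-nearest is often a usable baseline before applying more sophisticated error-compensating methods.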
Papers
November 15, 2024
November 5, 2024
October 28, 2024
October 17, 2024
September 19, 2024
August 6, 2024
June 25, 2024
June 20, 2024
June 13, 2024
June 11, 2024
May 12, 2024
April 3, 2024
March 12, 2024
February 3, 2024
January 29, 2024
January 23, 2024
January 4, 2024
December 12, 2023
November 16, 2023