SWIFT DynGFN
SWIFT, a name shared by several research projects, broadly focuses on accelerating computation and improving efficiency across diverse tasks. Current work emphasizes faster training methods for large language models (LLMs) and multi-modal LLMs, often building on architectures such as Swin Transformers and employing techniques such as importance sampling and processing-in-memory systems. These advances aim to improve the speed and scalability of AI applications in domains including drug discovery, image processing, and federated learning, with implications for both scientific research and practical deployment of AI technologies. A recurring theme is reducing computational cost while maintaining or improving performance.
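Of the techniques named above, importance sampling is the most self-contained to illustrate. The sketch below is a generic, minimal example (not taken from any SWIFT paper): it estimates an expectation under a target distribution p by drawing samples from an easier proposal q and reweighting them, the same reweighting idea that training-speedup methods use to sample informative examples more often without biasing the objective. All names and distributions here are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of importance sampling (not from any SWIFT paper):
# estimate E_p[f(x)] using samples drawn from a proposal q, corrected by
# the weights w(x) = p(x) / q(x).

rng = np.random.default_rng(0)

def p_pdf(x):
    # Target density: standard normal N(0, 1).
    return np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)

def q_pdf(x):
    # Proposal density: wider normal N(0, 2^2), easy to sample from.
    return np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))

def f(x):
    # Quantity whose expectation under p we want; here f(x) = x^2,
    # so the true answer is Var_p(x) = 1.
    return x**2

x = rng.normal(0.0, 2.0, size=100_000)  # samples from the proposal q
w = p_pdf(x) / q_pdf(x)                 # importance weights
estimate = np.mean(w * f(x))            # unbiased estimate of E_p[f(x)]
print(f"E_p[x^2] ~= {estimate:.3f} (true value: 1.0)")
```

The same weighting scheme underlies importance-sampled training: examples drawn more (or less) often than their natural frequency are down- (or up-) weighted so the estimated gradient or loss remains unbiased.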