SWIFT DynGFN

SWIFT encompasses a range of research projects that share a focus on accelerating computation and improving efficiency across diverse tasks. Current work emphasizes faster training methods for large language models (LLMs) and multi-modal LLMs, often leveraging architectures such as Swin Transformers and techniques such as importance sampling and processing-in-memory systems. These advances aim to improve the speed and scalability of AI applications in domains including drug discovery, image processing, and federated learning, benefiting both scientific research and the practical deployment of AI technologies. A recurring theme is reducing computational cost while maintaining or improving performance. A minimal sketch of one such technique appears below.
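As one illustration of the cost-reduction theme, the sketch below shows loss-proportional importance sampling for selecting training batches: examples with higher recent loss are drawn more often, and unbiased importance weights compensate for the skewed sampling. This is a hedged, hypothetical example under assumed names (`importance_sample_batch`, the `smoothing` parameter); it is not the implementation used in any specific SWIFT paper.

```python
import numpy as np

def importance_sample_batch(losses, batch_size, rng=None, smoothing=0.1):
    """Select a training batch by sampling examples in proportion to their
    recent loss, returning indices and unbiased importance weights.

    losses: per-example loss estimates from a previous pass (shape [N]).
    smoothing: mixes in a uniform component so no example has zero probability.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(losses)
    # Sampling distribution: mostly proportional to loss, partly uniform.
    p = (1 - smoothing) * losses / losses.sum() + smoothing / n
    idx = rng.choice(n, size=batch_size, replace=False, p=p)
    # Importance weights keep the mini-batch gradient an unbiased estimate
    # of the full-data gradient: w_i = 1 / (N * p_i).
    weights = 1.0 / (n * p[idx])
    return idx, weights

# Toy usage: high-loss examples are selected more often,
# but their gradient contributions are down-weighted accordingly.
losses = np.array([0.1, 0.2, 2.0, 0.05, 1.5, 0.3])
idx, w = importance_sample_batch(losses, batch_size=3)
print(idx, w)
```

The design choice here is the uniform smoothing term, which bounds the importance weights and keeps rarely-sampled examples from being ignored entirely; in practice the per-example losses would come from periodic full or partial passes over the training set.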

Papers