Modular Deep Learning
Modular deep learning aims to improve the efficiency, interpretability, and scalability of deep neural networks by decomposing them into independent, reusable modules. Current research focuses on architectures and training methods, such as self-supervised learning and knowledge distillation, that enable efficient transfer across tasks and models; parameter-efficient fine-tuning (PEFT) modules, which adapt a frozen pretrained backbone by training only a small set of added parameters, are a prominent example. The approach offers practical advantages: better performance on resource-constrained devices, improved model interpretability, and the ability to handle diverse data types and tasks, with applications ranging from natural language processing and image segmentation to time series analysis.
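As a minimal sketch of what such a PEFT module can look like, the PyTorch snippet below implements a LoRA-style low-rank adapter wrapped around a frozen linear layer. The class and parameter names are illustrative assumptions, not taken from any specific paper or library; the point is that only the small matrices A and B are trained, so the adapter behaves as an independent, swappable module on top of a shared backbone.

```python
import torch
import torch.nn as nn

class LoRAAdapter(nn.Module):
    """LoRA-style adapter attached to a frozen linear layer (illustrative sketch).

    Only the low-rank matrices A and B are trainable, so the adapter is a
    small, reusable module that can be swapped per task without touching
    the shared backbone weights.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the shared backbone layer
        # Low-rank factors: B starts at zero so training begins at the
        # base model's behavior and drifts away only as B is updated.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the trainable low-rank update.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Usage: wrap one layer of a pretrained model; swapping adapters switches
# tasks while the backbone stays shared and frozen.
layer = nn.Linear(768, 768)
adapted = LoRAAdapter(layer, rank=8)
out = adapted(torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 768])
```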