Complex Deep Learning Models
Complex deep learning models are driving advances across many fields, but their computational demands make deployment on resource-constrained devices difficult. Current research focuses on improving efficiency: exploring novel architectures (e.g., transformer-based and sparse networks), developing efficient training and compression methods (e.g., knowledge distillation, quantization), and improving explainability through techniques such as explanation ensembling. These efforts are crucial for the widespread adoption of AI in embedded systems and high-stakes applications while addressing concerns about energy consumption and model interpretability.
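As a minimal sketch of one of the efficiency techniques mentioned above, knowledge distillation trains a small student network to match the softened output distribution of a larger teacher. The snippet below assumes a PyTorch setting; the function name, temperature, and loss weighting are illustrative choices, not a reference implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=4.0, alpha=0.5):
    """Blend a soft-target loss (match the teacher) with the usual
    hard-target cross-entropy loss. Hyperparameters are illustrative."""
    # Soften both output distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the softened distributions, scaled by T^2
    # so gradient magnitudes stay comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Standard supervised loss against the ground-truth labels.
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1.0 - alpha) * ce
```

In a typical training loop, the teacher's logits are computed under `torch.no_grad()` and only the student's parameters are updated, so the student inherits the teacher's behavior at a fraction of the inference cost.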