Complex Deep Learning Models
Complex deep learning models are revolutionizing various fields, but their computational demands pose challenges for deployment on resource-constrained devices. Current research focuses on making these models more efficient: exploring novel architectures (e.g., transformer-based networks, sparse networks), developing compression and efficient training techniques (e.g., knowledge distillation, quantization; a sketch of distillation appears below), and improving explainability through techniques like explanation ensembling. These efforts are crucial for enabling widespread adoption of AI in embedded systems and high-stakes applications while addressing concerns about energy consumption and model interpretability.
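To make the distillation idea concrete, here is a minimal PyTorch sketch of a single knowledge-distillation training step. The models, data, and hyperparameters are hypothetical stand-ins, not drawn from any specific paper above: a compact "student" network is trained to match the temperature-softened output distribution of a larger, frozen "teacher", blended with the usual hard-label loss.

```python
# Minimal knowledge-distillation sketch (assumes PyTorch; models/data are
# hypothetical placeholders for illustration only).
import torch
import torch.nn as nn
import torch.nn.functional as F

temperature = 4.0  # softens logits so the student sees inter-class structure
alpha = 0.5        # balance between distillation loss and hard-label loss

# Stand-ins for a large pretrained teacher and a compact student.
teacher = nn.Sequential(nn.Linear(784, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, labels):
    with torch.no_grad():              # the teacher is frozen
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened distributions, scaled by T^2 (as in
    # Hinton et al., 2015) so gradient magnitudes stay comparable across T.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    hard_loss = F.cross_entropy(student_logits, labels)
    loss = alpha * soft_loss + (1 - alpha) * hard_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage on random data.
x = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))
print(distillation_step(x, labels))
```

The temperature controls how much of the teacher's inter-class similarity structure ("dark knowledge") the student sees; quantization, by contrast, shrinks an already-trained model by lowering numeric precision rather than changing how it is trained.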