Low Complexity
Low-complexity research focuses on developing efficient algorithms and models that achieve high performance while minimizing computational cost, memory usage, and energy consumption. Current efforts concentrate on designing lightweight neural networks (e.g., MobileNets, EfficientNets, and spiking neural networks), employing compression techniques such as pruning, quantization, and knowledge distillation, and optimizing existing architectures (e.g., transformers, random forests) to reduce their complexity. This pursuit is crucial for deploying advanced machine learning models on resource-constrained hardware (e.g., edge and IoT devices) and for improving the scalability and efficiency of large-scale machine learning workloads.
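Two of the compression techniques named above, pruning and quantization, can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy example (not any specific paper's method): unstructured magnitude pruning zeroes the smallest-magnitude weights, and symmetric per-tensor 8-bit quantization maps the remaining weights to `int8` with a single scale factor.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude weights (unstructured pruning)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

def quantize_int8(weights):
    """Uniform symmetric 8-bit quantization with a per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)       # half the weights become zero
q, scale = quantize_int8(pruned)                # stored as int8 plus one float scale
dequant = q.astype(np.float32) * scale          # approximate reconstruction

print("zeros after pruning:", int((pruned == 0).sum()))
print("max reconstruction error:", float(np.abs(pruned - dequant).max()))
```

In practice these steps are applied inside a training or fine-tuning loop (e.g., quantization-aware training), since naively compressing a trained network usually costs some accuracy; the sketch only shows the storage-side arithmetic.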