Deep Architecture
Deep architectures are neural networks built from many stacked layers, and research on them aims to improve the accuracy and efficiency of machine learning models across diverse applications. Current work focuses on optimizing established architectures such as convolutional neural networks (CNNs) and transformers, using techniques such as model compression, early exiting, and novel training strategies to improve performance and address the limitations of resource-constrained environments. These advances improve the efficiency and applicability of deep learning in areas such as computer vision, natural language processing, and system identification, affecting both scientific understanding and the practical deployment of AI systems.
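Of the techniques mentioned above, early exiting is simple to illustrate: an auxiliary classifier attached to an intermediate layer returns a prediction whenever it is already confident, so easy inputs skip the remaining layers. The following is a minimal sketch in PyTorch, not drawn from any of the listed papers; the module names, layer sizes, and the 0.9 confidence threshold are all illustrative assumptions.

```python
# Minimal early-exit sketch (illustrative only; names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitNet(nn.Module):
    """Small CNN with an auxiliary classifier after an intermediate block."""

    def __init__(self, num_classes: int = 10, exit_threshold: float = 0.9):
        super().__init__()
        self.exit_threshold = exit_threshold
        self.block1 = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        # Early-exit branch attached after the first block (for 32x32 inputs).
        self.early_head = nn.Linear(16 * 16 * 16, num_classes)
        self.block2 = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)
        )
        # Full-depth classifier used when the early branch is not confident.
        self.final_head = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.block1(x)
        early_logits = self.early_head(h.flatten(1))
        if not self.training:
            # Exit early at inference time if the intermediate prediction
            # is confident (here: whole batch exits together, for brevity).
            confidence = F.softmax(early_logits, dim=1).max(dim=1).values
            if bool((confidence > self.exit_threshold).all()):
                return early_logits
        h = self.block2(h)
        return self.final_head(h.flatten(1))


if __name__ == "__main__":
    model = EarlyExitNet().eval()
    x = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
    with torch.no_grad():
        logits = model(x)
    print(logits.shape)  # torch.Size([1, 10])
```

In practice the exit decision is usually made per sample and the auxiliary head is trained jointly with the final classifier; the batch-level exit above only keeps the sketch short.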
Papers
Computer Vision Model Compression Techniques for Embedded Systems: A Survey
Alexandre Lopes, Fernando Pereira dos Santos, Diulhio de Oliveira, Mauricio Schiezaro, Helio Pedrini
Inversion-DeepONet: A Novel DeepONet-Based Network with Encoder-Decoder for Full Waveform Inversion
Zekai Guo, Lihui Chai, Shengjun Huang, Ye Li