Traditional Deep Learning
Traditional deep learning focuses on developing and improving artificial neural networks for a wide range of tasks, with the goals of greater accuracy, efficiency, and robustness. Current research addresses limitations such as vulnerability to adversarial attacks, high computational cost, and dependence on large labeled datasets, exploring solutions such as equivariant convolutional networks, physics-informed neural networks, and model compression techniques (e.g., quantization and pruning). These advances are crucial for deploying deep learning models in resource-constrained environments and for improving their reliability and trustworthiness across diverse applications, from image recognition to natural language processing.
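To make the compression techniques mentioned above concrete, here is a minimal sketch combining magnitude-based pruning and post-training dynamic quantization, assuming PyTorch; the small MLP is a hypothetical stand-in for a real image-recognition or NLP model, not a method from any specific paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical small model to compress.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Magnitude-based (L1) unstructured pruning: zero out the 30% of weights
# with the smallest absolute value in each Linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Post-training dynamic quantization: store Linear weights as int8 and
# quantize activations on the fly, shrinking the model and speeding up
# CPU inference.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Both models accept the same inputs; the compressed one trades a small
# amount of accuracy for a smaller, faster deployment artifact.
x = torch.randn(1, 784)
print(model(x).shape, quantized_model(x).shape)
```

In practice, pruning is usually followed by a short fine-tuning pass to recover accuracy, and the choice between dynamic, static, and quantization-aware training depends on the deployment target.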