Deeper Network
Deeper neural networks aim to improve model performance by adding layers, but greater depth often introduces challenges such as parameter redundancy, vanishing gradients, and over-smoothing. Current research mitigates these issues through architectural innovations such as residual connections (ResNet), adaptive weight adjustments (AdaResNet), and novel training methods like Replacement Learning; it also examines how network depth and width affect generalization error and optimization dynamics in settings such as graph convolutional networks and continual learning. These advances improve the efficiency and effectiveness of deep learning models across diverse applications, including image recognition, natural language processing, and scientific computing.
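Since the summary centers on residual connections as the standard remedy for vanishing gradients in deep stacks, a minimal sketch may help. The block below is written in PyTorch; the class name `ResidualBlock`, the two-convolution design, and the channel count are illustrative assumptions, not taken from any of the papers listed here. The key point is the identity shortcut `out + x`, which gives gradients a direct path through the addition.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """A minimal residual block: output = ReLU(F(x) + x).

    Illustrative sketch, not from the listed papers. The identity
    shortcut lets gradients bypass F(x) entirely, which is what
    mitigates vanishing gradients as networks get deeper.
    """

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x  # identity shortcut
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + residual  # skip connection: direct gradient path
        return self.relu(out)


# Stacking such blocks deepens the network without blocking gradient flow.
x = torch.randn(1, 64, 32, 32)
y = ResidualBlock(64)(x)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

Because the shortcut is an identity mapping, each block only has to learn a residual correction F(x) rather than a full transformation, which is why very deep stacks of such blocks remain trainable.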
Papers
Deep Learning Brasil at ABSAPT 2022: Portuguese Transformer Ensemble Approaches
Juliana Resplande Santanna Gomes, Eduardo Augusto Santos Garcia, Adalberto Ferreira Barbosa Junior, Ruan Chaves Rodrigues, Diogo Fernandes Costa Silva, Dyonnatan Ferreira Maia, Nádia Félix Felipe da Silva, Arlindo Rodrigues Galvão Filho, Anderson da Silva Soares
DeepLearningBrasil@LT-EDI-2023: Exploring Deep Learning Techniques for Detecting Depression in Social Media Text
Eduardo Garcia, Juliana Gomes, Adalberto Barbosa Júnior, Cardeque Borges, Nádia da Silva