Residual Recurrent Network

Residual recurrent networks (RRNs) combine a recurrent architecture's ability to process sequential data with residual (skip) connections that alleviate vanishing and exploding gradients and improve training efficiency. Current research focuses on optimizing RRN architectures, for example by varying how and where residual connections are placed to improve memory retention and expressiveness, and on applying them to diverse tasks such as speech enhancement, image super-resolution, and natural language processing. These results demonstrate RRNs' effectiveness across domains, improving performance on tasks that require both sequential processing and deep network architectures.
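
The sketch below illustrates one common way such a layer can be built, assuming a GRU cell as the recurrent update and an additive residual connection over the hidden state (h_t = h_{t-1} + f(x_t, h_{t-1})); the class and parameter names (ResidualGRU, hidden_size, etc.) are illustrative and not taken from any specific paper.

```python
# Minimal sketch of a residual recurrent layer (assumption: GRU-style update
# with an additive skip connection over the hidden state).
import torch
import torch.nn as nn


class ResidualGRU(nn.Module):
    """GRU layer whose hidden state is updated residually across time steps."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_size)
        batch, seq_len, _ = x.shape
        h = x.new_zeros(batch, self.hidden_size)
        outputs = []
        for t in range(seq_len):
            # Residual connection: the cell only learns the *change* to the
            # hidden state, which shortens the gradient path through time.
            h = h + self.cell(x[:, t, :], h)
            outputs.append(h)
        return torch.stack(outputs, dim=1)  # (batch, seq_len, hidden_size)


if __name__ == "__main__":
    layer = ResidualGRU(input_size=16, hidden_size=32)
    x = torch.randn(4, 10, 16)   # batch of 4 sequences, 10 steps each
    y = layer(x)
    print(y.shape)               # torch.Size([4, 10, 32])
```

Because the identity path carries the previous hidden state forward unchanged, gradients can flow through many time steps without repeatedly passing through the recurrent nonlinearity, which is the same intuition behind residual connections in deep feed-forward networks.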

Papers