Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining a hidden state that is updated at each time step, so that past inputs can influence future outputs. Current research focuses on improving RNN efficiency and training stability, exploring gated variants such as LSTMs and GRUs, and applying them in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency, and combining RNNs with other architectures such as transformers and convolutional neural networks. These advances matter for any application that processes sequential data, offering gains in accuracy, efficiency, and interpretability.
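To make the recurrence concrete, here is a minimal sketch of a vanilla (Elman-style) RNN step in NumPy, computing h_t = tanh(W_xh x_t + W_hh h_{t-1} + b). The dimensions, weight names, and random data are illustrative assumptions, not taken from any of the papers below; gated variants such as LSTMs and GRUs replace this single update with several learned gates that control what the state keeps and forgets, which helps with vanishing gradients over long sequences.

```python
import numpy as np

# Hypothetical dimensions, chosen only for illustration.
INPUT_DIM, HIDDEN_DIM = 8, 16

rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(HIDDEN_DIM, INPUT_DIM))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(HIDDEN_DIM)

def rnn_step(x_t, h_prev):
    """One recurrence step: h_t = tanh(W_xh @ x_t + W_hh @ h_prev + b)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Process a toy sequence of 5 time steps, carrying the hidden state forward.
h = np.zeros(HIDDEN_DIM)
sequence = rng.normal(size=(5, INPUT_DIM))
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h.shape)  # (16,) -- the final hidden state summarizes the whole sequence
```

Because the same weights are reused at every step, training unrolls this loop through time (backpropagation through time, BPTT), which is the part of the pipeline that efficiency-oriented work such as sparse BPTT targets.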
Papers
Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training
Xi Chen, Chang Gao, Zuowen Wang, Longbiao Cheng, Sheng Zhou, Shih-Chii Liu, Tobi Delbruck
On The Expressivity of Recurrent Neural Cascades
Nadezda Alexandrovna Knorozova, Alessandro Ronca
Learning Long Sequences in Spiking Neural Networks
Matei Ioan Stan, Oliver Rhodes
Image segmentation with traveling waves in an exactly solvable recurrent neural network
Luisa H. B. Liboni, Roberto C. Budzinski, Alexandra N. Busch, Sindy Löwe, Thomas A. Keller, Max Welling, Lyle E. Muller
FocusLearn: Fully-Interpretable, High-Performance Modular Neural Networks for Time Series
Qiqi Su, Christos Kloukinas, Artur d'Avila Garcez