Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks that process sequential data by maintaining an internal hidden state updated at each time step. Current research focuses on improving RNN efficiency and training stability, refining gated variants such as LSTMs and GRUs, and applying RNNs to fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures such as selective state space models for improved memory efficiency, as well as combining RNNs with other architectures, such as transformers and convolutional neural networks. These advances offer improved accuracy, efficiency, and interpretability for applications that depend on sequential data processing.
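To make the recurrence described above concrete, here is a minimal NumPy sketch contrasting the vanilla RNN state update with the gated update used by a GRU. All function names, dimensions, and the parameter layout are illustrative assumptions for this sketch, not code from any of the papers listed below.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One step of a vanilla RNN: the new hidden state is a learned
    nonlinear blend of the current input and the previous state."""
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

def gru_step(x_t, h_prev, params):
    """One step of a GRU: an update gate (z) and a reset gate (r)
    control how much of the previous state is kept, which helps
    preserve information over long sequences compared with the
    vanilla update above."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x_t @ params["W_xz"] + h_prev @ params["W_hz"] + params["b_z"])
    r = sigmoid(x_t @ params["W_xr"] + h_prev @ params["W_hr"] + params["b_r"])
    h_tilde = np.tanh(x_t @ params["W_xh"] + (r * h_prev) @ params["W_hh"] + params["b_h"])
    return (1.0 - z) * h_prev + z * h_tilde

# Toy sequence: T=5 steps, batch of 2, input dim 3, hidden dim 4.
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
xs = rng.normal(size=(5, 2, d_in))

# Vanilla RNN parameters (small random weights, zero bias).
W_x = rng.normal(scale=0.1, size=(d_in, d_h))
W_h = rng.normal(scale=0.1, size=(d_h, d_h))
b = np.zeros(d_h)

# GRU parameters: input-to-hidden and hidden-to-hidden weights per gate.
params = {k: rng.normal(scale=0.1, size=(d_in if k.startswith("W_x") else d_h, d_h))
          for k in ["W_xz", "W_hz", "W_xr", "W_hr", "W_xh", "W_hh"]}
params.update({k: np.zeros(d_h) for k in ["b_z", "b_r", "b_h"]})

# Unroll both cells over the sequence; the same weights are reused
# at every time step, which is what makes the network "recurrent".
h_rnn = np.zeros((2, d_h))
h_gru = np.zeros((2, d_h))
for x_t in xs:
    h_rnn = rnn_step(x_t, h_rnn, W_x, W_h, b)
    h_gru = gru_step(x_t, h_gru, params)
print(h_rnn.shape, h_gru.shape)  # (2, 4) (2, 4): one state per sequence
```

The gating in the GRU step is what lets gradients flow across many time steps: when z is near zero the previous state passes through almost unchanged, whereas the vanilla update rewrites the state at every step.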
Papers
Message Propagation Through Time: An Algorithm for Sequence Dependency Retention in Time Series Modeling
Shaoming Xu, Ankush Khandelwal, Arvind Renganathan, Vipin Kumar
Nonlinear MPC design for incrementally ISS systems with application to GRU networks
Fabio Bonassi, Alessio La Bella, Marcello Farina, Riccardo Scattolini
Emergent mechanisms for long timescales depend on training curriculum and affect performance in memory tasks
Sina Khajehabdollahi, Roxana Zeraati, Emmanouil Giannakakis, Tim Jakob Schäfer, Georg Martius, Anna Levina
Recurrent Temporal Revision Graph Networks
Yizhou Chen, Anxiang Zeng, Guangda Huzhang, Qingtao Yu, Kerui Zhang, Cao Yuanpeng, Kangle Wu, Han Yu, Zhiming Zhou