Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated at each time step. Current research focuses on improving the efficiency and stability of RNNs, on gated variants such as LSTMs and GRUs, and on applications in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency, as well as combining RNNs with other architectures, such as transformers and convolutional neural networks. These advances have significant implications for applications that require sequential data processing, offering gains in accuracy, efficiency, and interpretability.
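To make the recurrent state update concrete, the sketch below implements a vanilla RNN cell in NumPy, computing h_t = tanh(W_x x_t + W_h h_{t-1} + b) over a toy sequence. The names (rnn_step, W_x, W_h, b) and dimensions are illustrative assumptions for this sketch, not drawn from any of the papers listed below.

    import numpy as np

    def rnn_step(x_t, h_prev, W_x, W_h, b):
        """One vanilla RNN update: h_t = tanh(W_x @ x_t + W_h @ h_prev + b)."""
        return np.tanh(W_x @ x_t + W_h @ h_prev + b)

    # Illustrative dimensions (assumed for this sketch).
    input_dim, hidden_dim, seq_len = 8, 16, 5
    rng = np.random.default_rng(0)
    W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
    W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
    b = np.zeros(hidden_dim)

    # Run the recurrence over a toy sequence: the hidden state carries
    # information from earlier inputs forward in time.
    h = np.zeros(hidden_dim)
    for x_t in rng.normal(size=(seq_len, input_dim)):
        h = rnn_step(x_t, h, W_x, W_h, b)
    print(h.shape)  # (16,)

Gated variants such as LSTMs and GRUs replace this single tanh update with gated updates that control how much of the previous state is kept, which mitigates vanishing gradients over long sequences.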
Papers
Hidden Traveling Waves bind Working Memory Variables in Recurrent Neural Networks
Arjun Karuvally, Terrence J. Sejnowski, Hava T. Siegelmann
Recurrent Reinforcement Learning with Memoroids
Steven Morad, Chris Lu, Ryan Kortvelesy, Stephan Liwicki, Jakob Foerster, Amanda Prorok
DFORM: Diffeomorphic vector field alignment for assessing dynamics across learned models
Ruiqi Chen, Giacomo Vedovati, Todd Braver, ShiNung Ching
Stability Analysis of Various Symbolic Rule Extraction Methods from Recurrent Neural Network
Neisarg Dave, Daniel Kifer, C. Lee Giles, Ankur Mali
Enhancing Transformer RNNs with Multiple Temporal Perspectives
Razvan-Gabriel Dumitru, Darius Peteleaza, Mihai Surdeanu
Overcoming Order in Autoregressive Graph Generation
Edo Cohen-Karlik, Eyal Rozenberg, Daniel Freedman