Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated over time. Current research focuses on improving RNN efficiency and stability, exploring variations like LSTMs and GRUs, and investigating their application in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency and exploring the use of RNNs in conjunction with other architectures, such as transformers and convolutional neural networks. The resulting advancements have significant implications for various applications requiring sequential data processing, offering improved accuracy, efficiency, and interpretability.
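The internal-state mechanism described above can be sketched as a single recurrent update, h_t = tanh(x_t·W_xh + h_{t-1}·W_hh + b), applied across a sequence. This is a minimal illustrative example, not drawn from any of the papers listed below; the function and variable names are hypothetical.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One recurrent update: the new hidden state combines the current
    # input with the previous hidden state through a tanh nonlinearity.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8

# Hypothetical small weights; in practice these are learned by training.
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# Process a 5-step sequence, carrying the hidden state forward in time.
h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

print(h.shape)  # (8,)
```

Gated variants such as LSTMs and GRUs keep this same sequential structure but add learned gates that control how much of the previous state is kept or overwritten, which mitigates vanishing gradients over long sequences.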
Papers
RNNs, CNNs and Transformers in Human Action Recognition: A Survey and a Hybrid Model
Khaled Alomar, Halil Ibrahim Aysel, Xiaohao Cai
Gated recurrent neural network with TPE Bayesian optimization for enhancing stock index prediction accuracy
Bivas Dinda
On Non-asymptotic Theory of Recurrent Neural Networks in Temporal Point Processes
Zhiheng Chen, Guanhua Fang, Wen Yu
Pattern-Based Time-Series Risk Scoring for Anomaly Detection and Alert Filtering -- A Predictive Maintenance Case Study
Elad Liebman
Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks
Jerome Sieber, Carmen Amo Alonso, Alexandre Didier, Melanie N. Zeilinger, Antonio Orvieto