Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated at each time step. Current research focuses on improving RNN efficiency and stability, refining established variants such as LSTMs and GRUs, and investigating their application in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency, as well as combining RNNs with other architectures such as transformers and convolutional neural networks. The resulting advancements have significant implications for applications requiring sequential data processing, offering improved accuracy, efficiency, and interpretability.
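The core idea above — an internal state carried forward and updated at each time step — can be sketched as a vanilla RNN cell in NumPy. This is a minimal illustration, not any specific architecture from the papers below; all names, dimensions, and the random weight initialization are illustrative.

```python
import numpy as np

def rnn_forward(xs, h0, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence and return all hidden states.

    At each step: h_t = tanh(W_xh @ x_t + W_hh @ h_{t-1} + b_h)
    """
    h = h0
    states = []
    for x in xs:
        # The new state depends on both the current input and the previous state,
        # which is how the network carries information across time steps.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
        states.append(h)
    return np.stack(states)

# Toy dimensions (illustrative): 3-dim inputs, 4-dim hidden state, 5 time steps.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)
xs = rng.normal(size=(seq_len, input_dim))

states = rnn_forward(xs, np.zeros(hidden_dim), W_xh, W_hh, b_h)
print(states.shape)  # one hidden-state vector per time step
```

Gated variants such as the LSTM and GRU replace the single `tanh` update with learned gates that control how much of the previous state is kept, which mitigates the vanishing-gradient problem this simple cell suffers from on long sequences.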
Papers
Unveiling Intractable Epileptogenic Brain Networks with Deep Learning Algorithms: A Novel and Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients
Bliss Singhal, Fnu Pooja
An LSTM-Based Predictive Monitoring Method for Data with Time-varying Variability
Jiaqi Qiu, Yu Lin, Inez Zwetsloot
A Distance Correlation-Based Approach to Characterize the Effectiveness of Recurrent Neural Networks for Time Series Forecasting
Christopher Salazar, Ashis G. Banerjee
Universal Recurrent Event Memories for Streaming Data
Ran Dou, Jose Principe
Dynamic Analysis and an Eigen Initializer for Recurrent Neural Networks
Ran Dou, Jose Principe