Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated over time. Current research focuses on improving RNN efficiency and stability, exploring variations like LSTMs and GRUs, and investigating their application in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency and exploring the use of RNNs in conjunction with other architectures, such as transformers and convolutional neural networks. The resulting advancements have significant implications for various applications requiring sequential data processing, offering improved accuracy, efficiency, and interpretability.
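The core mechanism described above, an internal state carried forward and updated at each time step, can be illustrated with a minimal sketch. This is a generic vanilla RNN cell in NumPy, not the method of any paper listed below; the function name `rnn_step` and all parameter shapes are illustrative choices.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla RNN update: the new hidden state depends on
    the current input x_t and the previous hidden state h_prev."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5

# Parameters are shared across all time steps.
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# Process a toy sequence, carrying the state forward step by step.
h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(seq_len, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```

After the loop, `h` summarizes the whole sequence in a fixed-size vector; variants such as LSTMs and GRUs replace the single tanh update with gated updates to stabilize training over long sequences.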
Papers
On Recurrent Neural Networks for learning-based control: recent results and ideas for future developments
Fabio Bonassi, Marcello Farina, Jing Xie, Riccardo Scattolini
Towards Explainable End-to-End Prostate Cancer Relapse Prediction from H&E Images Combining Self-Attention Multiple Instance Learning with a Recurrent Neural Network
Esther Dietrich, Patrick Fuhlert, Anne Ernst, Guido Sauter, Maximilian Lennartz, H. Siegfried Stiehl, Marina Zimmermann, Stefan Bonn
DSNet: Dynamic Skin Deformation Prediction by Recurrent Neural Network
Hyewon Seo, Kaifeng Zou, Frederic Cordier
Observation Error Covariance Specification in Dynamical Systems for Data Assimilation Using Recurrent Neural Networks
Sibo Cheng, Mingming Qiu
Longitudinal patient stratification of electronic health records with flexible adjustment for clinical outcomes
Oliver Carr, Avelino Javer, Patrick Rockenschaub, Owen Parsons, Robert Dürichen