Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated at each time step. Current research focuses on improving RNN efficiency and stability, exploring gated variants such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, and investigating their application in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency and combining RNNs with other architectures, such as transformers and convolutional neural networks. These advances have significant implications for applications requiring sequential data processing, offering improved accuracy, efficiency, and interpretability.
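The state update described above can be sketched as a vanilla RNN cell: at each step the new hidden state is a nonlinear function of the current input and the previous hidden state. This is a minimal illustrative sketch (the function name, weight shapes, and toy dimensions below are assumptions, not from any specific paper):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla RNN: the hidden state carries
    information from earlier time steps forward in the sequence."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Toy setup: sequences with 3 input features, a hidden state of size 4.
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(3, 4))  # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(4, 4))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(4)

# Process a sequence of 5 time steps, reusing the same weights each step.
h = np.zeros(4)
for x_t in rng.normal(size=(5, 3)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

print(h.shape)  # (4,)
```

Gated variants such as LSTMs and GRUs replace this single `tanh` update with learned gates that control what the state retains and forgets, which mitigates the vanishing-gradient problems of the plain recurrence shown here.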
Papers
Using deep neural networks to detect non-analytically defined expert event labels in canoe sprint force sensor signals
Sarah Rockstrok, Patrick Frenzel, Daniel Matthes, Kay Schubert, David Wollburg, Mirco Fuchs
Approximating G(t)/GI/1 queues with deep learning
Eliran Sherzer, Opher Baron, Dmitry Krass, Yehezkel Resheff
Complexity-Aware Deep Symbolic Regression with Robust Risk-Seeking Policy Gradients
Zachary Bastiani, Robert M. Kirby, Jacob Hochhalter, Shandian Zhe
Sample Rate Independent Recurrent Neural Networks for Audio Effects Processing
Alistair Carson, Alec Wright, Jatin Chowdhury, Vesa Välimäki, Stefan Bilbao
Geometric sparsification in recurrent neural networks
Wyatt Mackey, Ioannis Schizas, Jared Deighton, David L. Boothe, Vasileios Maroulas