Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated over time. Current research focuses on improving RNN efficiency and stability, exploring variations like LSTMs and GRUs, and investigating their application in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency and exploring the use of RNNs in conjunction with other architectures, such as transformers and convolutional neural networks. The resulting advancements have significant implications for various applications requiring sequential data processing, offering improved accuracy, efficiency, and interpretability.
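The core mechanism described above, an internal state updated at each time step, can be sketched as a minimal vanilla (Elman-style) RNN cell. All dimensions, parameter names, and the tanh nonlinearity here are illustrative assumptions, not drawn from any specific paper below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration
input_dim, hidden_dim = 3, 4

# Parameters of a vanilla RNN cell
W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden_dim)                                    # bias

def rnn_step(h, x):
    """One recurrence step: the hidden state h carries information across time."""
    return np.tanh(W_x @ x + W_h @ h + b)

# Process a sequence of 5 input vectors, updating the internal state at each step
h = np.zeros(hidden_dim)
sequence = rng.normal(size=(5, input_dim))
for x in sequence:
    h = rnn_step(h, x)

print(h.shape)  # final hidden state summarizes the whole sequence
```

Architectures such as LSTMs and GRUs replace the single tanh update with gated updates to stabilize training over long sequences, but the state-carrying loop is the same.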
Papers
Recurrent Neural Networks for Dynamical Systems: Applications to Ordinary Differential Equations, Collective Motion, and Hydrological Modeling
Yonggi Park, Kelum Gajamannage, Dilhani I. Jayathilake, Erik M. Bollt
Saving RNN Computations with a Neuron-Level Fuzzy Memoization Scheme
Franyell Silfa, Jose-Maria Arnau, Antonio González
Stability Analysis of Recurrent Neural Networks by IQC with Copositive Multipliers
Yoshio Ebihara, Hayato Waki, Victor Magron, Ngoc Hoang Anh Mai, Dimitri Peaucelle, Sophie Tarbouriech
On the Implicit Bias of Gradient Descent for Temporal Extrapolation
Edo Cohen-Karlik, Avichai Ben David, Nadav Cohen, Amir Globerson
An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy Image Compression Systems
Ankur Mali, Alexander Ororbia, Daniel Kifer, Lee Giles
From Motion to Muscle
Marie D. Schmidt, Tobias Glasmachers, Ioannis Iossifidis
Few-shot Transfer Learning for Holographic Image Reconstruction using a Recurrent Neural Network
Luzhe Huang, Xilin Yang, Tairan Liu, Aydogan Ozcan