Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated at each time step. Current research focuses on improving RNN efficiency and stability, on gated variants such as LSTMs and GRUs, and on applications in diverse fields including time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures such as selective state space models for improved memory efficiency, as well as combining RNNs with other architectures, such as transformers and convolutional neural networks. These advances matter for any application that processes sequential data, offering improved accuracy, efficiency, and interpretability.
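The core idea above — an internal state carried forward and updated at each time step — can be sketched as the vanilla (Elman) recurrence h_t = tanh(x_t·W_xh + h_{t-1}·W_hh + b). The following is a minimal NumPy illustration, not any specific paper's model; all names, dimensions, and the random initialization are illustrative assumptions.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla RNN update: the new hidden state mixes the current
    input with the previous state, so information persists across steps."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8  # illustrative sizes

# Small random weights (untrained; in practice these are learned).
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# Process a length-5 sequence, reusing the SAME weights at every step --
# this weight sharing over time is what makes the network recurrent.
h = np.zeros(hidden_dim)
for x_t in rng.normal(size=(5, input_dim)):
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

print(h.shape)  # the final hidden state summarizes the whole sequence
```

Gated variants such as LSTMs and GRUs elaborate this same loop with learned gates that control what the state keeps and forgets, which stabilizes training over long sequences.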
Papers
Human Emotion Classification based on EEG Signals Using Recurrent Neural Network And KNN
Shashank Joshi, Falak Joshi
Designing a Recurrent Neural Network to Learn a Motion Planner for High-Dimensional Inputs
Johnathan Chiu
An Edge-Cloud Integrated Framework for Flexible and Dynamic Stream Analytics
Xin Wang, Azim Khan, Jianwu Wang, Aryya Gangopadhyay, Carl E. Busart, Jade Freeman
DeepBayes -- an estimator for parameter estimation in stochastic nonlinear dynamical models
Anubhab Ghosh, Mohamed Abdalmoaty, Saikat Chatterjee, Håkan Hjalmarsson
Virtual Analog Modeling of Distortion Circuits Using Neural Ordinary Differential Equations
Jan Wilczek, Alec Wright, Vesa Välimäki, Emanuël Habets