Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated at each time step. Current research focuses on improving RNN efficiency and training stability, refining established variants such as LSTMs and GRUs, and applying them in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures such as selective state space models for improved memory efficiency, as well as combining RNNs with other architectures such as transformers and convolutional neural networks. These advances have significant implications for applications requiring sequential data processing, offering improved accuracy, efficiency, and interpretability.
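The core mechanism described above, an internal state carried forward and updated at each step, can be sketched as a minimal vanilla RNN cell. This is an illustrative example in NumPy, not code from any of the papers listed below; the weight shapes and the tanh update rule h_t = tanh(W_x x_t + W_h h_{t-1} + b) are the textbook formulation, with randomly initialized weights chosen here for demonstration.

```python
import numpy as np

# Illustrative vanilla RNN cell: hidden state update
#   h_t = tanh(W_x x_t + W_h h_{t-1} + b)
rng = np.random.default_rng(0)

input_size, hidden_size = 3, 4
W_x = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden (recurrent) weights
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrent update: mix the current input with the previous state."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Process a short sequence, carrying the state forward step by step.
sequence = rng.normal(size=(5, input_size))
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)

# The final state h is a fixed-size summary of the entire sequence.
print(h.shape)
```

LSTMs and GRUs replace this single tanh update with gated updates that control how much of the previous state is kept, which is what gives them their improved stability over long sequences.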
Papers
Automated Algorithm Selection: from Feature-Based to Feature-Free Approaches
Mohamad Alissa, Kevin Sim, Emma Hart
Brain inspired neuronal silencing mechanism to enable reliable sequence identification
Shiri Hodassman, Yuval Meir, Karin Kisos, Itamar Ben-Noam, Yael Tugendhaft, Amir Goldental, Roni Vardi, Ido Kanter
Interpretable Latent Variables in Deep State Space Models
Haoxuan Wu, David S. Matteson, Martin T. Wells
A Deep Neural Framework for Image Caption Generation Using GRU-Based Attention Mechanism
Rashid Khan, M Shujah Islam, Khadija Kanwal, Mansoor Iqbal, Md. Imran Hossain, Zhongfu Ye
Deep Q-network using reservoir computing with multi-layered readout
Toshitaka Matsuki
Unfolding AIS transmission behavior for vessel movement modeling on noisy data leveraging machine learning
Gabriel Spadon, Martha D. Ferreira, Amilcar Soares, Stan Matwin
Can deep neural networks learn process model structure? An assessment framework and analysis
Jari Peeperkorn, Seppe vanden Broucke, Jochen De Weerdt