Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated over time. Current research focuses on improving RNN efficiency and stability, exploring variations like LSTMs and GRUs, and investigating their application in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This includes developing novel architectures like selective state space models for improved memory efficiency and exploring the use of RNNs in conjunction with other architectures, such as transformers and convolutional neural networks. The resulting advancements have significant implications for various applications requiring sequential data processing, offering improved accuracy, efficiency, and interpretability.
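The defining mechanism described above, an internal state updated at each time step, can be sketched as a vanilla RNN cell. This is a minimal illustration only; the weight names, dimensions, and update rule `h_t = tanh(x_t W_x + h_{t-1} W_h + b)` are the textbook formulation, not drawn from any of the listed papers:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    # One recurrence step: combine the current input with the previous
    # hidden state, then squash through tanh to produce the new state.
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

def run_rnn(xs, h0, W_x, W_h, b):
    # Process a whole sequence, carrying the hidden state forward so that
    # each output depends on the entire history of inputs so far.
    h = h0
    states = []
    for x_t in xs:
        h = rnn_step(x_t, h, W_x, W_h, b)
        states.append(h)
    return np.stack(states)

# Toy dimensions chosen for illustration.
rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5
W_x = 0.1 * rng.standard_normal((input_dim, hidden_dim))
W_h = 0.1 * rng.standard_normal((hidden_dim, hidden_dim))
b = np.zeros(hidden_dim)
xs = rng.standard_normal((seq_len, input_dim))

states = run_rnn(xs, np.zeros(hidden_dim), W_x, W_h, b)
print(states.shape)  # one hidden-state vector per time step
```

Gated variants such as LSTMs and GRUs replace the single tanh update with learned gates that control how much of the previous state is kept, which is what makes them more stable over long sequences.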
Papers
Do We Really Need Complicated Model Architectures For Temporal Networks?
Weilin Cong, Si Zhang, Jian Kang, Baichuan Yuan, Hao Wu, Xin Zhou, Hanghang Tong, Mehrdad Mahdavi
Learning from Predictions: Fusing Training and Autoregressive Inference for Long-Term Spatiotemporal Forecasts
Pantelis R. Vlachas, Petros Koumoutsakos
Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting
Zimeng Lyu, Alexander Ororbia, Travis Desell
Dynamic Graph Neural Network with Adaptive Edge Attributes for Air Quality Predictions
Jing Xu, Shuo Wang, Na Ying, Xiao Xiao, Jiang Zhang, Yun Cheng, Zhiling Jin, Gangfeng Zhang