Recurrent Neural Network
Recurrent Neural Networks (RNNs) are a class of neural networks designed to process sequential data by maintaining an internal state that is updated at each time step. Current research focuses on improving the efficiency and stability of RNN training and inference, refining gated variants such as LSTMs and GRUs, and applying them in diverse fields such as time series forecasting, natural language processing, and dynamical systems modeling. This work includes developing novel architectures, such as selective state space models for improved memory efficiency, and combining RNNs with other architectures, such as transformers and convolutional neural networks. The resulting advances offer improved accuracy, efficiency, and interpretability for applications that require sequential data processing. A concrete sketch of the core RNN recurrence follows below.
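To make the "internal state updated over time" concrete, here is a minimal sketch of the vanilla RNN recurrence h_t = tanh(W_x x_t + W_h h_{t-1} + b) in NumPy. The function name rnn_forward and all dimensions are illustrative assumptions, not taken from any of the papers listed here, and the sketch omits training, gating (LSTM/GRU), and the state space variants discussed above.

```python
# Minimal sketch of the vanilla RNN recurrence (illustrative, not any paper's method):
#   h_t = tanh(W_x x_t + W_h h_{t-1} + b)
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5  # toy sizes, assumed for illustration

# Randomly initialized parameters (no training performed here).
W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input-to-hidden weights
W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden weights
b = np.zeros(hidden_dim)                                    # bias

def rnn_forward(xs, h0):
    """Run the recurrence over a sequence, returning the hidden state at every step."""
    h, states = h0, []
    for x_t in xs:
        # The internal state is updated from the previous state and the current input.
        h = np.tanh(W_x @ x_t + W_h @ h + b)
        states.append(h)
    return np.stack(states)

xs = rng.normal(size=(seq_len, input_dim))          # a toy input sequence
states = rnn_forward(xs, h0=np.zeros(hidden_dim))
print(states.shape)                                 # (5, 8): one hidden state per time step
```

Because each h_t depends on h_{t-1}, this loop is inherently sequential, which is precisely the bottleneck that work on parallelizing nonlinear RNNs and on state space models aims to relax.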
Papers
Towards Scalable and Stable Parallelization of Nonlinear RNNs
Xavier Gonzalez, Andrew Warrington, Jimmy T. H. Smith, Scott W. Linderman
DTFormer: A Transformer-Based Method for Discrete-Time Dynamic Graph Representation Learning
Xi Chen, Yun Xiong, Siwei Zhang, Jiawei Zhang, Yao Zhang, Shiyang Zhou, Xixi Wu, Mingyang Zhang, Tengfei Liu, Weiqiang Wang
Impact of Recurrent Neural Networks and Deep Learning Frameworks on Real-time Lightweight Time Series Anomaly Detection
Ming-Chang Lee, Jia-Chun Lin, Sokratis Katsikas
Implementing engrams from a machine learning perspective: the relevance of a latent space
J Marco de Lucas
Advanced AI Framework for Enhanced Detection and Assessment of Abdominal Trauma: Integrating 3D Segmentation with 2D CNN and RNN Models
Liheng Jiang, Xuechun Yang, Chang Yu, Zhizhong Wu, Yuting Wang