Long Short-Term Memory
Long Short-Term Memory (LSTM) networks are a type of recurrent neural network designed to process sequential data by learning long-term dependencies, enabling accurate predictions and classifications in various applications. Current research focuses on enhancing LSTM architectures, such as incorporating convolutional layers, attention mechanisms, and hybrid models combining LSTMs with other deep learning techniques like transformers or graph neural networks, to improve efficiency and accuracy. This work is significant because LSTMs are proving highly effective across diverse fields, from financial forecasting and environmental monitoring to medical image analysis and activity recognition, offering powerful tools for analyzing complex temporal data.
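The long-term dependency learning described above comes from the LSTM's gated cell state. A minimal single-step sketch in NumPy (an illustrative toy, not any listed paper's implementation; the weight shapes and the input/forget/cell/output gate ordering follow one common convention) looks like this:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias,
    with gates stacked in the order input, forget, cell candidate, output."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b          # pre-activations for all four gates
    i = sigmoid(z[0:H])                 # input gate: how much new info to write
    f = sigmoid(z[H:2 * H])             # forget gate: how much old state to keep
    g = np.tanh(z[2 * H:3 * H])         # candidate cell state
    o = sigmoid(z[3 * H:4 * H])         # output gate
    c = f * c_prev + i * g              # cell state carries long-term memory
    h = o * np.tanh(c)                  # hidden state is the per-step output
    return h, c

# Run a toy sequence through the cell (random weights, hypothetical sizes).
rng = np.random.default_rng(0)
D, H, T = 3, 4, 5                       # input dim, hidden dim, sequence length
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(T):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
print(h.shape)
```

Because the forget gate multiplicatively controls how much of `c_prev` survives each step, gradients can flow through the cell state across many time steps, which is what lets LSTMs capture long-range temporal structure.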
Papers
Easy attention: A simple attention mechanism for temporal predictions with transformers
Marcial Sanchis-Agudo, Yuning Wang, Roger Arnau, Luca Guastoni, Jasmin Lim, Karthik Duraisamy, Ricardo Vinuesa
Fall Detection using Knowledge Distillation Based Long short-term memory for Offline Embedded and Low Power Devices
Hannah Zhou, Allison Chen, Celine Buer, Emily Chen, Kayleen Tang, Lauryn Gong, Zhiqi Liu, Jianbin Tang
Video-based Person Re-identification with Long Short-Term Representation Learning
Xuehu Liu, Pingping Zhang, Huchuan Lu
Stock Market Price Prediction: A Hybrid LSTM and Sequential Self-Attention based Approach
Karan Pardeshi, Sukhpal Singh Gill, Ahmed M. Abdelmoniem
SoilNet: An Attention-based Spatio-temporal Deep Learning Framework for Soil Organic Carbon Prediction with Digital Soil Mapping in Europe
Nafiseh Kakhani, Moien Rangzan, Ali Jamali, Sara Attarchi, Seyed Kazem Alavipanah, Thomas Scholten
Exploring Different Time-series-Transformer (TST) Architectures: A Case Study in Battery Life Prediction for Electric Vehicles (EVs)
Niranjan Sitapure, Atharva Kulkarni
Global Precipitation Nowcasting of Integrated Multi-satellitE Retrievals for GPM: A U-Net Convolutional LSTM Architecture
Reyhaneh Rahimi, Praveen Ravirathinam, Ardeshir Ebtehaj, Ali Behrangi, Jackson Tan, Vipin Kumar
Predicting human motion intention for pHRI assistive control
Paolo Franceschi, Fabio Bertini, Francesco Braghin, Loris Roveda, Nicola Pedrocchi, Manuel Beschi