Long Term
Long-term prediction and reasoning are crucial challenges across diverse scientific domains; the goal is to accurately forecast future states or behaviors from past observations and to understand complex temporal dynamics. Current research focuses on developing robust models, including transformers, diffusion models, and recurrent neural networks, often incorporating memory mechanisms and leveraging multi-modal data (e.g., text, images, sensor readings) to improve prediction accuracy and handle uncertainty. These advances have significant implications for fields ranging from robotics and autonomous systems (e.g., navigation, manipulation) to climate modeling and traffic flow prediction, enabling more reliable and efficient systems and better decision-making.
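As a loose illustration of the direct multi-step forecasting setup that transformer-based long-horizon models commonly use, the sketch below encodes a history window and projects it to a long forecast horizon in one shot. The model, its dimensions, and the `LongHorizonForecaster` name are illustrative assumptions, not taken from any paper listed here.

```python
# Minimal sketch (assumed architecture, not from the listed papers): a transformer
# encoder that maps a multivariate history window to a long forecast horizon directly.
import torch
import torch.nn as nn


class LongHorizonForecaster(nn.Module):
    """Direct multi-step forecaster: encode the history, project to the full horizon."""

    def __init__(self, n_features: int, d_model: int = 64, horizon: int = 96):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)             # per-step feature embedding
        self.pos = nn.Parameter(torch.randn(1, 512, d_model))   # learned positional encoding
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, horizon * n_features)    # predict the whole horizon at once
        self.horizon, self.n_features = horizon, n_features

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        # history: (batch, lookback, n_features)
        h = self.embed(history) + self.pos[:, : history.size(1)]
        h = self.encoder(h)
        # Pool over time, then project to (batch, horizon, n_features).
        out = self.head(h.mean(dim=1))
        return out.view(-1, self.horizon, self.n_features)


if __name__ == "__main__":
    model = LongHorizonForecaster(n_features=7, horizon=96)
    dummy_history = torch.randn(8, 192, 7)  # 8 series, 192 past steps, 7 variables
    print(model(dummy_history).shape)       # torch.Size([8, 96, 7])
```

Predicting the entire horizon in a single forward pass avoids the error accumulation of step-by-step autoregressive rollout, which is one reason this design is popular for long-term forecasting benchmarks.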
Papers
Commonsense-augmented Memory Construction and Management in Long-term Conversations via Context-aware Persona Refinement
Hana Kim, Kai Tzu-iunn Ong, Seoyeon Kim, Dongha Lee, Jinyoung Yeo
Dynamic Long-Term Time-Series Forecasting via Meta Transformer Networks
Muhammad Anwar Ma'sum, MD Rasel Sarkar, Mahardhika Pratama, Savitha Ramasamy, Sreenatha Anavatti, Lin Liu, Habibullah, Ryszard Kowalczyk
ST(OR)2: Spatio-Temporal Object Level Reasoning for Activity Recognition in the Operating Room
Idris Hamoud, Muhammad Abdullah Jamal, Vinkle Srivastav, Didier Mutter, Nicolas Padoy, Omid Mohareri
LHManip: A Dataset for Long-Horizon Language-Grounded Manipulation Tasks in Cluttered Tabletop Environments
Federico Ceola, Lorenzo Natale, Niko Sünderhauf, Krishan Rana