Global Context
Global context modeling in machine learning aims to improve model performance by incorporating contextual information from an entire input rather than only its immediate local features. Current research focuses on integrating mechanisms such as transformers and state-space models (SSMs) like Mamba into existing architectures (e.g., YOLOv8, diffusion models) to better capture long-range, global relationships in images, videos, and text. This capability is important for accuracy and robustness in tasks ranging from image inpainting and object detection to natural language processing and air quality prediction.
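As a concrete illustration, the sketch below shows one common way global context is injected into a convolutional backbone: a self-attention layer applied over all spatial positions of a feature map, so every location can attend to every other. This is a minimal, hypothetical example; the block name, hyperparameters, and placement are assumptions for illustration, not code from any of the papers surveyed here.

```python
import torch
import torch.nn as nn

class GlobalContextBlock(nn.Module):
    """Illustrative drop-in block: self-attention over the flattened
    spatial positions of a CNN feature map, followed by a residual add,
    so each position aggregates information from the whole image."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map from a backbone.
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (batch, h*w, channels)
        normed = self.norm(tokens)
        attended, _ = self.attn(normed, normed, normed)  # global self-attention
        # Reshape back to a feature map and add the residual connection.
        return x + attended.transpose(1, 2).reshape(b, c, h, w)

if __name__ == "__main__":
    block = GlobalContextBlock(channels=64)
    feats = torch.randn(2, 64, 16, 16)   # e.g. a mid-level backbone feature
    print(block(feats).shape)            # torch.Size([2, 64, 16, 16])
```

Because the block preserves the input shape, it can in principle be inserted between existing convolutional stages of a detector or diffusion U-Net; SSM-based variants such as Mamba replace the attention step with a linear-time state-space scan to reduce the quadratic cost over sequence length.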