Global Context
Global context modeling in machine learning aims to improve model performance by incorporating broader contextual information beyond immediate local features. Current research focuses on integrating techniques such as transformers and state-space models (SSMs) like Mamba into existing architectures (e.g., YOLOv8, diffusion models) to better capture global relationships in images, videos, and text. This capability is crucial for improving accuracy and robustness in tasks ranging from image inpainting and object detection to natural language processing and air quality prediction, ultimately yielding more effective and reliable AI systems across diverse applications.
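The core idea behind transformer-style global context can be sketched as plain self-attention: each position's output is a softmax-weighted mixture over every position in the sequence, so information flows globally rather than through a fixed local window. The following pure-Python toy (an illustrative sketch, not taken from any of the papers listed below) shows that mechanism on a list of feature vectors:

```python
import math

def softmax(row):
    # Numerically stable softmax over one row of scores.
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(x):
    # x: list of n feature vectors ("tokens"), each of dimension d.
    # Each output token is a weighted mix over ALL tokens -- this
    # all-to-all mixing is what "global context" refers to.
    n, d = len(x), len(x[0])
    out = []
    for i in range(n):
        # Similarity of token i to every token, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(x[i], x[j])) / math.sqrt(d)
                  for j in range(n)]
        w = softmax(scores)
        # Weighted mixture over the entire sequence, one output per dim.
        out.append([sum(w[j] * x[j][k] for j in range(n)) for k in range(d)])
    return out
```

Because the attention weights in each row sum to one, a sequence of identical tokens is returned unchanged; a local convolution, by contrast, could only mix each token with its immediate neighbors.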
Papers
December 18, 2023
December 12, 2023
October 26, 2023
October 18, 2023
October 5, 2023
September 24, 2023
September 22, 2023
June 24, 2023
June 22, 2023
June 19, 2023
June 5, 2023
June 3, 2023
May 26, 2023
May 16, 2023
May 4, 2023
April 18, 2023
March 20, 2023
February 11, 2023
February 3, 2023