Multi-Scale
Multi-scale analysis processes and interprets data across different scales of resolution, aiming to capture both fine detail and broader context. Current research emphasizes novel architectures such as transformers and state-space models (e.g., Mamba), often incorporating multi-scale convolutional layers, attention mechanisms, and hierarchical structures to improve feature extraction and representation learning. The approach has proven valuable across diverse fields, improving performance on tasks ranging from medical image segmentation and time series forecasting to object detection and image super-resolution, and yielding more accurate and robust results in many applications.
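The core idea above can be illustrated with a minimal sketch: extract features from the same input at several resolutions and concatenate them, so the representation carries both fine detail (fine scale) and broader context (coarse scale). This toy example uses NumPy average pooling as a stand-in for learned multi-scale convolutional layers; the function names, the scale set (1, 2, 4), and the pooling choice are illustrative assumptions, not drawn from any of the papers listed below.

```python
import numpy as np

def avg_pool(x, k):
    # Non-overlapping k-by-k average pooling of a 2D array;
    # a simple stand-in for a strided/pooling layer in a CNN.
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multiscale_features(x, scales=(1, 2, 4)):
    # Pool the input at each scale and concatenate the flattened levels,
    # combining fine detail (scale 1) with broader context (scale 4).
    return np.concatenate([avg_pool(x, s).ravel() for s in scales])

img = np.arange(64, dtype=float).reshape(8, 8)
feats = multiscale_features(img)
# Levels contribute 8x8=64, 4x4=16, and 2x2=4 values: 84 in total.
print(feats.shape)
```

Real multi-scale networks replace the fixed pooling with learned convolutions and fuse the levels with attention or hierarchical decoders, but the principle of combining representations from several resolutions is the same.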
Papers
An Entropy-Based Model for Hierarchical Learning
Amir R. Asadi
Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning
Colorado J. Reed, Ritwik Gupta, Shufan Li, Sarah Brockman, Christopher Funk, Brian Clipp, Kurt Keutzer, Salvatore Candido, Matt Uyttendaele, Trevor Darrell
Online Federated Learning via Non-Stationary Detection and Adaptation amidst Concept Drift
Bhargav Ganguly, Vaneet Aggarwal
FE-Fusion-VPR: Attention-based Multi-Scale Network Architecture for Visual Place Recognition by Fusing Frames and Events
Kuanxu Hou, Delei Kong, Junjie Jiang, Hao Zhuang, Xinjie Huang, Zheng Fang