Multi-Scale
Multi-scale analysis focuses on processing and interpreting data across different scales of resolution, aiming to capture both fine details and broader contextual information. Current research emphasizes novel architectures, such as transformers and state-space models (like Mamba), that often incorporate multi-scale convolutional layers, attention mechanisms, and hierarchical structures to improve feature extraction and representation learning. This approach has proven valuable across diverse fields, improving accuracy and robustness in tasks ranging from medical image segmentation and time series forecasting to object detection and image super-resolution.
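To make the core idea concrete, here is a minimal sketch (not taken from any of the papers below) of multi-scale feature extraction on a 1-D signal: the same input is smoothed at several window sizes, so small windows preserve fine detail while large windows capture broader context, and the results are stacked as feature channels. The function name and scale choices are illustrative assumptions.

```python
import numpy as np

def multiscale_features(signal, scales=(1, 4, 16)):
    """Illustrative sketch: extract features at several temporal scales.

    For each scale s, smooth the signal with a box filter of width s
    (scale 1 keeps fine detail; larger scales capture coarser context),
    then stack the smoothed views as feature channels.
    """
    feats = []
    for s in scales:
        kernel = np.ones(s) / s  # box filter of width s
        smoothed = np.convolve(signal, kernel, mode="same")
        feats.append(smoothed)
    # Shape: (num_scales, signal_length) -- one channel per scale.
    return np.stack(feats, axis=0)

# Noisy sine wave as a toy time series.
x = np.sin(np.linspace(0, 8 * np.pi, 256)) + 0.3 * np.random.randn(256)
f = multiscale_features(x)
print(f.shape)  # (3, 256)
```

Learned multi-scale architectures replace the fixed box filters with trainable convolutions (often with different kernel sizes or dilations per branch) and fuse the branches with attention or hierarchical aggregation, but the principle of combining fine and coarse views is the same.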
Papers
IL-MCAM: An interactive learning and multi-channel attention mechanism-based weakly supervised colorectal histopathology image classification approach
Haoyuan Chen, Chen Li, Xiaoyan Li, Md Mamunur Rahaman, Weiming Hu, Yixin Li, Wanli Liu, Changhao Sun, Hongzan Sun, Xinyu Huang, Marcin Grzegorzek
MS-RNN: A Flexible Multi-Scale Framework for Spatiotemporal Predictive Learning
Zhifeng Ma, Hao Zhang, Jie Liu
One Model to Synthesize Them All: Multi-contrast Multi-scale Transformer for Missing Data Imputation
Jiang Liu, Srivathsa Pasumarthi, Ben Duffy, Enhao Gong, Keshav Datta, Greg Zaharchuk
Lightweight Bimodal Network for Single-Image Super-Resolution via Symmetric CNN and Recursive Transformer
Guangwei Gao, Zhengxue Wang, Juncheng Li, Wenjie Li, Yi Yu, Tieyong Zeng