Multi-Granularity
Multi-granularity research focuses on leveraging information at multiple levels of detail (e.g., pixel, object, and scene) to improve the performance and robustness of machine learning models. Current work emphasizes novel architectures, such as transformers and multi-branch networks, that integrate and process information across these granularities, often using attention mechanisms and hierarchical representations. The approach is proving valuable across diverse fields, improving accuracy and efficiency in tasks ranging from medical image analysis and traffic prediction to open-vocabulary object detection and question answering. The resulting gains in performance and interpretability have significant implications for both scientific understanding and real-world applications.
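To make the core idea concrete, here is a minimal, dependency-free sketch of attention-based fusion across granularity levels. It is an illustration, not a reconstruction of any specific paper's method: the feature vectors, the three granularity levels (pixel, object, scene), and the raw attention scores are all hypothetical, and real systems would learn these scores and operate on high-dimensional tensors.

```python
import math

def softmax(scores):
    """Normalize raw scores into attention weights that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_multi_granularity(features, scores):
    """Fuse feature vectors from different granularity levels
    (e.g., pixel, object, scene) into one representation,
    weighting each level by its softmax attention weight."""
    weights = softmax(scores)
    dim = len(features[0])
    fused = [0.0] * dim
    for w, feat in zip(weights, features):
        for i, v in enumerate(feat):
            fused[i] += w * v
    return fused

# Hypothetical 4-dimensional features at three granularity levels.
pixel_feat = [1.0, 0.0, 0.0, 0.0]
object_feat = [0.0, 1.0, 0.0, 0.0]
scene_feat = [0.0, 0.0, 1.0, 0.0]

# Hypothetical attention scores; in practice these are learned.
fused = fuse_multi_granularity(
    [pixel_feat, object_feat, scene_feat], scores=[0.5, 1.0, 0.2]
)
```

Because the weights sum to one, the fused vector is a convex combination of the per-level features; the level with the highest score (here, the object level) contributes most to the result.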
Papers
MTU-Bench: A Multi-granularity Tool-Use Benchmark for Large Language Models
Pei Wang, Yanan Wu, Zekun Wang, Jiaheng Liu, Xiaoshuai Song, Zhongyuan Peng, Ken Deng, Chenchen Zhang, Jiakai Wang, Junran Peng, Ge Zhang, Hangyu Guo, Zhaoxiang Zhang, Wenbo Su, Bo Zheng
PSVMA+: Exploring Multi-granularity Semantic-visual Adaption for Generalized Zero-shot Learning
Man Liu, Huihui Bai, Feng Li, Chunjie Zhang, Yunchao Wei, Meng Wang, Tat-Seng Chua, Yao Zhao