Paper ID: 2411.13033
LMM-driven Semantic Image-Text Coding for Ultra Low-bitrate Learned Image Compression
Shimon Murai, Heming Sun, Jiro Katto
Supported by powerful generative models, low-bitrate learned image compression (LIC) models that optimize for perceptual metrics have become feasible. Some of the most advanced models achieve high compression rates and superior perceptual quality by using image captions as side information. This paper demonstrates that a large multi-modal model (LMM) can both generate captions and compress them within a single model. We also propose a novel semantic-perceptual-oriented fine-tuning method applicable to any LIC network, resulting in a 41.58% improvement in LPIPS BD-rate compared to existing methods. Our implementation and pre-trained weights are available at this https URL.
Submitted: Nov 20, 2024
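
The abstract reports a 41.58% LPIPS BD-rate gain. Below is a minimal sketch of how such a figure can be computed, assuming the standard Bjøntegaard-delta definition applied to (bpp, LPIPS) operating points with PCHIP interpolation of log-rate over the overlapping quality range; the function name, variable names, and example numbers are illustrative and are not taken from the paper's code.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def bd_rate(rate_anchor, dist_anchor, rate_test, dist_test):
    """Bjontegaard delta rate (%) between two rate-distortion curves.

    rate_*: bits per pixel (positive values).
    dist_*: quality values where higher is better (e.g. negated LPIPS),
            sorted in strictly increasing order.
    Returns the average bitrate change of the test codec vs. the anchor;
    negative values mean the test codec needs fewer bits.
    """
    log_r_anchor = np.log(np.asarray(rate_anchor, dtype=float))
    log_r_test = np.log(np.asarray(rate_test, dtype=float))

    # Interpolate log-rate as a function of quality for each curve.
    f_anchor = PchipInterpolator(dist_anchor, log_r_anchor)
    f_test = PchipInterpolator(dist_test, log_r_test)

    # Integrate only over the quality range covered by both curves.
    lo = max(np.min(dist_anchor), np.min(dist_test))
    hi = min(np.max(dist_anchor), np.max(dist_test))
    avg_log_diff = (f_test.integrate(lo, hi) - f_anchor.integrate(lo, hi)) / (hi - lo)

    return (np.exp(avg_log_diff) - 1.0) * 100.0

# Illustrative (made-up) ultra low-bitrate operating points.
# LPIPS is lower-is-better, so it is negated to form a quality axis.
bpp_anchor = np.array([0.02, 0.04, 0.08, 0.16])
lpips_anchor = np.array([0.45, 0.38, 0.30, 0.22])
bpp_test = np.array([0.02, 0.04, 0.08, 0.16])
lpips_test = np.array([0.40, 0.33, 0.26, 0.19])

print(f"LPIPS BD-rate: {bd_rate(bpp_anchor, -lpips_anchor, bpp_test, -lpips_test):.2f}%")
```

The negation of LPIPS is one common convention for feeding a lower-is-better metric into a BD-rate routine that expects a higher-is-better quality axis; the paper itself may use a different interpolation or anchoring setup.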