Paper ID: 2504.01053 • Published Apr 1, 2025
Knowledge-Base based Semantic Image Transmission Using CLIP
Chongyang Li, Yanmei He, Tianqian Zhang, Mingjian He, Shouyin Liu
Central China Normal University
This paper proposes a novel Knowledge-Base (KB) assisted semantic
communication framework for image transmission. At the receiver, a Facebook AI
Similarity Search (FAISS) based vector database is constructed by extracting
semantic embeddings from images using the Contrastive Language-Image
Pre-Training (CLIP) model. During transmission, the transmitter first extracts
a 512-dimensional semantic feature with the CLIP model and then compresses it
with a lightweight neural network before sending it over the channel. After
receiving the signal, the receiver reconstructs the feature back to 512
dimensions and performs similarity matching against the KB to retrieve the most
semantically similar image. Semantic transmission success is determined by
category consistency between the transmitted and retrieved images, rather than
by traditional metrics such as Peak Signal-to-Noise Ratio (PSNR). The proposed
system prioritizes
semantic accuracy, offering a new evaluation paradigm for semantic-aware
communication systems. Experimental validation on CIFAR100 demonstrates the
effectiveness of the framework in achieving semantic image transmission.
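The abstract describes an end-to-end pipeline: CLIP feature extraction at the transmitter, lightweight compression over a noisy channel, FAISS-based retrieval from the receiver's KB, and category-consistency evaluation on CIFAR-100. The sketch below shows one plausible way to wire these pieces together in Python. The openai/CLIP package, the ViT-B/32 backbone, the 64-dimensional bottleneck, the AWGN channel model, and the untrained FeatureCodec are illustrative assumptions, not details taken from the paper.

```python
# pip install faiss-cpu torch torchvision git+https://github.com/openai/CLIP.git
import numpy as np
import torch
import torch.nn as nn
import faiss
import clip
from torchvision.datasets import CIFAR100

device = "cuda" if torch.cuda.is_available() else "cpu"
# ViT-B/32 yields 512-dimensional image embeddings, matching the paper.
model, preprocess = clip.load("ViT-B/32", device=device)

def build_kb(dataset, batch_size=256):
    """Receiver-side KB: encode every image with CLIP and store the
    L2-normalised embeddings in an inner-product FAISS index."""
    feats, labels = [], []
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
    with torch.no_grad():
        for images, targets in loader:
            emb = model.encode_image(images.to(device)).float().cpu().numpy()
            feats.append(emb)
            labels.extend(targets.tolist())
    feats = np.concatenate(feats).astype("float32")
    faiss.normalize_L2(feats)                  # cosine similarity via inner product
    index = faiss.IndexFlatIP(feats.shape[1])
    index.add(feats)
    return index, np.array(labels)

class FeatureCodec(nn.Module):
    """Lightweight feature compressor; the 64-dim bottleneck, two-layer MLPs
    and the AWGN channel below are assumptions, not the paper's design."""
    def __init__(self, dim=512, compressed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(),
                                     nn.Linear(256, compressed_dim))
        self.decoder = nn.Sequential(nn.Linear(compressed_dim, 256), nn.ReLU(),
                                     nn.Linear(256, dim))

    def forward(self, x, snr_db=10.0):
        z = self.encoder(x)                              # transmitter side
        sigma = (z.pow(2).mean() / 10 ** (snr_db / 10)).sqrt()
        z_noisy = z + sigma * torch.randn_like(z)        # hypothetical AWGN channel
        return self.decoder(z_noisy)                     # receiver side, back to 512-d

def transmit(image, true_label, codec, index, kb_labels, snr_db=10.0):
    """Send one image's semantic feature and check category consistency."""
    with torch.no_grad():
        x = preprocess(image).unsqueeze(0).to(device)
        feat = model.encode_image(x).float()             # 512-dim semantic feature
        recon = codec(feat, snr_db=snr_db).cpu().numpy().astype("float32")
    faiss.normalize_L2(recon)
    _, idx = index.search(recon, 1)                      # nearest KB embedding
    retrieved_label = int(kb_labels[idx[0, 0]])
    # Success = same CIFAR-100 category as the transmitted image, not pixel fidelity.
    return retrieved_label, retrieved_label == true_label

# Usage sketch: KB from the CIFAR-100 training split, one test image at 10 dB SNR.
kb_set = CIFAR100("./data", train=True, download=True, transform=preprocess)
index, kb_labels = build_kb(kb_set)
test_set = CIFAR100("./data", train=False, download=True)   # PIL images for preprocess
codec = FeatureCodec().to(device).eval()                     # would be trained in practice
img, label = test_set[0]
retrieved, success = transmit(img, label, codec, index, kb_labels)
print(f"retrieved category {retrieved}, semantic success: {success}")
```

In practice the FeatureCodec would be trained (for example, to minimise the reconstruction error of the 512-dimensional CLIP feature under channel noise) before the retrieval and category-consistency evaluation are run; the sketch only illustrates the data flow the abstract describes.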