Concepedia

Publication | Closed Access

SG2SC: A Generative Semantic Communication Framework for Scene Understanding-Oriented Image Transmission

Citations: 13 | References: 12 | Year: 2024

Abstract

In recent years, semantic communication based on deep-learning joint source-channel coding has garnered significant attention. It uses network models trained end-to-end to represent signals as embedding vectors and has demonstrated superior performance compared to traditional methods. However, because embedding vectors differ substantially from human language, they struggle to succinctly capture abstract semantics such as scenes. In this paper, we introduce the Scene Graph-based Generative Semantic Communication (SG2SC) framework, built upon structured semantics and conditional generative models for image transmission. SG2SC aims to faithfully convey abstract semantics such as scenes. It begins by detecting object categories, spatial attributes, and inter-category relationships in the image, representing scene semantics as a graph structure. It then employs graph neural networks for scene graph encoding, decoding, and transmission, and finally uses a conditional diffusion model for semantic decoding. Benefiting from its concise graph-structured semantics, SG2SC outperforms traditional methods, semantic communication based on deep joint source-channel coding, and segmentation-based generative semantic communication in both noise resistance and encoding efficiency.
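The pipeline described above (scene graph extraction, graph-based encoding, noisy transmission, conditional generative decoding) can be illustrated with a minimal sketch. This is not the authors' architecture: the graph, embedding dimensions, and single mean-aggregation message-passing round are all hypothetical stand-ins, and the diffusion-based decoder is represented only by a comment, since the abstract gives no implementation details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene graph: nodes are object categories, edges are relationships.
# (Illustrative only; the paper's detector and GNN are not specified here.)
nodes = ["person", "horse", "field"]
edges = [(0, 1, "riding"), (1, 2, "standing_on")]

# Hypothetical per-category embedding table.
dim = 8
node_emb = {n: rng.standard_normal(dim) for n in nodes}

def gnn_encode(nodes, edges, node_emb):
    """One round of mean-aggregation message passing (a minimal GNN stand-in)."""
    h = np.stack([node_emb[n] for n in nodes])
    agg = np.zeros_like(h)
    deg = np.zeros(len(nodes))
    for src, dst, _rel in edges:
        agg[dst] += h[src]
        agg[src] += h[dst]
        deg[src] += 1
        deg[dst] += 1
    deg = np.maximum(deg, 1)          # avoid division by zero for isolated nodes
    return h + agg / deg[:, None]     # updated node embeddings to transmit

def awgn_channel(x, snr_db):
    """Pass symbols through an additive white Gaussian noise channel."""
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return x + rng.standard_normal(x.shape) * np.sqrt(noise_power)

tx = gnn_encode(nodes, edges, node_emb)
rx = awgn_channel(tx, snr_db=10.0)

# The receiver would condition a diffusion model on `rx` to regenerate an
# image consistent with the transmitted scene semantics (omitted here).
mse = np.mean((rx - tx) ** 2)
print(f"per-symbol MSE at 10 dB SNR: {mse:.4f}")
```

Because only the compact graph embeddings cross the channel (here, a 3x8 array rather than pixel data), the payload is small and moderate channel noise perturbs the conditioning vectors only slightly, which is the intuition behind the framework's claimed noise resistance and encoding efficiency.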
