Paper ID: 2409.16615

DeformStream: Deformation-based Adaptive Volumetric Video Streaming

Boyan Li, Yongting Chen, Dayou Zhang, Fangxin Wang

Volumetric video streaming offers immersive 3D experiences but faces significant challenges from the high bandwidth and low latency required to transmit detailed content in real time. Traditional methods such as point cloud streaming compromise visual quality when zoomed in, while neural rendering techniques remain too computationally intensive for real-time use. Mesh-based streaming stands out by preserving surface detail and connectivity, offering a more refined representation of 3D content; however, traditional mesh streaming methods typically transmit data on a per-frame basis and fail to exploit temporal redundancy across frames, resulting in inefficient bandwidth usage and poor adaptability to fluctuating network conditions. We introduce DeformStream, a novel framework for Deformation-based Adaptive Volumetric Video Streaming that leverages the inherent deformability of mesh-based representations. DeformStream uses embedded deformation to reconstruct subsequent frames from inter-frame motion, significantly reducing bandwidth usage while preserving visual coherence between frames. To address frame-reconstruction overhead and network adaptability, we formulate a new QoE model that accounts for client-side deformation latency and design a dynamic programming algorithm that optimizes the trade-off between visual quality and bandwidth consumption under varying network conditions. Our evaluation demonstrates that DeformStream outperforms existing mesh-based streaming systems in both bandwidth efficiency and visual quality, offering a robust solution for real-time volumetric video applications.
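The abstract describes the dynamic-programming scheduler only at a high level, so the following is a minimal sketch under simplified assumptions: each frame is delivered either as a full keyframe mesh or as a compact embedded-deformation payload reconstructed on the client, and QoE is modeled as quality minus a bandwidth weight minus a client-side deformation latency penalty. All constants, function names (plan, best), and the Lagrangian-style bandwidth weight are illustrative assumptions, not details from the paper.

# A minimal sketch (not the authors' code) of a dynamic-programming
# frame-scheduling pass in the spirit of DeformStream: each frame is sent
# either as a full keyframe mesh or as a small deformation payload deformed
# from the previous frame. All sizes and QoE weights below are assumed.

from functools import lru_cache

KEYFRAME_BITS = 8_000_000   # assumed size of a full mesh frame (bits)
DEFORM_BITS   = 400_000     # assumed size of an embedded-deformation payload
BASE_QUALITY  = 100.0       # assumed quality score of a fresh keyframe
DRIFT_PER_HOP = 4.0         # assumed quality loss per deformation hop
DEFORM_DELAY  = 1.5         # assumed client-side deformation latency penalty
LAMBDA_BITS   = 1e-5        # assumed weight trading quality against bandwidth

def plan(num_frames: int, max_hops: int = 8):
    """Pick keyframe/deform per frame to maximize a toy QoE:
    quality - lambda * bits - deformation latency penalty."""

    @lru_cache(maxsize=None)
    def best(i: int, hops: int):
        # hops = length of the deformation chain since the last keyframe.
        if i == num_frames:
            return 0.0, ()
        # Option 1: send a full keyframe (resets the deformation chain).
        q_key = BASE_QUALITY - LAMBDA_BITS * KEYFRAME_BITS
        tail_v, tail_p = best(i + 1, 0)
        key = (q_key + tail_v, ("key",) + tail_p)
        # Option 2: deform from the previous frame, if the chain is short
        # enough that accumulated drift stays acceptable.
        if hops < max_hops:
            q_def = (BASE_QUALITY - DRIFT_PER_HOP * (hops + 1)
                     - LAMBDA_BITS * DEFORM_BITS - DEFORM_DELAY)
            tail_v, tail_p = best(i + 1, hops + 1)
            deform = (q_def + tail_v, ("deform",) + tail_p)
            return max(key, deform)
        return key

    # Starting at hops = max_hops forces frame 0 to be a keyframe.
    return best(0, max_hops)

if __name__ == "__main__":
    qoe, schedule = plan(12)
    print(f"QoE = {qoe:.1f}")
    print(" ".join(schedule))

In a deployed system, the bandwidth term would typically be a hard per-segment budget driven by throughput estimates rather than a fixed Lagrangian weight, and the quality and latency terms would come from the paper's QoE model rather than the constants assumed here.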

Submitted: Sep 25, 2024