Theatre Scene Description
Theatre scene description research focuses on computationally representing and understanding theatrical environments, with the twin goals of creating realistic, diverse virtual scenes and enabling applications for visually impaired individuals. Current efforts leverage neural radiance fields (NeRFs), 3D Gaussian splatting, diffusion models, and transformer-based architectures to generate and manipulate 3D scene representations from various input modalities (images, audio, text), often incorporating techniques such as style transfer and scene completion. This work advances computer vision's capacity to generate realistic virtual environments and has direct implications for accessibility technologies and entertainment applications.
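To make the core technique concrete, below is a minimal, illustrative sketch of the volume-rendering quadrature that NeRF-style methods use to turn a learned radiance field into pixel colors. It is not taken from any paper listed here; the function name `composite_ray` and all shapes and sample values are hypothetical, standing in for the output of a trained field.

```python
# A minimal sketch of the volume-rendering step at the heart of NeRF-style
# scene representation (illustrative assumption, not any listed paper's code).
# We assume a field has already mapped 3D sample points along a camera ray
# to per-sample (density, RGB) pairs; here we composite them with the
# standard alpha-blending quadrature.
import numpy as np

def composite_ray(densities, colors, deltas):
    """Composite per-sample (density, color) pairs along one ray.

    densities: (N,) non-negative volume densities sigma_i
    colors:    (N, 3) RGB values c_i in [0, 1]
    deltas:    (N,) distances between adjacent samples
    Returns the rendered RGB color for the ray.
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of each ray segment
    alphas = 1.0 - np.exp(-densities * deltas)
    # T_i: transmittance, i.e. the probability the ray reaches sample i
    # without being absorbed by earlier samples
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas          # (N,) blending weights
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: 64 samples of a hypothetical radiance field along one ray.
rng = np.random.default_rng(0)
n = 64
sigma = rng.uniform(0.0, 2.0, size=n)        # densities from the field
rgb = rng.uniform(0.0, 1.0, size=(n, 3))     # per-sample colors
delta = np.full(n, 4.0 / n)                  # uniform sample spacing
print(composite_ray(sigma, rgb, delta))      # rendered pixel color
```

Because this compositing is differentiable, methods in this family can train the underlying field (whether an MLP, as in NeRF, or a set of Gaussians, as in splatting approaches) directly from posed images by backpropagating a photometric loss through the rendered colors.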
Papers
ChatDyn: Language-Driven Multi-Actor Dynamics Generation in Street Scenes
Yuxi Wei, Jingbo Wang, Yuwen Du, Dingju Wang, Liang Pan, Chenxin Xu, Yao Feng, Bo Dai, Siheng Chen
NeRF-NQA: No-Reference Quality Assessment for Scenes Generated by NeRF and Neural View Synthesis Methods
Qiang Qu, Hanxue Liang, Xiaoming Chen, Yuk Ying Chung, Yiran Shen
Mini-Splatting2: Building 360 Scenes within Minutes via Aggressive Gaussian Densification
Guangchi Fang, Bing Wang
DGTR: Distributed Gaussian Turbo-Reconstruction for Sparse-View Vast Scenes
Hao Li, Yuanyuan Gao, Haosong Peng, Chenming Wu, Weicai Ye, Yufeng Zhan, Chen Zhao, Dingwen Zhang, Jingdong Wang, Junwei Han