Indoor Scene
Indoor scene understanding aims to computationally represent and interpret the three-dimensional structure, semantic content, and visual appearance of indoor environments. Current research heavily utilizes deep learning, focusing on model architectures like Neural Radiance Fields (NeRFs), graph convolutional networks (GCNs), and diffusion models to achieve tasks such as 3D reconstruction, semantic segmentation, and novel view synthesis from various input modalities (RGB, RGB-D, LiDAR, acoustic echoes). These advancements are crucial for applications in robotics, augmented reality, and virtual reality, enabling more robust and intelligent interaction with indoor spaces.
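Of the model families mentioned above, Neural Radiance Fields are the simplest to illustrate concretely: a NeRF renders a pixel by compositing per-sample densities and colors along a camera ray. The sketch below (a minimal pure-Python illustration, not any specific paper's implementation; the function name and inputs are hypothetical) shows the standard alpha-compositing rule, where each sample's weight is its opacity times the transmittance accumulated so far.

```python
import math

def composite(sigmas, colors, deltas):
    """NeRF-style volume rendering along one ray (illustrative sketch).

    sigmas: per-sample densities, colors: per-sample RGB triples,
    deltas: distances between consecutive samples along the ray.
    Returns the composited RGB color and the per-sample weights.
    """
    T = 1.0                 # transmittance: fraction of light not yet absorbed
    rgb = [0.0, 0.0, 0.0]
    weights = []
    for sigma, c, d in zip(sigmas, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * d)  # opacity of this ray segment
        w = T * alpha                       # this sample's contribution weight
        weights.append(w)
        rgb = [r + w * ci for r, ci in zip(rgb, c)]
        T *= 1.0 - alpha                    # light remaining after this segment
    return rgb, weights
```

A useful property to check: the weights sum to one minus the final transmittance, so a dense first sample occludes everything behind it, which is what lets a trained NeRF represent opaque indoor surfaces.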
94 papers
Papers
March 24, 2025
PDDM: Pseudo Depth Diffusion Model for RGB-PD Semantic Segmentation Based in Complex Indoor Scenes
Xinhua Xu, Hong Liu, Jianbing Wu, Jinfu Liu (Peking University)

NeRFPrior: Learning Neural Radiance Field as a Prior for Indoor Scene Reconstruction
Wenyuan Zhang, Emily Yue-ting Jia, Junsheng Zhou, Baorui Ma, Kanle Shi, Yu-Shen Liu (Tsinghua University ● Kuaishou Technology ● Wayne State University)