Indoor Scene Generation
Indoor scene generation focuses on computationally creating realistic 3D indoor environments from inputs such as text descriptions or scene graphs, with the goal of improving the quality, speed, and controllability of the process. Current research emphasizes large language models and diffusion models, often combined with scene graphs or other structural representations, to generate high-fidelity scenes with diverse layouts and object arrangements. The field matters for virtual and augmented reality, gaming, interior design, and embodied AI: it gives researchers a way to build realistic training environments and designers more efficient authoring tools.
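As a rough illustration of the structural representations mentioned above, the sketch below shows one plausible way to encode an indoor scene as a graph of objects and spatial relations. It is a minimal, hypothetical example, not taken from any specific paper or library; the class names, relation predicates, and the downstream layout-model step it alludes to are all assumptions.

```python
# A minimal sketch of a scene-graph representation that is often used as the
# structural input for indoor scene generation. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    """An object node: category plus a coarse 3D bounding box (position, size, yaw)."""
    category: str                      # e.g. "bed", "nightstand"
    position: tuple = (0.0, 0.0, 0.0)  # x, y, z in metres
    size: tuple = (1.0, 1.0, 1.0)      # width, depth, height in metres
    yaw: float = 0.0                   # rotation about the vertical axis, radians


@dataclass
class SceneGraph:
    """Objects plus directed spatial relations such as ('nightstand', 'left_of', 'bed')."""
    room_type: str
    objects: dict = field(default_factory=dict)    # name -> SceneObject
    relations: list = field(default_factory=list)  # (subject, predicate, target) triples

    def add_object(self, name: str, obj: SceneObject) -> None:
        self.objects[name] = obj

    def relate(self, subject: str, predicate: str, target: str) -> None:
        self.relations.append((subject, predicate, target))


# Usage: in many pipelines, a text prompt (parsed by an LLM) would yield a graph
# like this, and a layout model (e.g. a diffusion model over bounding boxes)
# would then turn the graph into concrete object poses.
bedroom = SceneGraph(room_type="bedroom")
bedroom.add_object("bed", SceneObject("bed", size=(1.6, 2.0, 0.5)))
bedroom.add_object("nightstand", SceneObject("nightstand", size=(0.5, 0.4, 0.6)))
bedroom.relate("nightstand", "left_of", "bed")
print(bedroom.relations)
```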