3D Content
3D content generation and manipulation are active research areas that aim to create realistic, versatile three-dimensional models and scenes. Current efforts focus on real-time rendering, AI-assisted collaborative creation, and style transfer, using techniques such as Gaussian splatting and diffusion models, often incorporating 3D priors or leveraging foundation models such as the Segment Anything Model (SAM). These advances enable more efficient and accurate 3D content creation and analysis for applications including virtual and augmented reality, computer-aided design, and medical imaging.
Papers
ObPose: Leveraging Pose for Object-Centric Scene Inference and Generation in 3D
Yizhe Wu, Oiwi Parker Jones, Ingmar Posner
TRAVEL: Traversable Ground and Above-Ground Object Segmentation Using Graph Representation of 3D LiDAR Scans
Minho Oh, Euigon Jung, Hyungtae Lim, Wonho Song, Sumin Hu, Eungchang Mason Lee, Junghee Park, Jaekyung Kim, Jangwoo Lee, Hyun Myung
RayTran: 3D pose estimation and shape reconstruction of multiple objects from videos with ray-traced transformers
Michał J. Tyszkiewicz, Kevis-Kokitsi Maninis, Stefan Popov, Vittorio Ferrari
Egocentric Prediction of Action Target in 3D
Yiming Li, Ziang Cao, Andrew Liang, Benjamin Liang, Luoyao Chen, Hang Zhao, Chen Feng