View Consistency
View consistency in computer vision is the requirement that representations of a 3D scene observed from multiple viewpoints remain geometrically and photometrically coherent. Current research emphasizes algorithms and model architectures, such as neural radiance fields (NeRFs) and diffusion models, that enforce this consistency across tasks including 3D reconstruction, image generation, and autonomous driving perception. This work is crucial for the accuracy and robustness of applications that rely on multi-sensor data or must generate realistic 3D models from 2D images. The resulting improvements in data quality and model performance have significant implications for fields like augmented and virtual reality, robotics, and medical imaging.
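The geometric check underlying many multi-view consistency methods is reprojection: a pixel in one view, lifted to 3D using its depth, should land on a photometrically similar pixel when projected into another view. The sketch below illustrates this with a toy pinhole camera and a pure horizontal translation between views; the intrinsics, pose, and depth values are illustrative assumptions, not taken from any of the papers listed here.

```python
import numpy as np

def reproject(u, v, depth, K, R, t):
    """Lift pixel (u, v) with known depth from view 1 into 3D, then
    project that point into view 2 via the relative pose (R, t).
    K is the shared 3x3 pinhole intrinsics matrix (toy assumption)."""
    # Back-project to a 3D point in view-1 camera coordinates.
    p1 = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Transform into view-2 camera coordinates and project to pixels.
    p2 = K @ (R @ p1 + t)
    return p2[:2] / p2[2]

# Toy stereo-like pair: identity rotation, 10 cm baseline along x.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([-0.1, 0.0, 0.0])

# A point at the principal point with 2 m depth shifts horizontally
# by the stereo disparity f * baseline / depth = 500 * 0.1 / 2 = 25 px.
u2, v2 = reproject(320.0, 240.0, depth=2.0, K=K, R=R, t=t)
print(u2, v2)  # -> 295.0 240.0
```

In practice, a consistency loss compares image values (or learned features) at the source pixel and its reprojection; NeRF-style methods get this coherence implicitly by rendering all views from a single shared 3D representation.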
Papers
Efficient Perspective-Correct 3D Gaussian Splatting Using Hybrid Transparency
Florian Hahlbohm, Fabian Friederichs, Tim Weyrich, Linus Franke, Moritz Kappel, Susana Castillo, Marc Stamminger, Martin Eisemann, Marcus Magnor
SeMv-3D: Towards Semantic and Multi-view Consistency simultaneously for General Text-to-3D Generation with Triplane Priors
Xiao Cai, Pengpeng Zeng, Lianli Gao, Junchen Zhu, Jiaxin Zhang, Sitong Su, Heng Tao Shen, Jingkuan Song
Fine-detailed Neural Indoor Scene Reconstruction using multi-level importance sampling and multi-view consistency
Xinghui Li, Yuchen Ji, Xiansong Lai, Wanting Zhang
An Optimization Framework to Enforce Multi-View Consistency for Texturing 3D Meshes
Zhengyi Zhao, Chen Song, Xiaodong Gu, Yuan Dong, Qi Zuo, Weihao Yuan, Liefeng Bo, Zilong Dong, Qixing Huang
Augmented Reality based Simulated Data (ARSim) with multi-view consistency for AV perception networks
Aqeel Anwar, Tae Eun Choe, Zian Wang, Sanja Fidler, Minwoo Park