High Fidelity
High-fidelity generation aims to create highly realistic and accurate outputs across domains ranging from 3D models and video to audio and simulation. Current research improves fidelity through generative frameworks such as diffusion models, GANs, and Bayesian approaches, often incorporating techniques like multi-view guidance, fine-tuning, and latent-space manipulation to enhance detail and consistency. This pursuit of high fidelity is crucial for fields including autonomous driving, medical imaging, and digital content creation, where it enables more accurate simulations, improved diagnostics, and more realistic virtual experiences.
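Several of the papers below build on diffusion guidance. As a minimal sketch of one common fidelity-boosting technique, classifier-free guidance blends a model's unconditional and conditional noise predictions at each denoising step; the arrays and guidance scale here are illustrative placeholders, not values from any listed paper.

```python
import numpy as np

def cfg_combine(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    prediction toward the conditional one. A scale of 1.0 recovers the
    plain conditional prediction; scales > 1.0 push samples harder
    toward the conditioning signal, trading diversity for fidelity."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy noise predictions standing in for a diffusion model's two outputs.
eps_u = np.array([0.1, -0.2, 0.3])   # prediction without conditioning
eps_c = np.array([0.2, -0.1, 0.5])   # prediction with conditioning (e.g. a prompt)

print(cfg_combine(eps_u, eps_c, 2.0))  # -> [0.3 0.  0.7]
```

In a full sampler this combined prediction replaces the raw model output inside each denoising update; the guidance scale is the knob most commonly tuned for perceptual fidelity.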
Papers
SHYI: Action Support for Contrastive Learning in High-Fidelity Text-to-Image Generation
Tianxiang Xia, Lin Xiao, Yannick Montorfani, Francesco Pavia, Enis Simsar, Thomas Hofmann
Boosting Diffusion Guidance via Learning Degradation-Aware Models for Blind Super Resolution
Shao-Hao Lu, Ren Wang, Ching-Chun Huang, Wei-Chen Chiu
Pragmatist: Multiview Conditional Diffusion Models for High-Fidelity 3D Reconstruction from Unposed Sparse Views
Songchun Zhang, Chunhui Zhao
Verification and Validation of a Vision-Based Landing System for Autonomous VTOL Air Taxis
Ayoosh Bansal, Duo Wang, Mikael Yeghiazaryan, Yangge Li, Chuyuan Tao, Hyung-Jin Yoon, Prateek Arora, Christos Papachristos, Petros Voulgaris, Sayan Mitra, Lui Sha, Naira Hovakimyan
MetaFormer: High-fidelity Metalens Imaging via Aberration Correcting Transformers
Byeonghyeon Lee, Youbin Kim, Yongjae Jo, Hyunsu Kim, Hyemi Park, Yangkyu Kim, Debabrata Mandal, Praneeth Chakravarthula, Inki Kim, Eunbyung Park
Modeling Eye Gaze Velocity Trajectories using GANs with Spectral Loss for Enhanced Fidelity
Shailendra Bhandari, Pedro Lencastre, Rujeena Mathema, Alexander Szorkovszky, Anis Yazidi, Pedro Lind