Diffusion-Based Models
Diffusion-based models are transforming various fields by learning to reverse a stochastic noising process, which lets them generate high-quality data. Current research focuses on refining these models for diverse applications, including image enhancement, 3D scene generation, and solving inverse problems, often integrating attention mechanisms and transformer networks for improved efficiency and performance. The approach excels at producing realistic and diverse outputs, driving advances in areas ranging from medical imaging and satellite image analysis to human motion synthesis and drug discovery. The resulting improvements in data generation and analysis have broad implications across numerous scientific disciplines and practical applications.
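To make the "stochastic process" idea concrete, here is a minimal sketch of the forward (noising) half of a DDPM-style diffusion model. It is illustrative only: the linear beta schedule, the toy data, and the helper name `forward_diffuse` are assumptions for this example, not from any paper listed below. The key closed-form identity is that a data point can be noised to any step `t` in one draw, `x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps`; the generative model is then trained to invert this process.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style schedule.

    x0:    clean data sample (any array shape)
    t:     integer timestep index into the schedule
    betas: per-step noise variances (the schedule)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]      # cumulative signal retention up to step t
    noise = rng.standard_normal(x0.shape)  # eps ~ N(0, I)
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)      # illustrative linear schedule, 1000 steps
x0 = rng.standard_normal(64)               # toy "data" vector
xT = forward_diffuse(x0, 999, betas, rng)  # at the final step, nearly pure Gaussian noise
```

At the last timestep `alpha_bar` is tiny, so `xT` is almost independent of `x0`; a learned reverse model (typically a neural network predicting the added noise) then walks samples back from this Gaussian prior to data.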
Papers
HyperDiffusion: Generating Implicit Neural Fields with Weight-Space Diffusion
Ziya Erkoç, Fangchang Ma, Qi Shan, Matthias Nießner, Angela Dai
A Unified Learning Model for Estimating Fiber Orientation Distribution Functions on Heterogeneous Multi-shell Diffusion-weighted MRI
Tianyuan Yao, Nancy Newlin, Praitayini Kanakaraj, Vishwesh Nath, Leon Y Cai, Karthik Ramadass, Kurt Schilling, Bennett A. Landman, Yuankai Huo