Novel View Acoustic Synthesis
Novel-view acoustic synthesis (NVAS) aims to render the sound a listener would hear at a new viewpoint, given audio and visual observations captured at a source viewpoint. Current research centers on neural network models that incorporate visual cues (e.g., images, depth maps, 3D scene reconstructions) to improve the accuracy and efficiency of sound rendering, with approaches ranging from Gaussian splatting to transformer-based architectures. The field matters because it advances immersive audio for virtual and augmented reality, and it also has implications for audio-visual scene understanding and related areas.
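To make the task setup concrete, below is a minimal sketch of what an NVAS model interface can look like: a small transformer encoder that maps a source-view audio spectrogram, conditioned on the source and target poses plus a visual feature vector, to a target-view spectrogram. This is an illustrative assumption, not the method of any particular paper; the class name `NVASTransformer`, the pose encoding, and all tensor shapes are hypothetical.

```python
# Hypothetical NVAS-style model sketch (not from any specific paper):
# predict the target-view spectrogram from the source-view spectrogram,
# conditioned on source/target poses and a precomputed visual embedding.
import torch
import torch.nn as nn


class NVASTransformer(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_heads=4, n_layers=4, visual_dim=512):
        super().__init__()
        self.audio_in = nn.Linear(n_mels, d_model)        # embed each spectrogram frame
        self.pose_in = nn.Linear(2 * 7, d_model)          # source + target pose (xyz + quaternion), assumed format
        self.visual_in = nn.Linear(visual_dim, d_model)   # e.g., a CNN or CLIP image embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.audio_out = nn.Linear(d_model, n_mels)       # predict target-view frames

    def forward(self, src_spec, src_pose, tgt_pose, visual_feat):
        # src_spec: (B, T, n_mels); poses: (B, 7); visual_feat: (B, visual_dim)
        cond = self.pose_in(torch.cat([src_pose, tgt_pose], dim=-1)) \
             + self.visual_in(visual_feat)                 # (B, d_model) conditioning vector
        tokens = self.audio_in(src_spec) + cond.unsqueeze(1)  # broadcast conditioning over time
        return self.audio_out(self.encoder(tokens))        # (B, T, n_mels) target-view spectrogram


if __name__ == "__main__":
    model = NVASTransformer()
    spec = torch.randn(2, 100, 80)                 # toy batch of source-view mel spectrograms
    src_pose, tgt_pose = torch.randn(2, 7), torch.randn(2, 7)
    visual = torch.randn(2, 512)
    out = model(spec, src_pose, tgt_pose, visual)
    print(out.shape)                               # torch.Size([2, 100, 80])
```

In practice, published systems differ substantially in how they condition on geometry (e.g., rendering from a reconstructed 3D scene versus learned implicit features), but the input/output contract above captures the common shape of the problem.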