Acoustic Matching
Acoustic matching modifies audio so that it sounds as if it were recorded in a different environment or played on a different instrument, with the goal of making synthesized or transplanted audio more realistic and easier to manipulate. Current research employs a range of deep learning architectures, including diffusion models, transformers, and differentiable synthesizers, often trained with self-supervised or mutual-learning techniques to work around limited paired data. The field matters for creating convincing audio in virtual and augmented reality, improving speech intelligibility in challenging acoustic conditions, and providing new tools for music production and sound design.
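For intuition, the classical signal model behind environment matching is convolution of a dry signal with a room impulse response (RIR); learned systems typically estimate the RIR, or an equivalent transformation, from a reference recording rather than measuring it. The sketch below is a minimal illustration of that baseline operation using NumPy/SciPy, with a hypothetical function name and a synthetic RIR standing in for a measured or estimated one; it is not drawn from any specific paper listed here.

```python
import numpy as np
from scipy.signal import fftconvolve

def match_acoustics(dry_audio: np.ndarray, room_impulse_response: np.ndarray) -> np.ndarray:
    """Simulate recording `dry_audio` in the environment described by an RIR.

    Convolution with a room impulse response is the linear model that
    acoustic-matching systems approximate or estimate end to end.
    """
    wet = fftconvolve(dry_audio, room_impulse_response, mode="full")[: len(dry_audio)]
    # Normalize to avoid clipping once reverberant energy is added.
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Example with a synthetic exponentially decaying RIR (placeholder for a real one).
sr = 16000
rng = np.random.default_rng(0)
rir = rng.standard_normal(sr // 2) * np.exp(-np.linspace(0, 8, sr // 2))
dry = rng.standard_normal(sr)  # placeholder for real dry speech or music
wet = match_acoustics(dry, rir)
```

Deep-learning approaches in this area effectively replace the fixed RIR with a transformation inferred from a reference recording of the target environment or instrument.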
Papers
July 23, 2024
July 15, 2024
January 23, 2024
January 16, 2024
November 23, 2023
July 27, 2023
January 7, 2023
October 27, 2022
April 6, 2022