Neural Audio Synthesis
Neural audio synthesis aims to generate high-fidelity audio with deep learning models, pursuing both realism and controllable manipulation of sound characteristics. Current research emphasizes models that offer intuitive control over synthesized audio, exploring architectures such as variational autoencoders (VAEs), generative adversarial networks (GANs), and differentiable digital signal processing (DDSP). These advances matter both for the scientific understanding of audio generation and for practical applications in music production, sound design, and speech processing, where they provide powerful tools for creating and manipulating sounds.
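To make the DDSP idea concrete, below is a minimal NumPy sketch of the harmonic (additive) oscillator that sits at the core of DDSP-style synthesizers: instantaneous frequency is integrated into phase, and a bank of sinusoids at integer multiples of the fundamental is summed with per-harmonic amplitude envelopes. In an actual DDSP model a neural network predicts the fundamental frequency and harmonic amplitudes from input features; here they are fixed arrays chosen purely for illustration.

```python
import numpy as np

def harmonic_synth(f0, harmonic_amps, sample_rate=16000):
    """Additive synthesis of a harmonic signal.

    f0: shape (n_samples,), per-sample fundamental frequency in Hz.
    harmonic_amps: shape (n_samples, n_harmonics), per-sample
        amplitude envelope for each harmonic.
    """
    n_samples, n_harmonics = harmonic_amps.shape
    # Integrate instantaneous frequency (cumulative sum) to obtain phase.
    phase = 2 * np.pi * np.cumsum(f0) / sample_rate            # (n_samples,)
    harmonic_numbers = np.arange(1, n_harmonics + 1)           # (n_harmonics,)
    # Each harmonic's phase is an integer multiple of the fundamental's.
    phases = phase[:, None] * harmonic_numbers[None, :]
    # Silence harmonics above the Nyquist frequency to avoid aliasing.
    above_nyquist = (f0[:, None] * harmonic_numbers[None, :]) >= sample_rate / 2
    amps = np.where(above_nyquist, 0.0, harmonic_amps)
    return np.sum(amps * np.sin(phases), axis=1)

# Example: a 0.5 s tone gliding from 220 Hz to 440 Hz,
# with 8 harmonics and a 1/k amplitude rolloff.
sr = 16000
n = sr // 2
f0 = np.linspace(220.0, 440.0, n)
amps = np.tile(1.0 / np.arange(1, 9), (n, 1))
audio = harmonic_synth(f0, amps, sample_rate=sr)
```

Because every operation here is differentiable, the same computation can be expressed in an autodiff framework and trained end-to-end, which is what makes the DDSP approach controllable: the network's outputs are interpretable synthesis parameters rather than raw waveform samples.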