Paper ID: 2405.04124

Comparative Study of State-based Neural Networks for Virtual Analog Audio Effects Modeling

Riccardo Simionato, Stefano Fasciani

Analog electronic circuits are at the core of an important category of musical devices, which includes a broad range of sound synthesizers and audio effects. The development of software that simulates analog musical devices, known as virtual analog modeling, is a significant sub-field of audio signal processing. Artificial neural networks are a promising technique for virtual analog modeling. While neural approaches have accurately modeled distortion circuits, they require architectural improvements to account for parameter conditioning and low-latency response. This article explores the application of recent machine learning advancements to virtual analog modeling. In particular, we compare State-Space models and Linear Recurrent Units against the more common Long Short-Term Memory networks. Our comparative study applies these black-box neural modeling techniques to various audio effects. We evaluate the performance and limitations of these models using multiple metrics, providing insights for future research and development. The metrics assess the models' ability to accurately replicate energy envelopes and frequency content, with a particular focus on transients in the audio signal. To incorporate control parameters into the models, we employ the Feature-wise Linear Modulation method. Long Short-Term Memory networks achieve higher accuracy in emulating distortions and equalizers, whereas the State-Space model, followed by the Long Short-Term Memory network integrated in an encoder-decoder structure and the Linear Recurrent Unit, performs best in emulating saturation and compression. When considering long-term time-variant characteristics, the State-Space model demonstrates the greatest capability to track history. Long Short-Term Memory networks tend to introduce audio artifacts.

Submitted: May 7, 2024
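
To make the conditioning mechanism mentioned in the abstract concrete, the sketch below shows one way Feature-wise Linear Modulation (FiLM) can inject effect control parameters into a recurrent model. The use of PyTorch, the layer sizes, and the placement of the FiLM layer after the recurrent block are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class FiLMConditionedLSTM(nn.Module):
    """Minimal sketch: an LSTM whose hidden activations are modulated by
    FiLM scale/shift vectors derived from effect control parameters.
    Hyperparameters here are assumptions for illustration only."""

    def __init__(self, hidden_size: int = 32, num_controls: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        # FiLM generator: maps control parameters to per-feature scale (gamma) and shift (beta).
        self.film = nn.Linear(num_controls, 2 * hidden_size)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor, controls: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, 1) input audio samples; controls: (batch, num_controls)
        h, _ = self.lstm(x)
        gamma, beta = self.film(controls).chunk(2, dim=-1)
        # Feature-wise affine modulation, broadcast over the time dimension.
        h = gamma.unsqueeze(1) * h + beta.unsqueeze(1)
        return self.out(h)


# Usage example with dummy data (shapes only; not the paper's dataset).
model = FiLMConditionedLSTM()
audio = torch.randn(4, 2048, 1)   # batch of input audio segments
knobs = torch.rand(4, 2)          # e.g. normalized control settings such as drive and tone
wet = model(audio, knobs)         # (4, 2048, 1) predicted processed signal
```

The same FiLM generator can condition other state-based backbones (e.g. a State-Space or Linear Recurrent Unit layer) by modulating their hidden features in the same feature-wise manner.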