Paper ID: 2310.14018

Temporal convolutional neural networks to generate a head-related impulse response from one direction to another

Tatsuki Kobayashi, Yoshiko Maruyama, Isao Nambu, Shohei Yano, Yasuhiro Wada

Virtual sound synthesis is a technology that allows users to perceive spatial sound through headphones or earphones. However, accurate virtual sound requires an individual head-related transfer function (HRTF), which is difficult to measure because it requires a specialized environment. In this study, we propose a method to generate HRTFs from one direction to another. To this end, we used temporal convolutional neural networks (TCNs) to generate head-related impulse responses (HRIRs), the time-domain counterparts of HRTFs. The TCNs were trained on publicly available HRIR datasets measured in the horizontal plane. Using the trained networks, we successfully generated HRIRs for directions other than the front direction in these datasets. To test the generalization of the method, we measured HRIRs for a new dataset and examined whether the trained networks could generate HRIRs for it as well. Although similarity as evaluated by spectral distortion degraded slightly, behavioral experiments with human participants showed that the generated HRIRs were perceptually equivalent to the measured ones. These results suggest that the proposed TCNs can generate personalized HRIRs from one direction to another, which could contribute to the personalization of virtual sound.
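
The abstract does not give architecture or evaluation details, so the following is a minimal sketch of the general idea, assuming a PyTorch implementation: a small causal TCN that maps a single-direction HRIR waveform to an estimated HRIR at another direction, plus a standard log-spectral-distortion metric. The class name `HRIRTranslator`, the layer sizes, the dilation schedule, and the FFT settings are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch (not the authors' exact architecture) of a temporal
# convolutional network that maps a measured HRIR at a source direction
# to an estimated HRIR at a target direction. All hyperparameters below
# are illustrative assumptions.
import torch
import torch.nn as nn


class TCNBlock(nn.Module):
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        # Causal padding: pad only on the left so the output at time t
        # depends on inputs at times <= t.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):
        y = nn.functional.pad(x, (self.pad, 0))  # left-pad the time axis
        return self.act(self.conv(y)) + x        # residual connection


class HRIRTranslator(nn.Module):
    """Maps a single-channel HRIR (e.g., 512 samples) measured at one
    direction to an estimated HRIR at another direction (hypothetical
    stand-in for the paper's TCN)."""

    def __init__(self, hidden=64, kernel_size=3, num_layers=6):
        super().__init__()
        self.inp = nn.Conv1d(1, hidden, 1)
        self.blocks = nn.Sequential(
            *[TCNBlock(hidden, kernel_size, 2 ** i) for i in range(num_layers)]
        )
        self.out = nn.Conv1d(hidden, 1, 1)

    def forward(self, hrir):                 # hrir: (batch, 1, time)
        return self.out(self.blocks(self.inp(hrir)))


def spectral_distortion(h_ref, h_est, n_fft=512):
    """Log-spectral distortion in dB between two impulse responses
    (a common definition; the paper's exact settings may differ)."""
    H_ref = torch.fft.rfft(h_ref, n=n_fft).abs().clamp_min(1e-12)
    H_est = torch.fft.rfft(h_est, n=n_fft).abs().clamp_min(1e-12)
    diff_db = 20.0 * torch.log10(H_ref / H_est)
    return torch.sqrt(torch.mean(diff_db ** 2, dim=-1))


# Example: translate a batch of 512-sample HRIRs and score the result.
model = HRIRTranslator()
src = torch.randn(8, 1, 512)                 # measured HRIRs, source direction
est = model(src)                             # estimated HRIRs, target direction
sd = spectral_distortion(src, est)           # per-example distortion in dB
```

Under these assumptions, such a model would be trained on paired HRIRs (source direction as input, target direction as output), for example with a time-domain MSE loss, and evaluated with the spectral-distortion metric above; the paper's actual training objective is not stated in the abstract.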

Submitted: Oct 21, 2023