Paper ID: 2112.13450
Acoustic scene classification using auditory datasets
Jayesh Kumpawat, Shubhajit Dey
The approach used in this work not only challenges some of the fundamental mathematical techniques applied in earlier experiments on the same problem but also opens new avenues for interesting results. The physics governing spectrograms is exploited in the project, along with an exploration of how it meets the demanding requirements of the task at hand. The major contributions of this project are improved mathematical techniques and problem-specific machine learning methods. Improved data analysis and audio data augmentation, such as frequency masking and random frequency-time stretching, are used in the project and explained in this paper. The principles of audio transforms were also explored in the methodology, and the insights gained were applied constructively in the later stages of the project; the use of deep learning is one such outcome. This paper also presents potential research directions and openings over both the short and long term. Although most of the results obtained are domain-specific, they are potent enough to inform novel solutions in various other domains.
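The paper names two augmentations, frequency masking and random frequency-time stretching, without reproducing their implementation in the abstract. As an illustration only, the following is a minimal NumPy sketch of those two ideas applied to a spectrogram array; all function names, parameter choices, and the nearest-neighbour resampling strategy are assumptions, not the authors' code:

```python
import numpy as np

def frequency_mask(spec, max_width=8, num_masks=1, rng=None):
    """SpecAugment-style frequency masking (assumed variant):
    zero out random contiguous frequency bands.

    spec: 2-D array of shape (freq_bins, time_frames).
    """
    rng = np.random.default_rng(rng)
    out = spec.copy()
    n_freq = out.shape[0]
    for _ in range(num_masks):
        # Mask width between 1 and max_width frequency bins.
        width = int(rng.integers(1, max_width + 1))
        start = int(rng.integers(0, n_freq - width))
        out[start:start + width, :] = 0.0
    return out

def random_time_stretch(spec, min_rate=0.8, max_rate=1.2, rng=None):
    """Random time stretching (assumed variant): resample the time
    axis by a random rate using nearest-neighbour indexing."""
    rng = np.random.default_rng(rng)
    rate = rng.uniform(min_rate, max_rate)
    n_time = spec.shape[1]
    new_len = max(1, int(round(n_time / rate)))
    idx = np.round(np.linspace(0, n_time - 1, new_len)).astype(int)
    return spec[:, idx]

# Toy usage on a dummy 64-bin x 100-frame "spectrogram".
spec = np.ones((64, 100))
augmented = frequency_mask(spec, max_width=8, rng=0)
stretched = random_time_stretch(spec, rng=0)
```

Both functions leave the frequency resolution intact; masking preserves the array shape while stretching changes only the number of time frames, so downstream models would typically pad or crop to a fixed length.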
Submitted: Dec 26, 2021