Paper ID: 2206.07200
Using Machine Learning to Augment Dynamic Time Warping Based Signal Classification
Arvind Seshan
Modern applications such as voice recognition rely on the ability to compare signals to pre-recorded ones in order to classify them. However, this comparison typically needs to ignore differences due to signal noise, temporal offset, signal magnitude, and other external factors. The Dynamic Time Warping (DTW) algorithm quantifies this similarity by finding corresponding regions between the signals and non-linearly warping one signal by stretching and shrinking it. Unfortunately, searching through all "warps" of a signal to find the best corresponding regions is computationally expensive. The FastDTW algorithm improves performance but sacrifices accuracy by considering only small signal warps. My goal is to improve the speed of DTW while maintaining high accuracy. My key insight is that in any particular application domain, signals exhibit specific types of variation. For example, the accelerometer signals measured for two different people would differ based on their stride length and weight. My system, called Machine Learning DTW (MLDTW), uses machine learning to learn the types of warps that are common in a particular domain. It then uses the learned model to improve DTW performance by limiting the search of potential warps appropriately. My results show that compared to FastDTW, MLDTW is at least as fast and reduces errors by 60% on average across four different data sets. These improvements will significantly impact a wide variety of applications (e.g., health monitoring) and enable more scalable processing of multivariate, higher-frequency, and longer signal recordings.
Submitted: Jun 14, 2022
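For readers unfamiliar with DTW, below is a minimal Python sketch of the classic dynamic-programming formulation the abstract refers to. The optional `band` parameter mimics a fixed Sakoe-Chiba constraint to illustrate how restricting the warp search reduces computation; this is an illustration only, not the paper's MLDTW method, whose warp constraint is learned per domain rather than fixed.

```python
import numpy as np

def dtw_distance(x, y, band=None):
    """DTW distance between 1-D signals x and y.

    If `band` is given, only cells with |i - j| <= band are explored,
    showing (with a fixed band, not MLDTW's learned constraint) how
    limiting the set of candidate warps cuts the search cost.
    """
    n, m = len(x), len(y)
    # D[i, j] = minimal cumulative cost of aligning x[:i] with y[:j].
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        lo = 1 if band is None else max(1, i - band)
        hi = m if band is None else min(m, i + band)
        for j in range(lo, hi + 1):
            cost = abs(x[i - 1] - y[j - 1])  # local point-wise cost
            # Each cell extends the cheapest of three warp moves:
            # match (diagonal), stretch y (up), or stretch x (left).
            D[i, j] = cost + min(D[i - 1, j - 1],
                                 D[i - 1, j],
                                 D[i, j - 1])
    return float(D[n, m])

# Example: two sinusoids with a temporal offset still align closely.
t = np.linspace(0, 2 * np.pi, 200)
a = np.sin(t)
b = np.sin(t + 0.4)                 # temporally offset copy
print(dtw_distance(a, b))           # full quadratic search
print(dtw_distance(a, b, band=20))  # constrained search, less work
```

The unconstrained version fills an n-by-m cost table, which is the quadratic expense the abstract describes; a band reduces the work to roughly n times the band width, at the risk of missing large warps, which is the accuracy trade-off attributed to FastDTW.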