Paper ID: 2206.12078

Multimodal sensor data fusion for in-situ classification of animal behavior using accelerometry and GNSS data

Reza Arablouei, Ziwei Wang, Greg J. Bishop-Hurley, Jiajun Liu

In this paper, we examine the use of data from multiple sensing modes, i.e., accelerometry and global navigation satellite system (GNSS), for classifying animal behavior. We extract three new features from the GNSS data, namely, distance from water point, median speed, and median estimated horizontal position error. We combine the information available from the accelerometry and GNSS data via two approaches. The first approach is based on concatenating the features extracted from the data of both sensing modes and feeding the concatenated feature vector into a multi-layer perceptron (MLP) classifier. The second approach is based on fusing the posterior probabilities predicted by two MLP classifiers, where the input to each classifier is the set of features extracted from the data of one sensing mode. We evaluate the performance of the developed multimodal animal behavior classification algorithms using two real-world datasets collected via smart cattle collar tags and ear tags. The leave-one-animal-out cross-validation results show that both approaches improve the classification performance appreciably compared with using the data of only one sensing mode. This improvement is most notable for the infrequent but important behaviors of walking and drinking. The algorithms developed based on both approaches require modest computational and memory resources and are hence suitable for implementation on the embedded systems of our collar tags and ear tags. However, the multimodal animal behavior classification algorithm based on posterior probability fusion is preferable to the one based on feature concatenation as it delivers better classification accuracy, has lower computational and memory complexity, is more robust to sensor data failure, and offers better modularity.
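As a rough illustration of the two fusion strategies described in the abstract, the following Python sketch contrasts feature concatenation with posterior probability fusion using generic MLP classifiers. The placeholder feature matrices, network sizes, fusion weight, and the helper fuse_posteriors are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical per-window feature matrices and labels (placeholders only):
#   X_acc  : features extracted from accelerometry data
#   X_gnss : features extracted from GNSS data (e.g., distance from water point,
#            median speed, median estimated horizontal position error)
#   y      : behavior labels (e.g., grazing, walking, drinking, resting)
rng = np.random.default_rng(0)
X_acc = rng.normal(size=(500, 12))
X_gnss = rng.normal(size=(500, 3))
y = rng.integers(0, 4, size=500)

# Approach 1: concatenate the features of both sensing modes and feed the
# resulting feature vector into a single MLP classifier.
clf_concat = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf_concat.fit(np.hstack([X_acc, X_gnss]), y)

# Approach 2: train one MLP per sensing mode and fuse their predicted
# class posterior probabilities.
clf_acc = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_acc, y)
clf_gnss = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000, random_state=0).fit(X_gnss, y)

def fuse_posteriors(p_acc, p_gnss, w=0.5):
    """Combine per-modality class posteriors; here a simple weighted average
    (the fusion rule and weight are assumptions for illustration)."""
    p = w * p_acc + (1.0 - w) * p_gnss
    return p / p.sum(axis=1, keepdims=True)

p_fused = fuse_posteriors(clf_acc.predict_proba(X_acc), clf_gnss.predict_proba(X_gnss))
y_pred_fusion = p_fused.argmax(axis=1)
```

The posterior-fusion variant keeps the two per-modality models independent, which is one way to see why it can be more modular and more robust to the failure of one sensor stream: if one modality is unavailable, the remaining classifier's posteriors can still be used on their own.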

Submitted: Jun 24, 2022