Paper ID: 2403.15693

Technical Report: Masked Skeleton Sequence Modeling for Learning Larval Zebrafish Behavior Latent Embeddings

Lanxin Xu, Shuo Wang

In this report, we introduce a novel self-supervised learning method for extracting latent embeddings from the behavior of larval zebrafish. Drawing inspiration from masked modeling techniques utilized in image processing with Masked Autoencoders (MAE) \cite{he2022masked} and in natural language processing with the Generative Pre-trained Transformer (GPT) \cite{radford2018improving}, we treat behavior sequences as a blend of images and language. For the skeletal sequences of swimming zebrafish, we propose a pioneering Transformer-CNN architecture, the Sequence Spatial-Temporal Transformer (SSTFormer), designed to capture the inter-frame correlation of different joints. This correlation is particularly valuable, as it reflects the coordinated movement of various parts of the fish body across adjacent frames. To handle the high frame rate, we segment the skeleton sequence into distinct time slices, analogous to "words" in a sentence, and employ self-attention transformer layers to encode the consecutive frames within each slice, capturing the spatial correlation among different joints. Furthermore, we incorporate a CNN-based attention module to enhance the representations produced by the transformer layers. Lastly, we introduce a temporal feature aggregation operation between time slices to improve the discrimination of similar behaviors.
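To make the described pipeline concrete, below is a minimal PyTorch sketch of the architecture outlined in the abstract: slicing the skeleton sequence, encoding each slice's joint tokens with self-attention, gating the features with a small CNN-based attention module, MAE-style masking of slice embeddings, and temporal aggregation across slices. Every name, shape, and hyperparameter here (SliceEncoder, SSTFormerSketch, mask_ratio, the Conv1d gate, the GRU aggregator) is an illustrative assumption, not the paper's actual implementation, which this excerpt does not specify.

```python
import torch
import torch.nn as nn


class SliceEncoder(nn.Module):
    """Encodes one time slice (a short run of consecutive frames):
    self-attention over joint tokens, then a CNN-based attention gate
    that reweights the transformer features (assumed form)."""

    def __init__(self, num_joints=10, dim=64, heads=4):
        super().__init__()
        self.embed = nn.Linear(2, dim)  # (x, y) joint coordinates -> tokens
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.cnn_attn = nn.Sequential(  # hypothetical CNN attention module
            nn.Conv1d(dim, dim, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, frames_per_slice, num_joints, 2)
        b, f, j, _ = x.shape
        tokens = self.embed(x).reshape(b, f * j, -1)  # frames x joints as tokens
        h = self.transformer(tokens)                  # spatial-temporal attention
        h = h.transpose(1, 2)                         # (b, dim, f*j)
        h = h * self.cnn_attn(h)                      # CNN-gated reweighting
        return h.mean(dim=-1)                         # one embedding per slice


class SSTFormerSketch(nn.Module):
    """Slices the sequence, encodes each slice, masks a random subset of
    slice embeddings (MAE-style pretext), and aggregates across slices."""

    def __init__(self, dim=64, mask_ratio=0.5):
        super().__init__()
        self.slice_encoder = SliceEncoder(dim=dim)
        self.mask_token = nn.Parameter(torch.zeros(dim))
        self.temporal_agg = nn.GRU(dim, dim, batch_first=True)  # assumed aggregator
        self.mask_ratio = mask_ratio

    def forward(self, seq):
        # seq: (batch, num_slices, frames_per_slice, num_joints, 2)
        b, s = seq.shape[:2]
        z = self.slice_encoder(seq.flatten(0, 1)).reshape(b, s, -1)
        if self.training:  # replace random slices with a mask token
            mask = torch.rand(b, s, device=seq.device) < self.mask_ratio
            z = torch.where(mask.unsqueeze(-1), self.mask_token, z)
        out, _ = self.temporal_agg(z)  # temporal aggregation between slices
        return out.mean(dim=1)         # sequence-level behavior embedding


# Usage: 2 sequences of 16 slices, 8 frames/slice, 10 joints -> (2, 64)
emb = SSTFormerSketch()(torch.randn(2, 16, 8, 10, 2))
```

In the full masked-modeling setup, a decoder would reconstruct the masked slices from the visible ones to form the self-supervised objective; that reconstruction head is omitted here for brevity.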

Submitted: Mar 23, 2024