Paper ID: 2207.08609
ExAgt: Expert-guided Augmentation for Representation Learning of Traffic Scenarios
Lakshman Balasubramanian, Jonas Wurst, Robin Egolf, Michael Botsch, Wolfgang Utschick, Ke Deng
In recent years, representation learning has been addressed with self-supervised learning methods. In cross-view prediction, the input data is augmented into two distorted views and an encoder learns representations that are invariant to these distortions. Augmentation is one of the key components of cross-view self-supervised learning frameworks for learning visual representations. This paper presents ExAgt, a novel method that incorporates expert knowledge to augment traffic scenarios and improve the learnt representations without any human annotation. The expert-guided augmentations are generated in an automated fashion based on the infrastructure, the interactions between the ego vehicle and the other traffic participants, and an ideal sensor model. ExAgt is applied in two state-of-the-art cross-view prediction methods, and the learnt representations are evaluated on downstream tasks such as classification and clustering. Results show that ExAgt improves representation learning compared to using only standard augmentations and provides better representation-space stability. The code is available at https://github.com/lab176344/ExAgt.
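As a rough illustration of the cross-view prediction setup the abstract describes, the sketch below shows one training step in which two augmented views of the same scenario are encoded and pulled together by an invariance loss. This is a minimal, hedged example, not the authors' implementation: `augment_expert`, `augment_standard`, `encoder`, and `projector` are hypothetical placeholders, and the negative-cosine objective is a generic BYOL/SimSiam-style surrogate rather than the specific frameworks used in the paper.

```python
import torch.nn.functional as F


def cross_view_step(batch, augment_expert, augment_standard,
                    encoder, projector, optimizer):
    """One cross-view training step on a batch of traffic scenarios.

    `augment_expert` stands in for an expert-guided augmentation pipeline
    (e.g. infrastructure-, interaction-, or sensor-based distortions) and
    `augment_standard` for a conventional one; both are assumptions here.
    """
    view_a = augment_expert(batch)      # first distorted view
    view_b = augment_standard(batch)    # second distorted view

    z_a = projector(encoder(view_a))    # representation of view A
    z_b = projector(encoder(view_b))    # representation of view B

    # Invariance objective: maximize agreement between the two views of the
    # same scenario (symmetric negative cosine similarity with stop-gradient).
    loss = -(F.cosine_similarity(z_a, z_b.detach(), dim=-1).mean()
             + F.cosine_similarity(z_b, z_a.detach(), dim=-1).mean()) / 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the augmentation pipeline is the only part that changes when swapping standard augmentations for expert-guided ones, which mirrors how ExAgt plugs into existing cross-view methods.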
Submitted: Jul 18, 2022