Semantic Scene Graph
Semantic scene graphs represent a scene as a structured graph, with objects as nodes and their relationships as typed edges, providing a richer, more interpretable description of visual data than flat, unstructured representations. Current research focuses on generating these graphs from various input modalities (images, point clouds, RGB-D data) using techniques such as graph neural networks (GNNs), transformers, and message passing, often incorporating geometric and temporal cues. The representation is proving valuable across autonomous driving, robotics, and medical image analysis, where it enables more sophisticated scene reasoning and improves performance on tasks such as object tracking, motion prediction, and scene understanding.
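The node/edge structure described above can be sketched in a few lines of plain Python. This is a minimal, illustrative sketch, not the implementation from any of the listed papers: the names `ObjectNode`, `SceneGraph`, and `message_pass` are hypothetical, and the mean-aggregation update is only a toy stand-in for a learned GNN / message-passing layer.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ObjectNode:
    """An object detected in the scene (hypothetical node schema)."""
    node_id: int
    category: str                 # e.g. "car", "pedestrian"
    feature: List[float] = field(default_factory=list)  # stand-in for a learned embedding

class SceneGraph:
    """Objects as nodes, typed relations as directed (subject, predicate, object) edges."""

    def __init__(self) -> None:
        self.nodes: Dict[int, ObjectNode] = {}
        self.edges: List[Tuple[int, str, int]] = []

    def add_node(self, node: ObjectNode) -> None:
        self.nodes[node.node_id] = node

    def add_relation(self, subj: int, predicate: str, obj: int) -> None:
        self.edges.append((subj, predicate, obj))

    def neighbors(self, node_id: int) -> List[int]:
        """Ids of nodes connected to node_id by an incoming or outgoing edge."""
        out = [o for s, _, o in self.edges if s == node_id]
        inc = [s for s, _, o in self.edges if o == node_id]
        return out + inc

    def message_pass(self) -> Dict[int, List[float]]:
        """One round of mean aggregation over neighbor features --
        a toy stand-in for a GNN / message-passing update."""
        updated: Dict[int, List[float]] = {}
        for nid, node in self.nodes.items():
            msgs = [self.nodes[m].feature for m in self.neighbors(nid)]
            if not msgs:
                updated[nid] = list(node.feature)
                continue
            dim = len(node.feature)
            agg = [sum(f[d] for f in msgs) / len(msgs) for d in range(dim)]
            # combine the node's own feature with the aggregated neighbor message
            updated[nid] = [0.5 * (a + b) for a, b in zip(node.feature, agg)]
        return updated

# A tiny traffic scene: a car approaching a crosswalk that a pedestrian is on.
g = SceneGraph()
g.add_node(ObjectNode(0, "car", [1.0, 0.0]))
g.add_node(ObjectNode(1, "pedestrian", [0.0, 1.0]))
g.add_node(ObjectNode(2, "crosswalk", [0.0, 0.0]))
g.add_relation(0, "approaching", 2)
g.add_relation(1, "on", 2)

print(sorted(g.neighbors(2)))  # -> [0, 1]
```

Querying `neighbors` already supports simple relational reasoning (which objects interact with the crosswalk?), and repeated `message_pass` rounds propagate context through the graph, which is the basic mechanism the GNN-based approaches above learn end to end.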
Papers
Self Supervised Clustering of Traffic Scenes using Graph Representations
Maximilian Zipfl, Moritz Jarosch, J. Marius Zöllner
Relation-based Motion Prediction using Traffic Scene Graphs
Maximilian Zipfl, Felix Hertlein, Achim Rettinger, Steffen Thoma, Lavdim Halilaj, Juergen Luettin, Stefan Schmid, Cory Henson