Paper ID: 2312.04970
Datasets, Models, and Algorithms for Multi-Sensor, Multi-agent Autonomy Using AVstack
R. Spencer Hallyburton, Miroslav Pajic
Recent advancements in assured autonomy have brought autonomous vehicles (AVs) closer to fruition. Despite strong evidence that multi-sensor, multi-agent (MSMA) systems can yield substantial improvements in the safety and security of AVs, there exists no unified framework for developing and testing representative MSMA configurations. Using the recently released autonomy platform AVstack, this work proposes a new framework for datasets, models, and algorithms in MSMA autonomy. Instead of releasing a single dataset, we deploy a dataset generation pipeline capable of generating unlimited volumes of ground-truth-labeled MSMA perception data. The data derive from cameras (semantic segmentation, RGB, depth), LiDAR, and radar, and are sourced from ground vehicles and, for the first time, infrastructure platforms. Combining the labeled-data generation pipeline with AVstack's third-party integrations yields a model training framework for multi-sensor perception in both vehicle and infrastructure applications. We release the framework and pretrained models as open source. Finally, the dataset and model training pipelines culminate in insightful multi-agent case studies. Whereas previous works relied on specific, ego-centric multi-agent designs, our framework treats the collaborative autonomy space as a network of noisy, time-correlated sensors. Within this environment, we quantify the impact of the network topology and the data fusion pipeline on an agent's situational awareness.
Submitted: Dec 8, 2023
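
To illustrate the abstract's final point, the sketch below models collaborative agents as a network of noisy, time-correlated sensors and measures how the communication topology affects estimation error. This is a minimal, self-contained toy model, not AVstack's actual API: the `simulate` function, the AR(1) noise model, the averaging-based fusion rule, and the parameters `sigma` and `rho` are assumptions chosen for clarity.

```python
# Minimal sketch (illustrative assumptions, not AVstack's API): agents are
# scalar sensors with time-correlated AR(1) noise; each agent fuses its own
# measurement with its graph neighbors' by simple averaging.
import numpy as np

rng = np.random.default_rng(0)

def simulate(adjacency, n_steps=200, sigma=1.0, rho=0.8):
    """Return the mean squared estimation error across agents and time.

    adjacency : (n, n) 0/1 matrix of communication links between agents.
    sigma     : stationary standard deviation of each agent's sensor noise.
    rho       : AR(1) coefficient controlling time correlation of the noise.
    """
    n_agents = adjacency.shape[0]
    truth = 0.0                                  # static ground-truth state
    noise = rng.normal(0.0, sigma, n_agents)     # initial noise per agent
    errors = []
    for _ in range(n_steps):
        # Time-correlated measurement noise: e_t = rho * e_{t-1} + w_t,
        # with w_t scaled so the stationary variance stays sigma^2.
        noise = rho * noise + rng.normal(0.0, sigma * np.sqrt(1 - rho**2), n_agents)
        meas = truth + noise
        # Each agent averages measurements from itself and its neighbors.
        fuse = adjacency + np.eye(n_agents)
        est = (fuse @ meas) / fuse.sum(axis=1)
        errors.append(np.mean((est - truth) ** 2))
    return float(np.mean(errors))

n = 6
topologies = {
    "isolated": np.zeros((n, n)),
    "ring": np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1),
    "fully connected": np.ones((n, n)) - np.eye(n),
}
for name, adj in topologies.items():
    print(f"{name:16s} MSE = {simulate(adj):.4f}")
```

In this toy model the noise is correlated over time but independent across agents, so denser topologies average more independent samples per step and the mean squared error falls roughly as 1/(degree + 1); the paper's case studies quantify analogous topology and fusion effects in the full MSMA setting.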