Paper ID: 2112.04688

Learning Generalizable Multi-Lane Mixed-Autonomy Behaviors in Single Lane Representations of Traffic

Abdul Rahman Kreidieh, Yibo Zhao, Samyak Parajuli, Alexandre Bayen

Reinforcement learning techniques can provide substantial insights into the desired behaviors of future autonomous driving systems. By optimizing for societal metrics of traffic, such as increased throughput and reduced energy consumption, these methods can derive maneuvers that, if adopted by even a small fraction of vehicles, may significantly improve traffic conditions for all vehicles involved. In practice, however, such methods are hindered by the difficulty of designing efficient and accurate models of traffic, as well as by the challenges of optimizing the behaviors of dozens of interacting agents. In response to these challenges, this paper tackles the problem of learning generalizable traffic control strategies in simple representations of vehicle driving dynamics. In particular, we use mixed-autonomy ring roads as depictions of the instabilities that give rise to congestion. Within this setting, we design a curriculum learning paradigm that exploits the natural extensibility of the network to effectively learn behaviors that reduce congestion over long horizons. Next, we study the implications of modeling lane changing on the transferability of policies. Our findings suggest that introducing lane-change behaviors that even approximately match trends in more complex systems can significantly improve the generalizability of subsequently learned models to more accurate multi-lane models of traffic.
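
To illustrate the curriculum idea described above, the sketch below scales a ring-road training task in proportional stages, so that each stage is a larger copy of the previous one and earlier policies can serve as a warm start. This is a minimal sketch under assumed parameters: the specific ring lengths, vehicle counts, horizons, and the helper names (RingStage, build_curriculum, train_policy) are illustrative placeholders, not the paper's released code.

    # Minimal sketch of a length-based curriculum over ring-road training tasks.
    # All parameter values and helper names below are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class RingStage:
        """One curriculum stage: a ring length and its mixed-autonomy fleet."""
        ring_length_m: float   # circumference of the ring road
        num_human: int         # human-driven (e.g., car-following-model) vehicles
        num_automated: int     # vehicles controlled by the learned policy
        horizon_steps: int     # rollout horizon used for this stage

    def build_curriculum(base_length=250.0, base_humans=21, base_avs=1, stages=4):
        """Scale the ring and the fleet together, exploiting the network's
        natural extensibility: each stage is a proportionally larger copy
        of the previous one."""
        curriculum = []
        for k in range(1, stages + 1):
            curriculum.append(RingStage(
                ring_length_m=base_length * k,
                num_human=base_humans * k,
                num_automated=base_avs * k,
                horizon_steps=3000 * k,  # longer rings warrant longer horizons
            ))
        return curriculum

    if __name__ == "__main__":
        for i, stage in enumerate(build_curriculum()):
            # In an actual pipeline, a trainer would be invoked here, warm-started
            # from the previous stage's policy, e.g. train_policy(stage, init=prev).
            print(f"stage {i}: {stage}")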

Submitted: Dec 9, 2021