RGB-Event Fusion
RGB-event fusion research aims to combine the complementary strengths of conventional RGB cameras (rich color and texture information) and event cameras (high temporal resolution, high dynamic range, and sensitivity to motion) for improved perception in challenging scenarios. Current work focuses on robust multimodal architectures, often transformer-based networks or spiking neural networks, that fuse the dense frame stream with the sparse, asynchronous event stream for tasks such as object detection and tracking, particularly in low-light conditions and for high-speed objects. These advances are enabling more reliable and efficient perception systems in fields such as autonomous driving, assistive robotics, and drone detection. The release of new, large-scale, publicly available RGB-event datasets is another significant focus, facilitating further research and algorithm development.
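To make the fusion problem concrete, the sketch below is a minimal, hypothetical example rather than a reconstruction of any particular published method: it bins asynchronous events into a fixed-size voxel grid aligned with the RGB frame, then lets RGB tokens attend to event tokens through a single cross-attention layer. It assumes PyTorch; all module names, dimensions, and the toy data are illustrative.

```python
import torch
import torch.nn as nn


def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate (t, x, y, polarity) events into a (num_bins, H, W) grid."""
    grid = torch.zeros(num_bins, height, width)
    if events.numel() == 0:
        return grid
    t, x, y, p = events[:, 0], events[:, 1].long(), events[:, 2].long(), events[:, 3]
    # Normalize timestamps into [0, num_bins - 1] and assign each event to a bin.
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9) * (num_bins - 1)
    b = t_norm.long().clamp(0, num_bins - 1)
    # Signed accumulation: ON events add +1, OFF events add -1.
    grid.index_put_((b, y, x), p * 2.0 - 1.0, accumulate=True)
    return grid


class CrossAttentionFusion(nn.Module):
    """RGB tokens (queries) attend to event tokens (keys/values)."""

    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_tokens, event_tokens):
        fused, _ = self.attn(rgb_tokens, event_tokens, event_tokens)
        return self.norm(rgb_tokens + fused)  # residual keeps the original RGB signal


if __name__ == "__main__":
    # Toy event stream: 1000 random (t, x, y, polarity) tuples on a 64x48 sensor.
    events = torch.rand(1000, 4)
    events[:, 1] *= 63                                    # x coordinate
    events[:, 2] *= 47                                    # y coordinate
    events[:, 3] = torch.randint(0, 2, (1000,)).float()   # polarity (0 or 1)
    voxels = events_to_voxel_grid(events, num_bins=5, height=48, width=64)

    rgb_tokens = torch.rand(1, 100, 64)    # stand-in for RGB patch embeddings
    event_tokens = torch.rand(1, 100, 64)  # stand-in for event voxel-grid embeddings
    fused = CrossAttentionFusion()(rgb_tokens, event_tokens)
    print(voxels.shape, fused.shape)       # (5, 48, 64) and (1, 100, 64)
```

The two ingredients reflect common design choices in this area: voxel-grid (or event-frame) encoding turns the sparse, asynchronous event stream into a dense tensor that standard backbones can process, and cross-attention lets the network weight event evidence differently at each RGB location, which is where much of the benefit appears in low-light and high-speed settings.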