Neuromorphic Event-Based Vision
Neuromorphic event-based vision leverages asynchronous sensors that mimic biological retinas, reporting per-pixel changes in light intensity rather than full frames, which yields advantages in latency, power efficiency, and dynamic range over traditional frame-based cameras. Current research focuses on algorithms and models, including spiking neural networks (SNNs) and transformers, that process these event streams for tasks such as object detection, tracking, and 3D reconstruction, often fusing event data with complementary RGB or LiDAR information. The field is significant for its potential to enable low-power, high-performance vision systems in applications ranging from autonomous driving and robotics to human-computer interaction and biomedical sensing.
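To make the data model concrete, here is a minimal sketch of how an asynchronous event stream is often represented and converted into a dense image-like tensor for downstream networks. The `events_to_frame` function and the (x, y, timestamp, polarity) tuple layout are illustrative assumptions, not an API from any of the listed papers; real pipelines frequently use richer encodings such as time surfaces or voxel grids.

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a stream of (x, y, timestamp, polarity) events into a
    signed 2D frame: +1 per brightness-increase event, -1 per decrease.

    This is a hypothetical, minimal representation for illustration;
    it discards timing information that SNNs and time-surface
    encodings would preserve.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t, polarity in events:
        frame[y, x] += 1 if polarity > 0 else -1
    return frame

# Synthetic stream from a 4x4 sensor: two ON events at (x=1, y=2)
# and one OFF event at (x=3, y=0).
events = [(1, 2, 0.001, 1), (1, 2, 0.002, 1), (3, 0, 0.003, -1)]
frame = events_to_frame(events, height=4, width=4)
```

Because pixels fire only where intensity changes, such frames are sparse: static background contributes no events, which is the source of the power and bandwidth savings described above.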
Papers
Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection
Xinhao Luo, Man Yao, Yuhong Chou, Bo Xu, Guoqi Li
Spiking-DD: Neuromorphic Event Camera based Driver Distraction Detection with Spiking Neural Network
Waseem Shariff, Paul Kielty, Joseph Lemley, Peter Corcoran