Neuromorphic Event-Based Vision
Neuromorphic event-based vision uses asynchronous sensors that mimic the biological retina by reporting per-pixel changes in light intensity, offering advantages in speed, power efficiency, and dynamic range over traditional frame-based cameras. Current research focuses on algorithms and models, including spiking neural networks (SNNs) and transformers, that process these event streams for tasks such as object detection, tracking, and 3D reconstruction, often fusing event data with complementary RGB or LiDAR information. The field is significant for its potential to enable low-power, high-performance vision systems in applications ranging from autonomous driving and robotics to human-computer interaction and biomedical sensing.
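Since the paragraph above describes how event streams are structured and consumed by downstream models, here is a minimal sketch of one common preprocessing step: accumulating events into a dense frame. The (t, x, y, polarity) tuple layout and the events_to_frame helper are illustrative assumptions for this sketch, not code taken from any of the papers listed below.

```python
import numpy as np

# Illustrative assumption: an event camera emits a sparse, asynchronous
# stream of (timestamp, x, y, polarity) tuples, one per pixel whose
# brightness changed by a threshold. Accumulating a short time window
# into a 2D frame lets frame-based models (CNNs, transformers) consume
# the stream.

def events_to_frame(events, height, width, t_start, t_end):
    """Accumulate signed event polarities in [t_start, t_end) into a frame.

    events: array of shape (N, 4) with columns (t, x, y, polarity),
            where polarity is -1 or +1.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    t, x, y, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    mask = (t >= t_start) & (t < t_end)
    # np.add.at correctly accumulates repeated (y, x) indices.
    np.add.at(frame, (y[mask].astype(int), x[mask].astype(int)), p[mask])
    return frame

# Example: three synthetic events on a 4x4 sensor.
events = np.array([
    [0.001, 1, 2, +1],  # brightness increase at (x=1, y=2)
    [0.002, 1, 2, +1],  # repeated event at the same pixel
    [0.003, 3, 0, -1],  # brightness decrease at (x=3, y=0)
])
print(events_to_frame(events, height=4, width=4, t_start=0.0, t_end=0.01))
```

In practice, richer representations (voxel grids, time surfaces) preserve more temporal information than this simple polarity sum, but the windowing-and-accumulation pattern is the same.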
Papers
Neuromorphic Camera Denoising using Graph Neural Network-driven Transformers
Yusra Alkendi, Rana Azzam, Abdulla Ayyad, Sajid Javed, Lakmal Seneviratne, Yahya Zweiri
Enhanced Frame and Event-Based Simulator and Event-Based Video Interpolation Network
Adam Radomski, Andreas Georgiou, Thomas Debrunner, Chenghan Li, Luca Longinotti, Minwon Seo, Moosung Kwak, Chang-Woo Shin, Paul K. J. Park, Hyunsurk Eric Ryu, Kynan Eng