Visual Sensor
Visual sensors are central to enabling machines to perceive their environment, and research in this area focuses on improving accuracy, efficiency, and robustness across diverse applications. Current efforts concentrate on developing novel sensor designs (e.g., event-based cameras), processing sensor data with deep learning methods (including transformers and spiking neural networks), and fusing multiple sensor modalities (e.g., vision and inertial measurements) for richer perception. These advances are driving progress in autonomous driving, robotics, and human-computer interaction, yielding more reliable and efficient systems in real-world scenarios.
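As a concrete illustration of the event-based sensing mentioned above, the short Python sketch below shows one common preprocessing step: accumulating a stream of asynchronous camera events into a fixed-size frame that a downstream network can consume. This is a generic example, not drawn from the listed papers; the field names (x, y, polarity) and array shapes are assumptions made for illustration.

import numpy as np

def events_to_frame(xs, ys, polarities, height, width):
    """Accumulate events into a signed 2D histogram (ON events +1, OFF events -1)."""
    frame = np.zeros((height, width), dtype=np.float32)
    # Map polarity {0, 1} to {-1.0, +1.0} so ON and OFF events contribute opposite signs.
    signs = np.where(polarities > 0, 1.0, -1.0)
    # Scatter-add: correctly accumulates repeated events at the same pixel coordinate.
    np.add.at(frame, (ys, xs), signs)
    return frame

# Example: a handful of synthetic events on a 4x6 sensor.
xs = np.array([0, 1, 1, 5])
ys = np.array([0, 2, 2, 3])
polarities = np.array([1, 0, 0, 1])
print(events_to_frame(xs, ys, polarities, height=4, width=6))

Signed accumulation is only one of several common event representations (others keep per-polarity channels or timestamps); it is used here because it keeps the sketch minimal.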
Papers
Real-Time Driver Monitoring Systems through Modality and View Analysis
Yiming Ma, Victor Sanchez, Soodeh Nikan, Devesh Upadhyay, Bhushan Atote, Tanaya Guha
A Symbolic Representation of Human Posture for Interpretable Learning and Reasoning
Richard G. Freedman, Joseph B. Mueller, Jack Ladwig, Steven Johnston, David McDonald, Helen Wauck, Ruta Wheelock, Hayley Borck