Human Sensing
Human sensing research develops technologies that accurately and reliably perceive human activities and states, primarily for applications in healthcare, robotics, and autonomous systems. Current efforts concentrate on improving the robustness and efficiency of visual, acoustic, and tactile sensing modalities, often employing deep learning methods, from conventional neural networks to generative AI, for data processing and feature extraction. These advances are crucial for making human-robot interaction safer, improving healthcare diagnostics through vocal biomarkers, and enabling more sophisticated applications such as autonomous driving and smart environments.
Papers
Nested ResNet: A Vision-Based Method for Detecting the Sensing Area of a Drop-in Gamma Probe
Songyu Xu, Yicheng Hu, Jionglong Su, Daniel Elson, Baoru Huang
MiniTac: An Ultra-Compact 8 mm Vision-Based Tactile Sensor for Enhanced Palpation in Robot-Assisted Minimally Invasive Surgery
Wanlin Li, Zihang Zhao, Leiyao Cui, Weiyi Zhang, Hangxin Liu, Li-An Li, Yixin Zhu
Sensor Fusion for Autonomous Indoor UAV Navigation in Confined Spaces
Alice James, Avishkar Seth, Endrowednes Kuantama, Subhas Mukhopadhyay, Richard Han
Aerodynamics and Sensing Analysis for Efficient Drone-Based Parcel Delivery
Avishkar Seth, Alice James, Endrowednes Kuantama, Subhas Mukhopadhyay, Richard Han