Human Sensing
Human sensing research develops technologies to perceive human activities and states accurately and reliably, primarily for applications in healthcare, robotics, and autonomous systems. Current efforts concentrate on improving the robustness and efficiency of sensing modalities, including vision, acoustics, and tactile sensing, often employing deep learning and generative models for data processing and feature extraction. These advances are crucial for safer human-robot interaction, for healthcare diagnostics based on signals such as vocal biomarkers, and for more sophisticated applications in areas like autonomous driving and smart environments.
Papers
Power Plant Detection for Energy Estimation using GIS with Remote Sensing, CNN & Vision Transformers
Blessing Austin-Gabriel, Cristian Noriega Monsalve, Aparna S. Varde
KNN-MMD: Cross Domain Wireless Sensing via Local Distribution Alignment
Zijian Zhao, Zhijie Cai, Tingwei Chen, Xiaoyang Li, Hang Li, Qimei Chen, Guangxu Zhu
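The KNN-MMD entry above builds on maximum mean discrepancy (MMD), a standard measure of the distance between two sample distributions used for cross-domain alignment. A minimal sketch of a biased squared-MMD estimate with an RBF kernel, purely as an illustration of the general technique (the function names and `gamma` parameter here are illustrative, not the paper's implementation):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances, mapped through an RBF kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2_biased(X, Y, gamma=1.0):
    # Biased estimate of squared MMD between samples X (source domain)
    # and Y (target domain): mean k(X,X) + mean k(Y,Y) - 2 * mean k(X,Y).
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())
```

Identical samples yield a squared MMD of zero, while samples drawn from shifted distributions yield a positive value; domain-alignment methods minimize such a term between source and target features.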
Nested ResNet: A Vision-Based Method for Detecting the Sensing Area of a Drop-in Gamma Probe
Songyu Xu, Yicheng Hu, Jionglong Su, Daniel Elson, Baoru Huang
MiniTac: An Ultra-Compact 8 mm Vision-Based Tactile Sensor for Enhanced Palpation in Robot-Assisted Minimally Invasive Surgery
Wanlin Li, Zihang Zhao, Leiyao Cui, Weiyi Zhang, Hangxin Liu, Li-An Li, Yixin Zhu