Multisensory Data
Multisensory data research focuses on integrating information from multiple sensor modalities (e.g., vision, audio, touch) to improve perception and decision-making in robots and other systems. Current research emphasizes robust models, such as deep autoencoders and multimodal large language models, that perform anomaly detection, object recognition, and action recognition on these fused data streams. This work advances robotics, human-computer interaction, and IoT applications by enabling more accurate and reliable systems that operate effectively in complex, real-world environments. The construction of large-scale multisensory datasets is another key focus, since such datasets are needed to train and evaluate these models.
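To make the fusion-based anomaly detection concrete, below is a minimal sketch in PyTorch of a multimodal deep autoencoder: each modality is encoded separately, the per-modality codes are fused into a joint latent vector, and a high joint reconstruction error flags an anomalous reading. The modality names, feature dimensions, layer sizes, and the MultimodalAutoencoder/anomaly_score names are illustrative assumptions, not taken from any specific system in the literature.

import torch
import torch.nn as nn

# Hypothetical per-modality feature dimensions (e.g., from pretrained extractors).
MODALITY_DIMS = {"vision": 512, "audio": 128, "touch": 64}

class MultimodalAutoencoder(nn.Module):
    def __init__(self, dims, latent_dim=32):
        super().__init__()
        # One small encoder per sensor modality, mapping into a shared-size code.
        self.encoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent_dim))
            for name, d in dims.items()})
        # Fuse per-modality codes by concatenation into one joint latent vector.
        self.fuse = nn.Linear(latent_dim * len(dims), latent_dim)
        # One decoder per modality, reconstructing its stream from the joint code.
        self.decoders = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, d))
            for name, d in dims.items()})

    def forward(self, inputs):
        # Sort by modality name so concatenation order is deterministic.
        codes = [self.encoders[name](x) for name, x in sorted(inputs.items())]
        joint = self.fuse(torch.cat(codes, dim=-1))
        return {name: self.decoders[name](joint) for name in inputs}

def anomaly_score(model, inputs):
    # Per-sample reconstruction error summed over modalities; samples scoring
    # above a threshold calibrated on normal data are flagged as anomalies.
    recon = model(inputs)
    return sum(((recon[name] - x) ** 2).mean(dim=-1) for name, x in inputs.items())

# Example usage on a random batch of 8 fused sensor readings.
model = MultimodalAutoencoder(MODALITY_DIMS)
batch = {name: torch.randn(8, d) for name, d in MODALITY_DIMS.items()}
scores = anomaly_score(model, batch)  # shape: (8,)

Training such a model only on normal multisensory recordings is the usual design choice: the autoencoder learns to reconstruct typical cross-modal correlations, so readings that break those correlations reconstruct poorly and score high.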