Multisensory
Multisensory research explores how organisms integrate information from multiple sensory modalities (e.g., vision, hearing, touch) into a unified percept of the world. Current work focuses on computational models, including neural networks and transformers, that aim to understand and replicate this integration; these models are often trained on large datasets of multisensory interactions for tasks such as object recognition, scene understanding, and robotic control. The field matters for artificial intelligence, particularly robotics and virtual/augmented reality, where combining modalities enables more robust, human-like interaction with the environment. It also yields insights into the cognitive mechanisms underlying human perception.
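One well-studied computational account of this integration is reliability-weighted (maximum-likelihood) cue combination: each modality contributes an estimate weighted by its inverse variance, so the more reliable sense dominates the fused percept. The sketch below is a minimal illustration of that idea in plain Python; the function name and example values are illustrative, not drawn from any particular library.

```python
# Reliability-weighted cue integration: a classic model of multisensory
# fusion (e.g., combining visual and haptic estimates of an object's size).
# Each cue is a (estimate, variance) pair; the fused estimate weights each
# cue by its reliability (inverse variance), and the fused variance is the
# inverse of the summed reliabilities.

def fuse_cues(cues):
    """Maximum-likelihood fusion of independent Gaussian cues.

    cues: list of (estimate, variance) pairs.
    Returns (fused_estimate, fused_variance).
    """
    reliabilities = [1.0 / var for _, var in cues]
    total = sum(reliabilities)
    fused = sum(est * r for (est, _), r in zip(cues, reliabilities)) / total
    return fused, 1.0 / total

# A precise visual cue (low variance) dominates a noisy haptic cue.
vision = (10.0, 1.0)   # estimate, variance
touch = (14.0, 4.0)
est, var = fuse_cues([vision, touch])
# est = (10/1 + 14/4) / (1/1 + 1/4) = 10.8; var = 1 / 1.25 = 0.8
```

Note that the fused variance (0.8) is lower than either cue's variance alone, capturing why combining senses yields a more precise percept than any single modality.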