Sensory Modality
Sensory modality research focuses on how artificial intelligence systems can integrate and reason across multiple sensory inputs (e.g., vision, touch, sound) to achieve a more comprehensive understanding of the environment, mirroring human perception. Current research emphasizes the development of multimodal foundation models, often employing transformer architectures and contrastive learning techniques, to fuse information from diverse sources and learn unified representations of object properties. This work is crucial for advancing robotics, improving human-AI interaction, and furthering our understanding of cognitive development, with applications ranging from assistive technologies to more robust autonomous systems.
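As a rough illustration of the contrastive-learning approach mentioned above, the sketch below aligns paired observations from two sensory modalities (say, vision and touch) in a shared embedding space. It is a minimal sketch under stated assumptions: the encoder structure, feature dimensions, and temperature value are illustrative choices, not taken from any specific paper in this collection.

```python
# Minimal sketch of CLIP-style contrastive alignment between two modalities
# (e.g., vision and touch). Names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoModalityEncoder(nn.Module):
    """Projects two modality-specific feature vectors into a shared space."""
    def __init__(self, dim_a: int, dim_b: int, embed_dim: int = 128):
        super().__init__()
        self.proj_a = nn.Linear(dim_a, embed_dim)   # e.g., vision features
        self.proj_b = nn.Linear(dim_b, embed_dim)   # e.g., tactile features
        self.temperature = nn.Parameter(torch.tensor(0.07))

    def forward(self, feats_a: torch.Tensor, feats_b: torch.Tensor):
        # L2-normalize so the dot product is cosine similarity.
        za = F.normalize(self.proj_a(feats_a), dim=-1)
        zb = F.normalize(self.proj_b(feats_b), dim=-1)
        # Pairwise similarity logits between all items in the batch.
        logits = za @ zb.t() / self.temperature
        # Matching pairs lie on the diagonal; treat alignment as classification.
        targets = torch.arange(za.size(0), device=za.device)
        loss = (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2
        return loss

# Usage with random stand-in features for a batch of 8 paired observations.
model = TwoModalityEncoder(dim_a=512, dim_b=256)
vision = torch.randn(8, 512)
touch = torch.randn(8, 256)
print(model(vision, touch))
```

Training on such a symmetric cross-entropy objective pulls embeddings of co-occurring observations together and pushes mismatched pairs apart, which is one common way multimodal foundation models learn a unified representation of object properties.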