Multimodal Communication
Multimodal communication research focuses on understanding and replicating how humans integrate multiple communication channels (speech, gestures, facial expressions, and so on) for richer interaction. Current work emphasizes models, often built on transformer networks and graph convolutional networks, that detect and interpret these multimodal signals in contexts such as human-robot interaction and conversational agents. Such research is key to making human-computer interaction more natural and effective, enabling more intuitive and empathetic AI systems and a deeper understanding of human communication itself.
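To make the idea of integrating channels concrete, the sketch below shows one simple fusion strategy: late fusion, where each modality (speech, gesture, face) is first encoded into its own feature vector and the vectors are then combined with softmax-normalized relevance weights. This is a minimal illustration, not any specific model from the literature; the function names, the toy embeddings, and the scalar relevance scores are all hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_modalities(features, scores):
    """Late fusion sketch: weight each modality's feature vector by a
    softmax over per-modality relevance scores, then sum the results.

    features: dict mapping modality name -> 1-D feature vector (equal length)
    scores:   dict mapping modality name -> scalar relevance score
    Returns the fused vector and the weight assigned to each modality.
    """
    names = sorted(features)
    w = softmax(np.array([scores[n] for n in names]))
    fused = sum(wi * features[n] for wi, n in zip(w, names))
    return fused, dict(zip(names, w))

# Toy example: three modalities with 4-dim embeddings (hypothetical data).
feats = {
    "speech":  np.array([1.0, 0.0, 0.0, 0.0]),
    "gesture": np.array([0.0, 1.0, 0.0, 0.0]),
    "face":    np.array([0.0, 0.0, 1.0, 0.0]),
}
scores = {"speech": 2.0, "gesture": 1.0, "face": 0.0}
fused, weights = fuse_modalities(feats, scores)
```

In practice the fixed relevance scores would be replaced by learned attention (e.g. a transformer's cross-modal attention), but the weighting-and-combining step is the same basic operation.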